
Earthquake engineering


Earthquake engineering is an interdisciplinary branch of engineering that designs and analyzes structures, such as buildings and bridges, with earthquakes in mind. Its overall goal is to make such structures more resistant to earthquakes. An earthquake (or seismic) engineer aims to construct structures that will not be damaged in minor shaking and will avoid serious damage or collapse in a major earthquake. A properly engineered structure does not necessarily have to be extremely strong or expensive. It has to be properly designed to withstand the seismic effects while sustaining an acceptable level of damage.

Definition


Earthquake engineering is a scientific field concerned with protecting society, the natural environment, and the man-made environment from earthquakes by limiting the seismic risk to socio-economically acceptable levels.[1] Traditionally, it has been narrowly defined as the study of the behavior of structures and geo-structures subject to seismic loading; it is considered a subset of structural engineering, geotechnical engineering, mechanical engineering, chemical engineering, applied physics, etc. However, the tremendous costs experienced in recent earthquakes have led to an expansion of its scope to encompass disciplines from the wider field of civil engineering, mechanical engineering, nuclear engineering, and from the social sciences, especially sociology, political science, economics, and finance.[2][3]

The main objectives of earthquake engineering are:

  • Foresee the potential consequences of strong earthquakes on urban areas and civil infrastructure.
  • Design, construct, and maintain structures that perform under earthquake exposure up to expectations and in compliance with building codes.[4]
Shake-table crash testing of a regular building model (left) and a base-isolated building model (right)[5] at UCSD

Seismic loading

Tokyo Skytree, equipped with a tuned mass damper, is the world's tallest tower and is the world's third tallest structure.

Seismic loading means the application of an earthquake-generated excitation to a structure (or geo-structure). It occurs at the contact surfaces of a structure with the ground,[6] with adjacent structures,[7] or with gravity waves from a tsunami. The loading expected at a given location on the Earth's surface is estimated by engineering seismology and is related to the seismic hazard of the location.

Seismic performance


Earthquake or seismic performance defines a structure's ability to sustain its main functions, such as its safety and serviceability, at and after a particular earthquake exposure. A structure is normally considered safe if it does not endanger the lives and well-being of those in or around it by partially or completely collapsing. A structure may be considered serviceable if it is able to fulfill its operational functions for which it was designed.

Basic concepts of earthquake engineering, implemented in the major building codes, assume that a building should survive a rare, very severe earthquake by sustaining significant damage but without globally collapsing.[8] On the other hand, it should remain operational during more frequent, less severe seismic events.

Seismic performance assessment


Engineers need to know the quantified level of the actual or anticipated seismic performance associated with the direct damage to an individual building subject to a specified ground shaking. Such an assessment may be performed either experimentally or analytically.[citation needed]

Experimental assessment


Experimental evaluations are expensive tests that are typically done by placing a (scaled) model of the structure on a shake-table that simulates the earth shaking and observing its behavior.[9] Such kinds of experiments were first performed more than a century ago.[10] Only recently has it become possible to perform 1:1 scale testing on full structures.

Due to the costly nature of such tests, they tend to be used mainly for understanding the seismic behavior of structures, validating models and verifying analysis methods. Thus, once properly validated, computational models and numerical procedures tend to carry the major burden for the seismic performance assessment of structures.

Analytical/Numerical assessment

Snapshot from shake-table video of a 6-story non-ductile concrete building destructive testing

Seismic performance assessment or seismic structural analysis is a powerful tool of earthquake engineering which utilizes detailed modelling of the structure together with methods of structural analysis to gain a better understanding of seismic performance of building and non-building structures. The technique as a formal concept is a relatively recent development.

In general, seismic structural analysis is based on the methods of structural dynamics.[11] For decades, the most prominent instrument of seismic analysis has been the earthquake response spectrum method, which also contributed to the seismic design concepts embodied in today's building codes.[12]

However, such methods are suitable only for linear elastic systems, being largely unable to model the structural behavior when damage (i.e., non-linearity) appears. Numerical step-by-step integration has proved to be a more effective method of analysis for multi-degree-of-freedom structural systems with significant non-linearity under a transient process of ground motion excitation.[13] The finite element method is one of the most common approaches for analyzing non-linear soil-structure interaction computer models.
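
The step-by-step integration mentioned above can be sketched for the simplest case. The following Python example (parameter values are illustrative, not taken from any real structure) applies the classic Newmark average-acceleration method to a linear single-degree-of-freedom oscillator under a ground acceleration record:

```python
import math

def newmark_sdof(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Displacement history of a linear SDOF system m*u'' + c*u' + k*u = -m*ag(t),
    integrated step by step with the Newmark method (average acceleration by default)."""
    u, v = 0.0, 0.0
    a = -ag[0]  # initial acceleration from equilibrium with u = v = 0
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    history = [u]
    for ag_i in ag[1:]:
        p_eff = (-m * ag_i
                 + m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                 + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                        + dt * (gamma / (2 * beta) - 1) * a))
        u_new = p_eff / k_eff
        v_new = (gamma / (beta * dt) * (u_new - u) + (1 - gamma / beta) * v
                 + dt * (1 - gamma / (2 * beta)) * a)
        a = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v = u_new, v_new
        history.append(u)
    return history

# Illustrative run: 1 Hz oscillator (T = 1 s, ~5% damping) under a sinusoidal ground motion
dt = 0.01
ag = [2.0 * math.sin(2 * math.pi * 1.0 * i * dt) for i in range(1000)]
peak = max(abs(u) for u in newmark_sdof(m=1000.0, c=628.3, k=39478.0, ag=ag, dt=dt))
```

The average-acceleration variant (β = 1/4, γ = 1/2) is unconditionally stable for linear systems, which is one reason step-by-step schemes of this family are widely used in seismic analysis.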

Basically, numerical analysis is conducted to evaluate the seismic performance of buildings. Performance evaluations are generally carried out using nonlinear static pushover analysis or nonlinear time-history analysis. In such analyses, it is essential to achieve accurate non-linear modeling of structural components such as beams, columns, beam-column joints, and shear walls. Experimental results therefore play an important role in determining the modeling parameters of individual components, especially those subject to significant non-linear deformations. The individual components are then assembled into a full non-linear model of the structure, which is analyzed to evaluate the performance of the building.[citation needed]
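
A minimal sketch of the non-linear component modeling described above is an elastic-perfectly-plastic spring. In practice the stiffness and yield parameters would be calibrated against experimental results, and real components require far richer hysteresis rules, but the idea can be illustrated in a few lines:

```python
class EPPSpring:
    """Elastic-perfectly-plastic spring: elastic stiffness k, yield force fy.
    Tracks the plastic displacement so that unloading/reloading is elastic."""

    def __init__(self, k, fy):
        self.k = k
        self.fy = fy
        self.up = 0.0  # accumulated plastic displacement

    def force(self, u):
        f = self.k * (u - self.up)
        if f > self.fy:            # yielding in the positive direction
            self.up = u - self.fy / self.k
            f = self.fy
        elif f < -self.fy:         # yielding in the negative direction
            self.up = u + self.fy / self.k
            f = -self.fy
        return f

# Tracing a displacement cycle produces the familiar hysteresis loop
spring = EPPSpring(k=100.0, fy=10.0)
loop = [(u, spring.force(u)) for u in (0.05, 0.20, 0.10, 0.0, -0.20, 0.0)]
```

Assembling many such calibrated component models into a full structural model is essentially what the analysis platforms named below automate.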

The capabilities of the structural analysis software are a major consideration in the above process, as they restrict the possible component models, the analysis methods available and, most importantly, the numerical robustness. The latter becomes a major consideration for structures that venture into the non-linear range and approach global or local collapse, as the numerical solution becomes increasingly unstable and thus difficult to reach. Several commercially available finite element analysis packages, such as CSI-SAP2000, CSI-PERFORM-3D, MTR/SASSI, Scia Engineer-ECtools, ABAQUS, and Ansys, can be used for the seismic performance evaluation of buildings. There are also research-based finite element analysis platforms such as OpenSees, MASTODON (based on the MOOSE Framework), RUAUMOKO, and the older DRAIN-2D/3D, several of which are now open source.[citation needed]

Research for earthquake engineering

Shake-table testing of Friction Pendulum Bearings at EERC

Research for earthquake engineering means both field and analytical investigation or experimentation intended for discovery and scientific explanation of earthquake engineering related facts, revision of conventional concepts in the light of new findings, and practical application of the developed theories.

The National Science Foundation (NSF) is the main United States government agency that supports fundamental research and education in all fields of earthquake engineering. In particular, it focuses on experimental, analytical and computational research on design and performance enhancement of structural systems.

The Earthquake Engineering Research Institute (EERI) is a leader in dissemination of earthquake engineering research related information both in the U.S. and globally.

A definitive list of earthquake engineering research related shaking tables around the world may be found in Experimental Facilities for Earthquake Engineering Simulation Worldwide.[14] The most prominent of them is now E-Defense Shake Table in Japan.[15]

Major U.S. research programs


The NSF Hazard Mitigation and Structural Engineering program (HMSE) supports research on new technologies for improving the behaviour and response of structural systems subject to earthquake hazards; fundamental research on safety and reliability of constructed systems; innovative developments in analysis and model-based simulation of structural behaviour and response, including soil-structure interaction; design concepts that improve structure performance and flexibility; and application of new control techniques for structural systems.[16]

NSF also supports the George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES), which advances knowledge discovery and innovation for earthquake and tsunami loss reduction of the nation's civil infrastructure, as well as new experimental simulation techniques and instrumentation.[17]

The NEES network features 14 geographically distributed, shared-use laboratories that support several types of experimental work:[17] geotechnical centrifuge research, shake-table tests, large-scale structural testing, tsunami wave basin experiments, and field site research.[18] Participating universities include: Cornell University; Lehigh University; Oregon State University; Rensselaer Polytechnic Institute; University at Buffalo, State University of New York; University of California, Berkeley; University of California, Davis; University of California, Los Angeles; University of California, San Diego; University of California, Santa Barbara; University of Illinois, Urbana-Champaign; University of Minnesota; University of Nevada, Reno; and the University of Texas, Austin.[17]

NEES at Buffalo testing facility

The equipment sites (labs) and a central data repository are connected to the global earthquake engineering community via the NEEShub website. The NEES website is powered by HUBzero software developed at Purdue University for nanoHUB specifically to help the scientific community share resources and collaborate. The cyberinfrastructure, connected via Internet2, provides interactive simulation tools, a simulation tool development area, a curated central data repository, animated presentations, user support, telepresence, mechanism for uploading and sharing resources, and statistics about users and usage patterns.

This cyberinfrastructure allows researchers to: securely store, organize and share data within a standardized framework in a central location; remotely observe and participate in experiments through the use of synchronized real-time data and video; collaborate with colleagues to facilitate the planning, performance, analysis, and publication of research experiments; and conduct computational and hybrid simulations that may combine the results of multiple distributed experiments and link physical experiments with computer simulations to enable the investigation of overall system performance.

These resources jointly provide the means for collaboration and discovery to improve the seismic design and performance of civil and mechanical infrastructure systems.

Earthquake simulation


The very first earthquake simulations were performed by statically applying some horizontal inertia forces based on scaled peak ground accelerations to a mathematical model of a building.[19] With the further development of computational technologies, static approaches began to give way to dynamic ones.

Dynamic experiments on building and non-building structures may be physical, like shake-table testing, or virtual. In both cases, to verify a structure's expected seismic performance, some researchers prefer to deal with so-called "real time-histories", although these cannot be "real" for a hypothetical earthquake specified by either a building code or particular research requirements. Hence, there is a strong incentive to use an earthquake simulation: a seismic input that possesses only the essential features of a real event.

Sometimes earthquake simulation is understood as a re-creation of local effects of a strong earth shaking.

Structure simulation

Concurrent experiments with two building models which are kinematically equivalent to a real prototype[20]

Theoretical or experimental evaluation of anticipated seismic performance mostly requires a structure simulation which is based on the concept of structural likeness or similarity. Similarity is some degree of analogy or resemblance between two or more objects. The notion of similarity rests either on exact or approximate repetitions of patterns in the compared items.

In general, a building model is said to have similarity with the real object if the two share geometric similarity, kinematic similarity and dynamic similarity. The most vivid and effective type of similarity is the kinematic one. Kinematic similarity exists when the paths and velocities of moving particles of a model and its prototype are similar.

The ultimate level of kinematic similarity is kinematic equivalence when, in the case of earthquake engineering, the time-histories of the lateral displacements of each story of the model and its prototype would be the same.
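
The similarity requirements above translate into concrete scale factors. Under the common assumption that model and prototype experience equal accelerations (both are subject to the same gravity), dimensional analysis gives the factors sketched below for a geometrically scaled model; this is an illustrative summary, not an exhaustive similitude law:

```python
import math

def similitude_factors(n):
    """Model-to-prototype scale factors for a 1:n geometric scale model,
    assuming equal accelerations (and equal gravity) in model and prototype."""
    s_len = 1.0 / n
    return {
        "length": s_len,
        "acceleration": 1.0,             # enforced by the assumption
        "time": math.sqrt(s_len),        # from a = L / t^2 with acceleration ratio 1
        "velocity": math.sqrt(s_len),    # v = L / t
        "frequency": 1.0 / math.sqrt(s_len),
    }

# A 1:4 model must be shaken with the time axis compressed by a factor of 2
factors = similitude_factors(4)
```

This is why shake-table records for small-scale models are played back faster than the prototype ground motion.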

Seismic vibration control


Seismic vibration control is a set of technical means aimed to mitigate seismic impacts in building and non-building structures. All seismic vibration control devices may be classified as passive, active or hybrid[21] where:

  • passive control devices have no feedback capability between them, structural elements and the ground;
  • active control devices incorporate real-time recording instrumentation on the ground integrated with earthquake input processing equipment and actuators within the structure;
  • hybrid control devices have combined features of active and passive control systems.[22]

When seismic waves in the ground reach the base of a building and begin to penetrate it, their energy flow density, due to reflections, reduces dramatically: usually by up to 90%. However, the remaining portions of the incident waves during a major earthquake still bear a huge devastating potential.

After the seismic waves enter a superstructure, there are a number of ways to control them in order to mitigate their damaging effect and improve the building's seismic performance.

Mausoleum of Cyrus, the oldest base-isolated structure in the world

Mass dampers of the tuned (passive), active, and hybrid kinds, abbreviated correspondingly as TMD, AMD, and HMD, have been studied and installed in high-rise buildings, predominantly in Japan, for a quarter of a century.[24]

However, there is quite another approach: partial suppression of the seismic energy flow into the superstructure, known as seismic or base isolation.

To this end, isolation pads are inserted into or under all major load-carrying elements in the base of the building, substantially decoupling the superstructure from its substructure resting on the shaking ground.

The first evidence of earthquake protection by using the principle of base isolation was discovered in Pasargadae, a city in ancient Persia, now Iran, and dates back to the 6th century BCE. Below, there are some samples of seismic vibration control technologies of today.

Dry-stone walls in Peru

Dry-stone walls of Machu Picchu Temple of the Sun, Peru

Peru is a highly seismic land, and for centuries its dry-stone construction proved to be more earthquake-resistant than construction using mortar. The people of the Inca civilization were masters of polished "dry-stone walls", called ashlar, in which blocks of stone were cut to fit together tightly without any mortar. The Incas were among the best stonemasons the world has ever seen,[25] and many junctions in their masonry were so perfect that even blades of grass could not fit between the stones.

The stones of the dry-stone walls built by the Incas could move slightly and resettle without the walls collapsing, a passive structural control technique employing both the principle of energy dissipation (Coulomb damping) and that of suppressing resonant amplifications.[26]

Tuned mass damper

Tuned mass damper in Taipei 101, the world's third tallest skyscraper

Typically, tuned mass dampers are huge concrete blocks mounted in skyscrapers or other structures that move in opposition to the structure's resonant oscillations by means of some sort of spring mechanism.

The Taipei 101 skyscraper needs to withstand typhoon winds and earthquake tremors common in this area of Asia/Pacific. For this purpose, a steel pendulum weighing 660 metric tonnes that serves as a tuned mass damper was designed and installed atop the structure. Suspended from the 92nd to the 88th floor, the pendulum sways to decrease resonant amplifications of lateral displacements in the building caused by earthquakes and strong gusts.
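
The tuning of such a pendulum damper follows from elementary dynamics: a simple pendulum of length L has period T = 2π√(L/g), so matching a target building period fixes the suspension length. The classic Den Hartog formulas (derived for harmonic excitation) are often used as a starting point for choosing a TMD's frequency and damping; the numbers below are purely illustrative and are not Taipei 101 design data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def pendulum_length_for_period(T):
    """Suspension length of a simple pendulum tuned to period T: T = 2*pi*sqrt(L/g)."""
    return G * T ** 2 / (4.0 * math.pi ** 2)

def den_hartog_tmd(mu):
    """Classic Den Hartog optimum for a tuned mass damper with mass ratio mu:
    returns (optimal frequency ratio, optimal damper damping ratio)."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# Example: tuning a pendulum to an assumed 5-second structural period
length = pendulum_length_for_period(5.0)   # about 6.2 m
f_opt, zeta_opt = den_hartog_tmd(0.01)     # 1% mass ratio
```

Because required pendulum length grows with the square of the period, very long-period towers need multi-stage suspensions or spring-assisted pendulums rather than a single straight cable.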

Hysteretic dampers


A hysteretic damper is intended to provide better and more reliable seismic performance than that of a conventional structure by increasing the dissipation of seismic input energy.[27] There are five major groups of hysteretic dampers used for the purpose, namely:

  • Fluid viscous dampers (FVDs)

Viscous dampers have the benefit of being a supplemental damping system. They have an oval hysteretic loop, and their damping is velocity dependent. While some minor maintenance is potentially required, viscous dampers generally do not need to be replaced after an earthquake. Although more expensive than other damping technologies, they can be used for both seismic and wind loads and are the most commonly used hysteretic damper.[28]

  • Friction dampers (FDs)

Friction dampers are available in two major types, linear and rotational, and dissipate energy as heat. They operate on the principle of a Coulomb damper. Depending on the design, friction dampers can experience stick-slip phenomena and cold welding. Their main disadvantage is that the friction surfaces can wear over time, and for this reason they are not recommended for dissipating wind loads. When used in seismic applications wear is not a problem, and no maintenance is required. They have a rectangular hysteretic loop, and as long as the building is sufficiently elastic they tend to settle back to their original positions after an earthquake.

  • Metallic yielding dampers (MYDs)

Metallic yielding dampers, as the name implies, yield in order to absorb the earthquake's energy. This type of damper absorbs a large amount of energy; however, they must be replaced after an earthquake and may prevent the building from settling back to its original position.

  • Viscoelastic dampers (VEDs)

Viscoelastic dampers are useful in that they can be used for both wind and seismic applications, but they are usually limited to small displacements. There is some concern about the reliability of the technology, as some brands have been banned from use in buildings in the United States.

  • Straddling pendulum dampers (swing)

Base isolation


Base isolation seeks to prevent the kinetic energy of the earthquake from being transferred into elastic energy in the building. These technologies do so by isolating the structure from the ground, thus enabling them to move somewhat independently. The degree to which the energy is transferred into the structure and how the energy is dissipated will vary depending on the technology used.
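
The benefit of isolation can be seen on an idealized design spectrum. On the constant-velocity branch, spectral acceleration falls off roughly as 1/T, so lengthening the fundamental period — say from a hypothetical 0.5 s fixed-base period to 2.5 s on isolators — cuts the acceleration demand several-fold. The sketch below assumes the simple 1/T branch applies at both periods; real code spectra have several branches and site-dependent parameters:

```python
def sa_constant_velocity(sa1, T):
    """Spectral acceleration (in g) on the idealized constant-velocity branch
    of a design spectrum, Sa(T) = Sa1 / T, where Sa1 is the 1-second value.
    Valid only for periods that actually lie on that branch."""
    return sa1 / T

# Hypothetical site with a 1-second spectral acceleration of 0.6 g
sa_fixed = sa_constant_velocity(0.6, 0.5)      # assumed fixed-base period
sa_isolated = sa_constant_velocity(0.6, 2.5)   # assumed isolated period
reduction = sa_fixed / sa_isolated             # 5-fold in this sketch
```

The trade-off is that the lower acceleration comes with much larger displacement demand, which the isolators themselves must accommodate.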

  • Lead rubber bearing
LRB being tested at the UCSD Caltrans-SRMD facility

The lead rubber bearing (LRB) is a type of base isolation employing heavy damping. It was invented by Bill Robinson, a New Zealander.[29]

A heavy damping mechanism incorporated in vibration control technologies and, particularly, in base isolation devices is often considered a valuable means of suppressing vibrations, enhancing a building's seismic performance. However, for rather pliant systems such as base-isolated structures, with relatively low bearing stiffness but high damping, the so-called "damping force" may turn out to be the main pushing force in a strong earthquake. The video[30] shows a lead rubber bearing being tested at the UCSD Caltrans-SRMD facility. The bearing is made of rubber with a lead core. It was a uniaxial test in which the bearing was also under a full structure load. Many buildings and bridges, both in New Zealand and elsewhere, are protected with lead dampers and lead rubber bearings. Te Papa Tongarewa, the national museum of New Zealand, and the New Zealand Parliament Buildings have been fitted with the bearings. Both are in Wellington, which sits on an active fault.[29]

  • Springs-with-damper base isolator
Springs-with-damper close-up

A springs-with-damper base isolator installed under a three-story town-house in Santa Monica, California is shown in the photo, taken prior to its 1994 Northridge earthquake exposure. It is a base isolation device conceptually similar to the lead rubber bearing.

One of two such three-story town-houses, well instrumented for recording both vertical and horizontal accelerations on its floors and the ground, survived severe shaking during the Northridge earthquake and left valuable recorded information for further study.

  • Simple roller bearing

The simple roller bearing is a base isolation device intended to protect various building and non-building structures against potentially damaging lateral impacts of strong earthquakes.

This metallic bearing support may be adapted, with certain precautions, as a seismic isolator to skyscrapers and buildings on soft ground. Recently, it has been employed under the name of metallic roller bearing for a housing complex (17 stories) in Tokyo, Japan.[31]

  • Friction pendulum bearing

The friction pendulum bearing (FPB) is another name for the friction pendulum system (FPS). It is based on three components:[32]

  • articulated friction slider;
  • spherical concave sliding surface;
  • enclosing cylinder for lateral displacement restraint.
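
A notable property of the friction pendulum system is that its isolation period depends only on the radius of the concave sliding surface, not on the supported weight, since the restoring mechanism is pendulum action: T = 2π√(R/g). A quick sketch, with an illustrative radius:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fps_period(radius):
    """Isolation period of a friction pendulum bearing whose concave sliding
    surface has the given radius (m): T = 2*pi*sqrt(R/g).
    Notably independent of the supported mass."""
    return 2.0 * math.pi * math.sqrt(radius / G)

# A concave radius of about 2.2 m gives a roughly 3-second isolation period
period = fps_period(2.23)
```

This mass-independence is convenient in design, since the isolation period does not shift as the building's live load varies.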

Shake-table testing of an FPB system supporting a rigid building model

Seismic design


Seismic design is based on authorized engineering procedures, principles, and criteria meant to design or retrofit structures subject to earthquake exposure.[19] Those criteria are only as reliable as the contemporary state of knowledge about the behavior of structures in earthquakes.[33] Therefore, a building design that exactly follows seismic code regulations does not guarantee safety against collapse or serious damage.[34]

The price of poor seismic design may be enormous. Nevertheless, seismic design has always been a trial and error process whether it was based on physical laws or on empirical knowledge of the structural performance of different shapes and materials.

San Francisco City Hall destroyed by 1906 earthquake and fire
San Francisco after the 1906 earthquake and fire

To practice seismic design, seismic analysis or seismic evaluation of new and existing civil engineering projects, an engineer should normally pass an examination on Seismic Principles,[35] which, in the State of California, includes:

  • Seismic Data and Seismic Design Criteria
  • Seismic Characteristics of Engineered Systems
  • Seismic Forces
  • Seismic Analysis Procedures
  • Seismic Detailing and Construction Quality Control

To build up complex structural systems,[36] seismic design largely uses the same relatively small number of basic structural elements (to say nothing of vibration control devices) as any non-seismic design project.

Normally, according to building codes, structures are designed to "withstand" the largest earthquake of a certain probability that is likely to occur at their location. This means the loss of life should be minimized by preventing collapse of the buildings.
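
The phrase "largest earthquake of a certain probability" is usually quantified as a probability of exceedance over the structure's design life. Under the common Poisson assumption, an exceedance probability p in t years corresponds to a mean return period T = -t / ln(1 - p); the familiar "10% in 50 years" design event then works out to a return period of roughly 475 years:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period (years) of a ground motion with exceedance
    probability p_exceed over t_years, assuming earthquake occurrences
    follow a Poisson process."""
    return -t_years / math.log(1.0 - p_exceed)

# The classic design-basis event: 10% probability of exceedance in 50 years
T_design = return_period(0.10, 50.0)   # about 475 years
```

Codes typically map such return periods onto design spectra; longer return periods (rarer, stronger shaking) are used for collapse-prevention checks.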

Seismic design is carried out by understanding the possible failure modes of a structure and providing the structure with appropriate strength, stiffness, ductility, and configuration[37] to ensure those modes cannot occur.

Seismic design requirements


Seismic design requirements depend on the type of structure, the locality of the project, and the authorities that stipulate the applicable seismic design codes and criteria.[8] For instance, the California Department of Transportation's requirements, called the Seismic Design Criteria (SDC) and aimed at the design of new bridges in California,[38] incorporate an innovative seismic performance-based approach.

The Metsamor Nuclear Power Plant was closed after the 1988 Armenian earthquake.[39]

The most significant feature in the SDC design philosophy is a shift from a force-based assessment of seismic demand to a displacement-based assessment of demand and capacity. Thus, the newly adopted displacement approach is based on comparing the elastic displacement demand to the inelastic displacement capacity of the primary structural components while ensuring a minimum level of inelastic capacity at all potential plastic hinge locations.

In addition to the designed structure itself, seismic design requirements may include a ground stabilization underneath the structure: sometimes, heavily shaken ground breaks up which leads to collapse of the structure sitting upon it.[40] The following topics should be of primary concerns: liquefaction; dynamic lateral earth pressures on retaining walls; seismic slope stability; earthquake-induced settlement.[41]

Nuclear facilities should not jeopardise their safety in case of earthquakes or other hostile external events. Therefore, their seismic design is based on criteria far more stringent than those applying to non-nuclear facilities.[42] The Fukushima I nuclear accidents and damage to other nuclear facilities that followed the 2011 Tōhoku earthquake and tsunami have, however, drawn attention to ongoing concerns over Japanese nuclear seismic design standards and caused many other governments to re-evaluate their nuclear programs. Doubt has also been expressed over the seismic evaluation and design of certain other plants, including the Fessenheim Nuclear Power Plant in France.

Failure modes


A failure mode is the manner by which an earthquake-induced failure is observed. It generally describes the way the failure occurs. Though costly and time-consuming, learning from each real earthquake failure remains a routine recipe for advancement in seismic design methods. Below, some typical modes of earthquake-generated failures are presented.

Typical damage to unreinforced masonry buildings at earthquakes, Loma Prieta

The lack of reinforcement, coupled with poor mortar and inadequate roof-to-wall ties, can result in substantial damage to an unreinforced masonry building. Severely cracked or leaning walls are among the most common forms of earthquake damage. Also hazardous is the damage that may occur between the walls and roof or floor diaphragms. Separation between the framing and the walls can jeopardize the vertical support of roof and floor systems.

Soft story collapse due to inadequate shear strength at ground level, Loma Prieta earthquake

Soft story effect. Absence of adequate stiffness on the ground level caused damage to this structure. A close examination of the image reveals that the rough board siding, once covered by a brick veneer, has been completely dismantled from the studwall. Only the rigidity of the floor above combined with the support on the two hidden sides by continuous walls, not penetrated with large doors as on the street sides, is preventing full collapse of the structure.

Effects of soil liquefaction during the 1964 Niigata earthquake

Soil liquefaction. Where the soil consists of loose granular deposits that tend to develop excess pore water pressure of sufficient magnitude and to compact, liquefaction of those loose saturated deposits may result in non-uniform settlement and tilting of structures. This caused major damage to thousands of buildings in Niigata, Japan during the 1964 earthquake.[43]

Car smashed by landslide rock, 2008 Sichuan earthquake

Landslide rock fall. A landslide is a geological phenomenon that includes a wide range of ground movement, including rock falls. Typically, the action of gravity is the primary driving force for a landslide to occur, though in this case there was another contributing factor affecting the original slope stability: the landslide required an earthquake trigger before being released.

Effects of pounding against adjacent building, Loma Prieta

Pounding against an adjacent building. This is a photograph of the collapsed five-story tower of St. Joseph's Seminary, Los Altos, California, which resulted in one fatality. During the Loma Prieta earthquake, the tower pounded against the independently vibrating adjacent building behind it. The possibility of pounding depends on both buildings' lateral displacements, which should be accurately estimated and accounted for.

Effects of completely shattered joints of concrete frame, Northridge

In the Northridge earthquake, the Kaiser Permanente concrete frame office building had its joints completely shattered, revealing inadequate confinement steel, which resulted in the second-story collapse. In the transverse direction, composite end shear walls, consisting of two wythes of brick and a layer of shotcrete that carried the lateral load, peeled apart because of inadequate through-ties and failed.

Shifting from foundation, Whittier

A relatively rigid residential building slid off its foundation during the 1987 Whittier Narrows earthquake. The magnitude 5.9 earthquake pounded the Garvey West Apartment building in Monterey Park, California, and shifted its superstructure about 10 inches to the east on its foundation.

Earthquake damage in Pichilemu

If a superstructure is not mounted on a base isolation system, its shifting on the basement should be prevented.

Insufficient shear reinforcement led main rebars to buckle, Northridge.

A reinforced concrete column burst during the Northridge earthquake because insufficient shear reinforcement allowed the main reinforcement to buckle outwards. The deck unseated at the hinge and failed in shear. As a result, the La Cienega-Venice underpass section of the 10 Freeway collapsed.

Support-columns and upper deck failure, Loma Prieta earthquake

Loma Prieta earthquake: side view of the reinforced concrete support-column failure that triggered the collapse of the upper deck onto the lower deck of the two-level Cypress viaduct of Interstate Highway 880, Oakland, CA.

Failure of retaining wall due to ground movement, Loma Prieta

Retaining wall failure in the Loma Prieta earthquake in the Santa Cruz Mountains area: prominent northwest-trending extensional cracks up to 12 cm (4.7 in) wide in the concrete spillway to Austrian Dam, the north abutment.

Lateral spreading mode of ground failure, Loma Prieta

Ground shaking triggered soil liquefaction in a subsurface layer of sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt. This mode of ground failure, termed lateral spreading, is a principal cause of liquefaction-related earthquake damage.[44]

Beams and pier columns diagonal cracking, 2008 Sichuan earthquake

Severely damaged building of the Agriculture Development Bank of China after the 2008 Sichuan earthquake: most of the beams and pier columns are sheared. Large diagonal cracks in the masonry and veneer are due to in-plane loads, while the abrupt settlement of the right end of the building should be attributed to a landfill, which may be hazardous even without an earthquake.[45]

Tsunami strikes Ao Nang[46]

A tsunami strikes with a twofold impact: the hydraulic pressure of the sea waves and inundation. The Indian Ocean earthquake of December 26, 2004, with its epicenter off the west coast of Sumatra, Indonesia, triggered a series of devastating tsunamis that killed more than 230,000 people in eleven countries by inundating surrounding coastal communities with waves up to 30 meters (100 feet) high.[47]

Earthquake-resistant construction

Earthquake-resistant construction means implementing seismic design so that building and non-building structures survive the anticipated earthquake exposure, up to the expectations of and in compliance with the applicable building codes.

Construction of Pearl River Tower X-bracing to resist lateral forces of earthquakes and winds

Design and construction are intimately related. To achieve good workmanship, the detailing of members and their connections should be as simple as possible. Like construction in general, earthquake-resistant construction is a process of building, retrofitting, or assembling infrastructure with the construction materials available.[48]

The destabilizing action of an earthquake on constructions may be direct (seismic motion of the ground) or indirect (earthquake-induced landslides, soil liquefaction and waves of tsunami).

A structure might have all the appearances of stability yet offer nothing but danger when an earthquake occurs.[49] The crucial fact is that, for safety, earthquake-resistant construction techniques are as important as quality control and the use of correct materials. An earthquake contractor should be registered in the state/province/country of the project location (depending on local regulations), bonded, and insured.[citation needed]

To minimize possible losses, the construction process should be organized with the understanding that an earthquake may strike at any time before construction is complete.

Each construction project requires a qualified team of professionals who understand the basic features of seismic performance of different structures as well as construction management.

Adobe structures

Partially collapsed adobe building in Westmorland, California

Around thirty percent of the world's population lives or works in earthen construction.[50] Adobe, a type of mud brick, is one of the oldest and most widely used building materials. Its use is very common in some of the world's most hazard-prone regions: traditionally across Latin America, Africa, the Indian subcontinent and other parts of Asia, the Middle East, and Southern Europe.

Adobe buildings are considered very vulnerable in strong earthquakes.[51] However, multiple methods for the seismic strengthening of new and existing adobe buildings are available.[52]

Key factors for the improved seismic performance of adobe construction are:

  • Quality of construction.
  • Compact, box-type layout.
  • Seismic reinforcement.[53]

Limestone and sandstone structures

Base-isolated City and County Building, Salt Lake City, Utah

Limestone is very common in architecture, especially in North America and Europe. Many landmarks across the world, including many medieval churches and castles in Europe, are built of limestone and sandstone masonry. These are long-lasting materials, but their considerable weight is detrimental to seismic performance.

The application of modern technology to seismic retrofitting can enhance the survivability of unreinforced masonry structures. As an example, from 1973 to 1989, the Salt Lake City and County Building in Utah was exhaustively renovated and repaired with an emphasis on preserving historical accuracy in appearance. This was done in concert with a seismic upgrade that placed the weak sandstone structure on a base isolation foundation to better protect it from earthquake damage.

Timber frame structures

Anne Hvide's House, Denmark (1560)

Timber framing dates back thousands of years and has been used in many parts of the world, including ancient Japan, continental Europe, and medieval England, in localities where timber was in good supply while building stone, and the skills to work it, were not.

Timber framing provides a building's complete skeletal frame, which offers structural benefits: a properly engineered timber frame lends itself to better seismic survivability.[54]

Light-frame structures

A two-story wooden-frame for a residential building structure

Light-frame structures usually gain seismic resistance from rigid plywood shear walls and wood structural panel diaphragms.[55] Special provisions for the seismic load-resisting systems of engineered wood structures require consideration of diaphragm ratios, horizontal and vertical diaphragm shears, and connector/fastener values. In addition, collectors, or drag struts, are required to distribute shear along the length of a diaphragm.

Reinforced masonry structures

Reinforced hollow masonry wall

Reinforced masonry is a construction system in which steel reinforcement is embedded in the mortar joints of masonry or placed in holes that are then filled with concrete or grout.[56] There are various practices and techniques for reinforcing masonry; the most common type is reinforced hollow-unit masonry.

To achieve ductile behavior in masonry, the shear strength of the wall must be greater than its flexural strength, so that ductile flexural yielding governs over brittle shear failure.[57] The effectiveness of both vertical and horizontal reinforcement depends on the type and quality of the masonry units and mortar.

The devastating 1933 Long Beach earthquake revealed that masonry is prone to earthquake damage, which led to the California State Code making masonry reinforcement mandatory across California.

Reinforced concrete structures

Stressed Ribbon pedestrian bridge over the Rogue River, Grants Pass, Oregon
Prestressed concrete cable-stayed bridge over Yangtze river

Reinforced concrete is concrete in which steel reinforcement bars (rebars) or fibers have been incorporated to strengthen a material that would otherwise be brittle. It can be used to produce beams, columns, floors or bridges.

Prestressed concrete is a kind of reinforced concrete used for overcoming concrete's natural weakness in tension. It can be applied to beams, floors or bridges with a longer span than is practical with ordinary reinforced concrete. Prestressing tendons (generally of high tensile steel cable or rods) are used to provide a clamping load which produces a compressive stress that offsets the tensile stress that the concrete compression member would, otherwise, experience due to a bending load.
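
The offsetting of bending tension by a clamping compressive stress can be sketched with elementary beam formulas (σ = −P/A + Mc/I). The section dimensions, prestress force, and service moment below are made-up illustrative numbers, not design values:

```python
# Illustrative only: superposition of a concentric prestress (-P/A) with
# bending stress (M*c/I) at the bottom fiber of a rectangular beam.
def bottom_fiber_stress(P, M, b, h):
    """Net bottom-fiber stress in Pa (compression negative) for prestress
    force P (N), sagging moment M (N*m), width b and depth h (m)."""
    A = b * h                   # cross-sectional area
    I = b * h ** 3 / 12.0       # second moment of area
    c = h / 2.0                 # centroid to extreme fiber
    return -P / A + M * c / I   # uniform compression + bending tension

# 300 mm x 600 mm beam, 2000 kN prestress, 150 kN*m service moment:
# the clamping compression keeps the bottom fiber free of net tension
sigma = bottom_fiber_stress(P=2000e3, M=150e3, b=0.30, h=0.60)
print(f"net bottom-fiber stress: {sigma / 1e6:.2f} MPa")
```

Without the prestress, the same moment would put the bottom fiber in tension, which plain concrete resists poorly.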

To prevent catastrophic collapse in response to earth shaking (in the interest of life safety), a traditional reinforced concrete frame should have ductile joints. Depending on the methods used and the imposed seismic forces, such buildings may be immediately usable, may require extensive repair, or may have to be demolished.

Prestressed structures

A prestressed structure is one whose overall integrity, stability, and security depend primarily on prestressing: the intentional creation of permanent stresses in a structure for the purpose of improving its performance under various service conditions.[58]

Naturally pre-compressed exterior wall of Colosseum, Rome

There are the following basic types of prestressing:

  • Pre-compression (mostly through the structure's own weight)
  • Pretensioning with high-strength embedded tendons
  • Post-tensioning with high-strength bonded or unbonded tendons

Today, the concept of prestressed structure is widely engaged in design of buildings, underground structures, TV towers, power stations, floating storage and offshore facilities, nuclear reactor vessels, and numerous kinds of bridge systems.[59]

The idea of prestressing was apparently familiar to ancient Roman architects: the tall attic wall of the Colosseum, for example, works as a stabilizing device for the wall piers beneath.

Steel structures

Collapsed section of the San Francisco–Oakland Bay Bridge in response to Loma Prieta earthquake

Steel structures are considered mostly earthquake resistant, but some failures have occurred. A great number of welded steel moment-resisting frame buildings, which had looked earthquake-proof, surprisingly experienced brittle behavior and were hazardously damaged in the 1994 Northridge earthquake.[60] Afterwards, the Federal Emergency Management Agency (FEMA) initiated the development of repair techniques and new design approaches to minimize damage to steel moment-frame buildings in future earthquakes.[61]

For structural steel seismic design based on the Load and Resistance Factor Design (LRFD) approach, it is very important to assess the ability of a structure to develop and maintain its bearing resistance in the inelastic range. A measure of this ability is ductility, which may be observed in the material itself, in a structural element, or in a whole structure.

As a consequence of the Northridge earthquake experience, the American Institute of Steel Construction introduced AISC 358, "Prequalified Connections for Special and Intermediate Steel Moment Frames." The AISC Seismic Design Provisions require that all steel moment-resisting frames employ either connections contained in AISC 358 or connections that have been subjected to prequalifying cyclic testing.[62]

Prediction of earthquake losses

Earthquake loss estimation is usually defined as a Damage Ratio (DR) which is a ratio of the earthquake damage repair cost to the total value of a building.[63] Probable Maximum Loss (PML) is a common term used for earthquake loss estimation, but it lacks a precise definition. In 1999, ASTM E2026 'Standard Guide for the Estimation of Building Damageability in Earthquakes' was produced in order to standardize the nomenclature for seismic loss estimation, as well as establish guidelines as to the review process and qualifications of the reviewer.[64]

Earthquake loss estimations are also referred to as Seismic Risk Assessments. The risk assessment process generally involves determining the probability of various ground motions coupled with the vulnerability or damage of the building under those ground motions. The results are defined as a percent of building replacement value.[65]
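
The damage-ratio idea extends naturally to a probabilistic loss estimate: weighting assumed damage ratios by the annual probability of the ground motions that cause them. The scenario probabilities and damage ratios below are invented for illustration and do not come from ASTM E2026 or any hazard model:

```python
# Toy seismic loss estimate built on the Damage Ratio (DR) idea: weight
# assumed damage ratios by annual probabilities of the causative shaking.
def expected_annual_loss(replacement_value, scenarios):
    """scenarios: iterable of (annual_probability, damage_ratio) pairs."""
    return sum(p * dr * replacement_value for p, dr in scenarios)

scenarios = [
    (0.02, 0.05),    # frequent shaking, light damage (DR = 5%)
    (0.002, 0.30),   # rare shaking, moderate damage
    (0.0004, 0.80),  # very rare shaking, near-total loss
]
eal = expected_annual_loss(10_000_000, scenarios)
print(f"expected annual loss: {eal:,.0f} "
      f"({eal / 10_000_000:.3%} of replacement value)")
```

Expressing the result as a percent of replacement value mirrors how risk assessments are typically reported.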

from Grokipedia
Earthquake engineering is an interdisciplinary field of civil engineering dedicated to the analysis, design, and retrofitting of structures to withstand seismic forces generated by earthquakes, thereby minimizing structural damage, economic loss, and human casualties.[1] The discipline integrates principles from seismology, structural dynamics, geotechnical engineering, and materials science to model ground motions, assess vulnerabilities, and implement mitigation strategies such as ductile detailing and energy dissipation systems.[2] Emerging prominently in the early 20th century after events like the 1906 San Francisco earthquake, it has evolved through empirical observations of failures and advancements in computational modeling, leading to seismic building codes that enforce minimum performance standards based on probabilistic hazard assessments.[3][4] Key methodologies in earthquake engineering emphasize dynamic analysis over static equivalents, recognizing that seismic loading induces vibrations at multiple frequencies, necessitating response spectrum approaches and time-history simulations for accurate prediction of structural behavior.[5] Innovations such as base isolation, which decouples superstructures from foundation motions using rubber bearings or sliding pads, and tuned mass dampers, which counteract sway in tall buildings, represent defining achievements that have demonstrably enhanced resilience, as evidenced by the survival of structures like Taipei 101 during typhoons and simulated seismic events.[6][7] Performance-based design paradigms, shifting from uniform hazard to site-specific risk tolerance, allow for tailored solutions that balance safety with functionality, though debates persist over the conservatism of prescriptive codes versus the uncertainties in nonlinear modeling.[8] Despite progress, challenges remain in addressing soil-structure interaction effects like liquefaction and in scaling laboratory shake-table tests to real-world 
scenarios, underscoring the field's reliance on iterative validation against post-event reconnaissance data.[9][10]

History

Origins and Early Developments

The earliest known application of earthquake-resistant principles dates to the 6th century BC in ancient Persia, where the Tomb of Cyrus the Great in Pasargadae employed a form of base isolation. This structure features six layers of precisely cut stone blocks separated by sheets of metal, likely lead or copper, which allowed the upper chamber to slide independently of the foundation during seismic events, dissipating energy and preventing collapse.[11][12] Similar empirical techniques emerged in other seismically active regions, such as Japan's multi-story pagodas with flexible wooden joints and a central core pillar that absorbed shocks, and China's dougong bracket systems that provided elasticity to timber frames.[13][14] Scientific foundations for earthquake engineering developed in the 19th century, pioneered by Irish civil engineer Robert Mallet (1810–1881), who conducted early studies on seismic wave propagation and structural response. Mallet performed controlled explosions to simulate earthquakes, measured ground motions, and produced the first detailed seismic maps of the Mediterranean region following the 1857 Basilicata earthquake in Italy, establishing key concepts in seismology that informed later engineering practices.[15][16] Early modern advancements accelerated after major 20th-century earthquakes, particularly in Japan, where frequent seismicity prompted systematic regulation.
The 1891 Nobi earthquake (magnitude 8.0) highlighted vulnerabilities in rigid masonry, leading to initial design guidelines emphasizing ductility; this culminated in Japan's first national seismic building code in 1924, following the 1923 Great Kantō earthquake (magnitude 7.9), which incorporated lateral force coefficients based on empirical observations of structural failures.[17][18] In the United States, the 1906 San Francisco earthquake (magnitude 7.9) exposed the dangers of unreinforced masonry and soft-story failures, prompting restrictions on such construction but delaying formal seismic provisions until the 1927 Uniform Building Code appendix introduced voluntary lateral load requirements.[19] The 1933 Long Beach earthquake (magnitude 6.4) then spurred California's Field Act, mandating seismic design for public schools and marking the first enforced U.S. regulations tying building forces to acceleration estimates derived from observed damages.[20][21]

Key Milestones Post-Major Earthquakes

The 1906 San Francisco earthquake, with a magnitude of 7.9, prompted the first integrated scientific investigation of a major seismic event, culminating in Harry Fielding Reid's formulation of the elastic rebound theory in 1910, which explained earthquake generation through fault slip and became foundational for seismic hazard analysis.[22] This disaster highlighted the vulnerability of unreinforced masonry structures, influencing the evolution of early building codes; although San Francisco initially prioritized rapid reconstruction without stringent seismic requirements, it spurred regional advancements, including the 1925 Santa Barbara ordinance, recognized as the first comprehensive seismic building code in the United States.[19] The 1933 Long Beach earthquake, magnitude 6.4, exposed severe deficiencies in school buildings constructed of unreinforced masonry, resulting in the immediate passage of California's Field Act on April 29, 1933, which mandated seismic-resistant design standards for public school construction and retrofitting, setting a precedent for separating design from construction to ensure compliance.[23] This legislation significantly reduced collapse risks in educational facilities during subsequent events and influenced broader adoption of ductility and lateral force resistance principles in public infrastructure.[24] The 1971 San Fernando earthquake, magnitude 6.6, demonstrated failures in nonductile reinforced concrete frames and overestimation of structural capacities under dynamic loading, leading to the Applied Technology Council (ATC) 3 project in 1973 and major revisions in the 1976 Uniform Building Code, which increased seismic design forces by factors of up to 2.5 times and emphasized capacity design to protect brittle elements.[25] These changes addressed observed pancake collapses and shifted focus toward ensuring ductile behavior in high-seismic zones.[26] Subsequent earthquakes accelerated the transition to performance-based 
seismic design (PBSD). The 1994 Northridge earthquake, magnitude 6.7, revealed widespread issues with welded steel moment-resisting frames and nonductile concrete, formalizing national efforts for PBSD frameworks that define multiple performance objectives, such as immediate occupancy or collapse prevention, beyond prescriptive life-safety minima.[27] Similarly, the 1995 Kobe earthquake, magnitude 6.9, caused extensive damage despite prior codes, prompting Japanese standards to incorporate enhanced ductility verification, near-fault effects, and widespread adoption of base isolation and damping systems, while emphasizing reparable damage over collapse prevention.[28] These events underscored the limitations of uniform hazard spectra, driving probabilistic risk-targeted approaches in modern codes.[29]

Fundamentals

Seismic Loading and Ground Motions

Seismic loading comprises the dynamic forces imposed on structures by earthquake-induced accelerations of the ground, primarily arising from inertial resistance to this motion as per F = m × a, where m is mass and a is acceleration. Ground motions manifest as transient, multi-frequency oscillations of the earth's surface, propagated via body waves (P and S waves) and surface waves (Love and Rayleigh waves), with horizontal components typically dominating structural demands due to their alignment with lateral stiffness. Vertical motions, though generally smaller (about 50-70% of horizontal), can influence axial loads and uplift in certain configurations.[30] Characteristics of ground motions include amplitude measures such as peak ground acceleration (PGA), the maximum recorded acceleration expressed as a fraction of g (Earth's gravity, approximately 9.81 m/s²), which indicates shaking intensity for rigid structures or ground particles. For example, PGA values exceeding 0.50g, as observed in events like the 1994 Northridge earthquake (up to 1.78g at Pacoima Dam), correlate with severe damage potential, though survivable with damping. Peak ground velocity (PGV) and displacement (PGD) capture velocity-sensitive and displacement-demanding aspects, respectively, with PGV often better predicting damage in moderate-to-long period structures. Duration, quantified via significant duration or Arias intensity (cumulative energy), influences cyclic loading and fatigue, extending beyond 30 seconds in some subduction zone events. 
Frequency content, reflected in Fourier or response spectra, varies with source (e.g., high-frequency from crustal faults, low-frequency from distant events) and site conditions.[31][30][32] Site effects amplify motions: soft soils extend predominant periods (0.4-2.0 s) and boost amplitudes by 2-6 times relative to rock sites, as evidenced in the 1985 Mexico City earthquake where lakebed amplification at 2 s period caused resonant collapse of mid-rise buildings. Near-source phenomena, including forward directivity (high-velocity pulses) and hanging-wall effects, can elevate long-period spectral ordinates by up to 2 times, per ground motion prediction equations (GMPEs). Probabilistic seismic hazard analysis (PSHA) derives design values like PGA or 5% damped spectral accelerations (Sa) for return periods such as 475 years (10% exceedance in 50 years), using deaggregation to identify controlling magnitude-distance scenarios.[33][30][32] In engineering practice, response spectra condense time histories into envelope curves of maximum SDOF oscillator responses versus period, enabling modal superposition for multi-degree-of-freedom systems under the assumption of linearity. Design spectra, code-specified (e.g., ASCE 7), incorporate factors for soil class, importance, and response modification to scale hazard-consistent motions. Loading computation employs equivalent static methods for regular, low-rise structures (V = Cs × W, where Cs derives from Sa/T and limits), response spectrum analysis for dynamic distribution, or time-history integration for nonlinear or irregular cases, ensuring demands do not exceed capacity with specified safety margins. Empirical databases like PEER NGA-West2 validate these through recorded motions from over 20,000 events.[33][34][32]
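
The equivalent static computation V = Cs × W mentioned above can be sketched as follows. This loosely follows the ASCE 7 form of the seismic response coefficient; the minimum-Cs checks, site coefficients, and long-period transition are omitted, and all input values are illustrative assumptions:

```python
# Simplified sketch of the equivalent static base shear V = Cs * W,
# loosely following the ASCE 7 form (not a code-complete calculation).
def seismic_coefficient(S_DS, S_D1, T, R, I_e):
    """Seismic response coefficient: short-period plateau capped by the
    descending branch governed by the 1-second spectral acceleration."""
    Cs_plateau = S_DS / (R / I_e)
    Cs_descending = S_D1 / (T * (R / I_e))
    return min(Cs_plateau, Cs_descending)

# Assumed ductile frame (R = 8) on a site with S_DS = 1.0g, S_D1 = 0.6g
Cs = seismic_coefficient(S_DS=1.0, S_D1=0.6, T=1.2, R=8.0, I_e=1.0)
W = 20_000.0            # assumed seismic weight, kN
V = Cs * W              # equivalent static base shear, kN
print(f"Cs = {Cs:.4f}, V = {V:.0f} kN")
```

For this 1.2 s period the descending branch governs, illustrating how longer-period structures attract a smaller seismic coefficient.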

Structural Dynamics and Response Spectra

Structural dynamics examines the response of structures to time-varying forces, particularly those from earthquakes, which introduce transient ground motions as base excitations.[35] The governing equations derive from Newton's second law applied to structural systems, typically formulated for single-degree-of-freedom (SDOF) oscillators as $ m \ddot{u} + c \dot{u} + k u = -m \ddot{u}_g $, where $ m $, $ c $, and $ k $ represent mass, viscous damping, and stiffness, respectively, $ u $ is relative displacement, and $ \ddot{u}_g $ is ground acceleration.[36] Solutions involve free vibration characteristics (natural frequency $ \omega_n = \sqrt{k/m} $, damping ratio $ \zeta = c/(2\sqrt{km}) $) and forced vibration responses computed via convolution integrals or modal superposition for multi-degree-of-freedom (MDOF) systems.[1] In earthquake engineering, structural dynamics underpins the prediction of inelastic deformations and failure risks, distinguishing dynamic amplification from static effects; for instance, resonance occurs when structural periods align with dominant ground motion periods, amplifying responses by factors up to 2-3 times static equivalents in flexible structures.[35] Modal analysis decomposes MDOF responses into contributions from individual modes, each treated as an SDOF system, enabling efficient computation of peak displacements, velocities, and accelerations.[5] Response spectra provide a frequency-domain representation of earthquake demands, plotting the maximum absolute response (displacement $ S_d $, velocity $ S_v $, or pseudo-acceleration $ S_a = \omega_n^2 S_d $) of SDOF oscillators across a range of natural periods $ T_n = 2\pi / \omega_n $ or frequencies, for a given damping ratio and specific ground motion record.[37] Originating from Maurice A. 
Biot's 1932 formulation, the method computes spectra by solving the SDOF equation for each period and extracting envelope maxima, offering a compact summary superior to time histories for design as it envelopes worst-case responses without phase dependency.[38] Elastic response spectra derive directly from accelerograms, while design spectra, standardized in codes like ASCE 7, smooth and scale empirical data to represent probabilistic exceedance risks (e.g., 2% in 50 years), incorporating site soil effects via amplification factors up to 2.5 for soft soils.[5] In practice, the response spectrum analysis method applies these spectra to MDOF structures by combining modal maxima via methods like the complete quadratic combination (CQC), which accounts for modal cross-correlations (for equal modal damping $ \zeta $) via $ \rho_{ij} = \frac{8\zeta^2 (1 + r) r^{3/2}}{(1 - r^2)^2 + 4\zeta^2 r (1 + r)^2} $ where $ r = \omega_i / \omega_j $, ensuring accurate estimation of base shear and overturning moments. This approach, validated against nonlinear time-history simulations, underpins modern seismic evaluation, revealing that higher modes contribute significantly in stiff structures (e.g., shear walls with $ T_1 < 0.5 $ s).[1]
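
A response spectrum ordinate is just the peak response of an SDOF oscillator integrated through a ground motion record. The sketch below uses Newmark average-acceleration integration on a synthetic one-cycle 0.3g sine pulse standing in for a real accelerogram (an assumption for brevity; real spectra use recorded motions):

```python
import math

def sdof_peak_disp(omega, zeta, ag, dt):
    """Peak relative displacement of a unit-mass SDOF oscillator with natural
    frequency omega (rad/s) and damping ratio zeta, driven by ground
    acceleration samples ag (m/s^2), via Newmark average acceleration."""
    beta, gamma = 0.25, 0.5
    c, k = 2.0 * zeta * omega, omega ** 2
    keff = 1.0 + gamma * dt * c + beta * dt * dt * k
    u = v = 0.0
    a = -ag[0]                      # equilibrium at rest: u = v = 0
    peak = 0.0
    for p in ag[1:]:
        u_pred = u + dt * v + (0.5 - beta) * dt * dt * a
        v_pred = v + (1.0 - gamma) * dt * a
        a = (-p - c * v_pred - k * u_pred) / keff
        u = u_pred + beta * dt * dt * a
        v = v_pred + gamma * dt * a
        peak = max(peak, abs(u))
    return peak

# Stand-in "record": one-cycle 0.3g sine pulse at 1 Hz, then 3 s of quiet
dt = 0.005
ag = [0.3 * 9.81 * math.sin(2.0 * math.pi * i * dt) if i * dt <= 1.0 else 0.0
      for i in range(int(4.0 / dt))]

# Pseudo-acceleration ordinates S_a = omega^2 * S_d at 5% damping; the
# 1.0 s oscillator resonates with the 1 Hz pulse and dominates the spectrum
for T in (0.2, 0.5, 1.0, 2.0):
    omega = 2.0 * math.pi / T
    Sa = omega ** 2 * sdof_peak_disp(omega, 0.05, ag, dt)
    print(f"T = {T:.1f} s  S_a = {Sa / 9.81:.2f} g")
```

Repeating this sweep over many periods traces out the full spectrum; design codes then smooth such curves into envelope shapes.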

Analysis and Performance Evaluation

Experimental Assessment Techniques

Experimental assessment techniques in earthquake engineering employ physical models, either scaled or full-scale, subjected to simulated seismic excitations to measure dynamic responses, validate theoretical models, and identify failure mechanisms. These methods complement analytical approaches by capturing nonlinear behaviors, material degradation, and soil-structure interactions that numerical simulations may overlook. Key techniques include shake table testing, pseudo-dynamic testing, quasi-static cyclic loading, and centrifuge modeling, each addressing specific aspects of seismic performance while contending with practical constraints like scaling laws and facility capacities.[39][40] Shake table testing dynamically excites structures by replicating recorded or synthetic ground motions on a translating platform, providing the most direct simulation of inertial forces. Facilities such as Japan's E-Defense feature the world's largest table, measuring 15 m by 20 m with a 1,200-ton payload capacity and accelerations up to 2g, enabling tests on multi-story buildings and soil-foundation systems.[41] Despite its fidelity to real earthquake dynamics, shake table testing is limited by scale effects, where reduced-size models distort mass, stiffness, and damping ratios, and by high energy demands for large specimens.[42][43] Pseudo-dynamic testing mitigates some shake table drawbacks by hybridizing computational and experimental elements: numerical models predict displacements from applied forces or measured accelerations, which actuators impose quasi-statically on the physical specimen, incorporating real-time feedback for accuracy. 
Developed in the late 1960s by Japanese researchers like Hakuno et al., this method reduces inertial loading needs and allows full-scale testing without dynamic scaling issues, though it assumes linear time-invariant properties and requires precise control systems.[44][45] Applications include evaluating base-isolated structures, where tests have correlated well with shake table results for displacement responses.[46] Quasi-static testing applies slow, reversed cyclic displacements or forces to isolated components or subassemblies, isolating hysteretic energy dissipation without dynamic effects, ideal for characterizing damping devices and connections under repeated loading. This approach reveals cumulative damage accumulation but neglects rate-dependent phenomena like strain-rate hardening in concrete or steel.[39] Centrifuge testing scales gravitational acceleration to maintain realistic stress states in geotechnical models, simulating soil liquefaction, retaining walls, and embedded foundations under earthquake shaking via integrated mini shake tables. Typical accelerations reach 50-100g on small rotors, enabling 1:50 to 1:100 scale factors while preserving prototype densities and pressures, though boundary effects and model fabrication precision pose challenges.[47] These tests have validated soil-structure interaction models by quantifying excess pore pressures and lateral spreading.[48]
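
The similitude behind centrifuge testing reduces to a small set of scaling factors at N gravities; the set below is the generic textbook form (length and dynamic time shrink by N, stresses match the prototype), not any particular facility's convention:

```python
# Standard centrifuge similitude factors (model relative to prototype)
# when the model spins at N gravities; generic textbook values.
def centrifuge_scale_factors(N):
    return {
        "length": 1.0 / N,        # model dimensions are N times smaller
        "stress": 1.0,            # soil stresses match the prototype
        "strain": 1.0,
        "acceleration": N,        # input shaking is N times stronger
        "time_dynamic": 1.0 / N,  # dynamic events run N times faster
        "frequency": N,
    }

# At 50g, a 1 m model soil layer represents a 50 m prototype deposit
f = centrifuge_scale_factors(50)
print(f"prototype depth of a 1 m model layer: {1.0 / f['length']:.0f} m")
```

Matching prototype stresses is the whole point: soil strength and liquefaction behavior are stress-dependent, so a 1g scale model would respond unrealistically.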

Analytical and Numerical Modeling

Analytical modeling in earthquake engineering utilizes simplified mathematical frameworks to predict structural responses under seismic loading, often assuming linearity and regularity in geometry and mass distribution. These approaches derive closed-form or semi-analytical solutions from the equations of motion for single-degree-of-freedom (SDOF) or multi-degree-of-freedom (MDOF) systems, facilitating rapid assessment for preliminary design. The equivalent static method approximates dynamic effects by applying a lateral force $ V = C_s W $, where $ C_s $ is the seismic coefficient based on site-specific acceleration and structure period, and $ W $ is the seismic weight; this is prescribed in codes like ASCE 7 for buildings with periods under 3.5 seconds and low irregularity, as it conservatively envelopes peak responses without requiring dynamic properties.[49][50] Response spectrum analysis extends this by superposing modal contributions, where the maximum response for each mode is obtained from an elastic response spectrum—a plot of peak SDOF displacements, velocities, or accelerations versus natural period for a given damping ratio and ground motion suite. Formulated by Maurice Biot in 1932, it employs combination rules like Complete Quadratic Combination (CQC) to account for modal cross-correlations, yielding base shear and story forces accurate within 10-20% of time-history results for linear systems with up to 20 modes capturing 90% mass participation.[51][52] Limitations arise in highly nonlinear or torsionally irregular structures, where underestimation of higher-mode effects can occur without vertical spectrum components.[53] Numerical modeling addresses complexities beyond analytical tractability, such as material nonlinearity, geometric irregularities, and soil-structure interaction (SSI), through discretization and iterative solution of partial differential equations. 
The finite element method (FEM) partitions structures into elements connected at nodes, assembling global stiffness $ \mathbf{K} $, damping $ \mathbf{C} $, and mass $ \mathbf{M} $ matrices to solve $ \mathbf{M} \ddot{\mathbf{u}} + \mathbf{C} \dot{\mathbf{u}} + \mathbf{K} \mathbf{u} = -\mathbf{M} \mathbf{r} \ddot{u}_g(t) $, where $ \mathbf{r} $ is the influence vector for ground acceleration $ \ddot{u}_g(t) $; implicit schemes like Newmark-β ensure stability for time-history integration.[48] Nonlinear pushover analysis applies monotonically increasing lateral loads to trace capacity curves, estimating ductility demand via invariant load patterns, validated against cyclic tests for performance-based design in codes like FEMA P-695, though it neglects higher-mode cyclic degradation.[49] Advanced numerical techniques incorporate rate-dependent plasticity and contact algorithms for simulating uplift or pounding, with validation against centrifuge or shake-table data showing errors below 15% for SSI-inclusive models of mid-rise frames under moderate earthquakes (PGA 0.3-0.5g).[54] Open-source frameworks like OpenSees enable hybrid simulations coupling physical substructures with numerical models, reducing computational demands while capturing real hysteretic behavior, as demonstrated in 2023 benchmarks for reinforced concrete shear walls.[55] Despite efficiency gains from parallel computing, challenges persist in uncertainty quantification, with Monte Carlo sampling of ground motion ensembles required for probabilistic seismic hazard assessment to achieve reliability indices exceeding 2.5.[56]
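
Assembling the $\mathbf{M}$ and $\mathbf{K}$ matrices and extracting modal properties can be illustrated on the smallest nontrivial case: an idealized two-story shear building, where $\det(\mathbf{K} - \omega^2 \mathbf{M}) = 0$ reduces to a quadratic in $\omega^2$. The masses and story stiffnesses below are illustrative assumptions:

```python
import math

# Natural frequencies of a two-story shear building, solving
# det(K - w^2 M) = 0 by hand (a quadratic in w^2).
def two_story_frequencies(m1, m2, k1, k2):
    """Return (w1, w2) in rad/s for M = diag(m1, m2) and the shear-building
    stiffness matrix K = [[k1 + k2, -k2], [-k2, k2]]."""
    a = m1 * m2
    b = -((k1 + k2) * m2 + k2 * m1)
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lam1 = (-b - disc) / (2.0 * a)   # fundamental mode (lowest frequency)
    lam2 = (-b + disc) / (2.0 * a)
    return math.sqrt(lam1), math.sqrt(lam2)

# Assumed masses in tonnes and story stiffnesses in kN/m
w1, w2 = two_story_frequencies(m1=200.0, m2=150.0, k1=80_000.0, k2=60_000.0)
print(f"T1 = {2.0 * math.pi / w1:.2f} s, T2 = {2.0 * math.pi / w2:.2f} s")
```

The same modal periods feed directly into response spectrum analysis: each mode's spectral ordinate is read at its period and the modal maxima are then combined.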

Design Principles

Seismic Design Codes and Requirements

Seismic design codes establish minimum criteria for structures to withstand expected earthquake ground motions, emphasizing collapse prevention and life safety while allowing controlled damage. These codes integrate probabilistic seismic hazard assessments, site-specific soil effects, and structural system ductility to derive design forces.[57] In prescriptive approaches, equivalent lateral forces or response spectra are applied, scaled by response modification factors (R) that account for energy dissipation capacity, typically ranging from 1 for brittle systems to 8 for ductile steel frames.[30]

In the United States, the International Building Code (IBC), with the 2021 edition as the current reference, adopts seismic provisions from ASCE/SEI 7-16, which defines Seismic Design Categories (SDCs) A through F based on short-period (S_DS) and 1-second (S_D1) design spectral accelerations, occupancy importance, and site class.[58][59] Hazard values derive from USGS maps targeting a uniform collapse risk, with maximum considered earthquake (MCE_R) ground motions at approximately a 2% probability of exceedance in 50 years (2475-year return period), adjusted via site coefficients (F_a and F_v) for soil amplification.[33] Design basis earthquake (DBE) levels, often around a 10% probability in 50 years (475-year return period), inform force demands, with structures required to remain operational or habitable post-event depending on SDC and importance factor (I_e from 1.0 to 1.5).[60] Analysis methods include equivalent lateral force procedures for regular structures and modal response spectrum or time-history analysis for irregular or tall buildings.[61]

Eurocode 8 (EN 1998-1:2004), the European standard, specifies design for a reference earthquake with a 475-year return period (10% exceedance in 50 years), using peak ground acceleration (PGA) on rock and elastic response spectra shaped by soil category (A-E) and topographic effects.[62] Behavior factors (q, analogous to R) up to 6.75 for ductile systems reduce elastic demands, with national annexes calibrating ground motion parameters to local seismicity; for instance, high-hazard zones like parts of Italy require PGA up to 0.4g.[62] Other national codes, such as Canada's NBCC 2020, employ similar spectral acceleration values at periods of 0.2, 0.5, 1.0, and 2.0 seconds, derived from probabilistic models with uniform hazard spectra.[63]

Performance-based seismic design (PBSD), permitted as an alternative in codes like IBC Section 104, shifts from uniform life-safety objectives to owner-defined targets, such as immediate occupancy for frequent events (43% exceedance in 50 years) or collapse prevention for rare MCEs, verified via nonlinear static or dynamic analyses per ASCE 41-17.[64][65] This approach quantifies losses using fragility functions and incremental dynamic analysis, enabling optimized designs for critical facilities, though it requires peer-reviewed validation and higher modeling fidelity to mitigate uncertainties in ground motion selection.[66]

Codes evolve after disasters; for example, the 1994 Northridge and 1995 Kobe earthquakes prompted ductility enhancements and near-fault provisions in ASCE 7 updates.[57] Global harmonization efforts, via organizations like the International Code Council, aim to align parameters while respecting regional tectonics and construction practices.[67]
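The role of the response modification factor in the equivalent lateral force procedure can be sketched numerically. The formulas below follow the ASCE 7 form C_s = S_DS / (R / I_e), capped by S_D1 / (T · R / I_e) at longer periods, but all input values are hypothetical and this is an illustration, not design software.

```python
# Sketch of the equivalent lateral force scaling described above.
# All spectral values and periods below are hypothetical.

def seismic_response_coefficient(S_DS, S_D1, T, R, I_e):
    """Design base-shear coefficient; base shear V = C_s * W."""
    C_s = S_DS / (R / I_e)          # short-period plateau value
    if T > 0:
        # Longer-period cap, proportional to S_D1 / T
        C_s = min(C_s, S_D1 / (T * (R / I_e)))
    return C_s

# Ductile steel moment frame (R = 8) vs. a brittle system (R = 1.5)
S_DS, S_D1, T, I_e = 1.0, 0.6, 1.2, 1.0
ductile = seismic_response_coefficient(S_DS, S_D1, T, 8.0, I_e)
brittle = seismic_response_coefficient(S_DS, S_D1, T, 1.5, I_e)
print(f"C_s: ductile {ductile:.4f}, brittle {brittle:.4f}")
```

The comparison shows why ductility matters economically: the ductile frame is designed for a much smaller fraction of the elastic demand than the brittle system.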

Common Failure Modes

Soft-story collapse occurs when the stiffness and strength of the first story are significantly less than those of upper stories, often due to open ground floors for parking or commercial use, leading to excessive lateral deformation and potential pancaking of upper floors during seismic shaking. This failure mode was prominently observed in the 1994 Northridge earthquake, where wood-frame soft-story buildings experienced partial collapses from inadequate shear resistance at the ground level. Similar mechanisms contributed to collapses in the 1989 Loma Prieta earthquake, highlighting vulnerabilities in urban multi-story structures with tuck-under parking.[68]

Pounding between adjacent buildings arises when insufficient separation gaps allow structures with differing dynamic characteristics to collide during earthquakes, causing local damage such as column shear failures or slab-edge crushing. In the 1985 Mexico City earthquake, pounding exacerbated structural failures in closely spaced mid-rise buildings, with impacts leading to brittle column fractures.[69] Evidence from the 1989 Loma Prieta event also documented pounding-induced corner damage and beam fractures in reinforced concrete frames.[70]

Brittle connection failures in welded steel moment-resisting frames, particularly at beam-to-column welds, result from inadequate ductility and fracture initiation at heat-affected zones under cyclic loading. The 1994 Northridge earthquake revealed widespread fractures in pre-Northridge welded connections, affecting over 200 steel buildings with cracks propagating through weld metal and base metal, though no complete collapses occurred due to redundancy.[71] These failures stemmed from design assumptions underestimating strain demands and material toughness.[72]

Unreinforced masonry (URM) buildings commonly fail in out-of-plane wall collapse or in-plane shear cracking due to lack of tensile reinforcement, resulting in sudden brittle failure under lateral forces. During earthquakes, URM walls separate from floors, leading to pancaking; this was a primary cause of casualties in events like the 2011 Christchurch earthquake, where many historic URM structures partially or fully collapsed.[73] Out-of-plane mechanisms dominate in low-rise URM, as seen in gable wall failures from insufficient anchorage.[74]

Liquefaction-induced ground failures cause differential settlements, tilting, or bearing capacity loss, undermining building foundations on saturated cohesionless soils during intense shaking. The 1964 Niigata earthquake demonstrated this with apartment buildings tilting up to 60 degrees due to soil liquefaction beneath shallow foundations, resulting in permanent deformations without structural overload.[75] In the 2011 Christchurch earthquake, liquefaction contributed to the collapse of two multi-story reinforced concrete buildings through foundation settlements and lateral spreading.[76]

Torsional failure in asymmetric buildings occurs when the center of mass does not align with the center of rigidity, inducing uneven drift and higher demands on perimeter elements, often leading to localized collapses. Observations from the 2008 Sichuan earthquake showed torsional effects amplifying damage in irregular reinforced concrete frames, with corner columns failing in shear.[77] This mode underscores the importance of symmetry in seismic design to distribute inertial forces evenly.[57]

Mitigation Techniques

Base Isolation Systems

Base isolation systems decouple a structure's superstructure from its foundation during seismic events, minimizing the transmission of ground accelerations to the building. This approach relies on inserting low-stiffness, energy-dissipating elements, such as bearings or pads, between the base and the ground, which permits relative horizontal displacement while lengthening the system's natural period, typically to 2-3 seconds. By shifting the response to a portion of the seismic response spectrum with lower accelerations, these systems can reduce base shear forces by 50-80%, depending on the design and ground motion characteristics.[78][79]

The concept of base isolation dates back over 100 years, with modern implementations emerging in the mid-20th century, particularly in Japan and New Zealand during the 1960s and 1970s. Early systems evolved from rudimentary friction or rubber-based isolators to engineered solutions incorporating damping mechanisms, achieving maturity as a viable alternative to conventional seismic design by the 1990s. Worldwide applications expanded to include structures in the United States, China, Italy, and other seismically active regions, with performance validated in events like the 2011 Christchurch earthquakes, where isolated buildings exhibited minimal damage compared to fixed-base counterparts.[80][78]

Common types include elastomeric bearings, such as lead-rubber bearings (LRBs) composed of alternating rubber and steel layers with a central lead core for hysteretic damping, and sliding systems like friction pendulum bearings that utilize articulated surfaces to provide restoring forces via geometry. Other variants encompass pure friction sliders, high-damping rubber bearings, and spring-based isolators, often augmented by viscous dampers to control displacements and enhance energy dissipation. Selection depends on factors like soil conditions, structure height, and expected seismic demands, with LRBs widely used for their balance of stiffness, damping, and durability.[79][78]

Design requires accommodating isolator displacements, often up to 300-500 mm, necessitating a perimeter moat or gap around the foundation and flexible utility connections. While effective for mid-rise buildings on firm soils, limitations include higher initial costs, sensitivity to long-period velocity pulses in near-fault motions, and reduced efficacy on soft soils where excessive settlements may occur. Empirical studies confirm that hybrid systems combining isolation with supplemental dampers further optimize performance, reducing residual drifts and repair times post-earthquake. Applications span hospitals, data centers, and nuclear facilities, as seen at the Loma Linda University Medical Campus, where isolation minimized operational disruptions during seismic tests.[78][79]
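The period-lengthening mechanism described above can be made concrete with a short calculation: lowering the lateral stiffness shifts T = 2π√(m/k) into the descending branch of an idealized design spectrum. The masses, stiffnesses, and spectral values below are illustrative, not taken from any real building.

```python
import math

# Sketch of why period lengthening reduces demand; all numbers hypothetical.

def natural_period(mass_kg, stiffness_N_per_m):
    """Undamped natural period T = 2*pi*sqrt(m/k)."""
    return 2.0 * math.pi * math.sqrt(mass_kg / stiffness_N_per_m)

def spectral_accel(T, S_DS=1.0, S_D1=0.6):
    """Idealized design spectrum: constant plateau, then 1/T decay."""
    T_s = S_D1 / S_DS
    return S_DS if T <= T_s else S_D1 / T

m = 2.0e6        # superstructure mass, kg
k_fixed = 1.6e9  # fixed-base lateral stiffness, N/m
k_iso = 2.2e7    # combined isolator stiffness, N/m (much softer)

T_fixed = natural_period(m, k_fixed)  # ~0.22 s, on the plateau
T_iso = natural_period(m, k_iso)      # ~1.9 s, in the 1/T branch
reduction = 1.0 - spectral_accel(T_iso) / spectral_accel(T_fixed)
print(f"T fixed {T_fixed:.2f} s, isolated {T_iso:.2f} s, "
      f"accel reduction {reduction:.0%}")
```

With these assumed values the acceleration demand drops by roughly two thirds, consistent with the 50-80% base-shear reductions cited above.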

Energy Dissipation Devices

Energy dissipation devices are passive structural components engineered to absorb seismic energy through mechanisms such as friction, viscous shearing, or material yielding, thereby mitigating the transmission of ground motion forces to the primary load-bearing elements of buildings and bridges. These devices supplement inherent structural damping, which typically ranges from 2-5% of critical damping in reinforced concrete and steel frames, by introducing controlled energy loss that reduces peak inter-story drifts by up to 50% and accelerations by 30-40% in simulated earthquake tests.[81][82] Their deployment primarily targets displacement reduction, with secondary benefits in limiting base shear forces, as validated through shake table experiments and nonlinear time-history analyses.[83]

Viscous dampers, often fluid-filled cylinders with piston-orifice systems, generate damping forces proportional to relative velocity raised to an exponent (typically 0.2-1.0), enabling velocity-dependent energy dissipation densities exceeding 10^6 J/m³ per cycle under seismic frequencies of 0.5-2 Hz. Introduced for civil applications in the late 1980s following aerospace precedents, these devices have been retrofitted in structures like the Park Plaza Building in Los Angeles, where they reduced drift demands in simulations of the 1994 Northridge earthquake.[84][85] Experimental cyclic loading tests confirm their stability over thousands of cycles, with minimal degradation, though performance depends on fluid viscosity and temperature variations affecting orifice flow.[86]

Metallic yielding dampers dissipate energy via hysteretic loops formed by low-cycle fatigue in ductile metals like mild steel or shape-memory alloys, achieving energy absorption capacities of 50-200 kJ per unit through shear, bending, or axial yielding mechanisms. Variants such as added damping and stiffness (ADAS) devices, featuring X-shaped steel plates, exhibit stable hysteresis and have been applied in Japanese high-rise buildings since the 1980s, with post-yield stiffness ratios around 5-10%.[87] Numerical studies on grooved metallic dampers demonstrate up to 60% reductions in story drifts for multi-story frames under El Centro ground motion records, though replaceability post-event is critical due to permanent deformation.[88]

Friction dampers, utilizing high-strength sliding interfaces often with brass or composite pads preloaded to slip loads of 50-500 kN, produce near-ideal rectangular hysteresis loops independent of velocity, offering dissipation efficiencies comparable to viscous types without fluid maintenance. Deployed in retrofits since the 1990s, such as in New Zealand's Christchurch buildings, they limit residual drifts to under 0.5% by engaging only during moderate-to-severe shaking, preserving serviceability under wind or minor events.[89][90] Full-scale tests under protocols like CUREE loading show friction coefficients stable at 0.2-0.4 over displacement amplitudes of 50-300 mm, with optimal placement in braced frames yielding 40-70% peak response reductions in probabilistic seismic analyses.[91]

Hybrid configurations combining these mechanisms, such as friction-viscous or yielding-friction dampers, further optimize performance by balancing stiffness, damping, and re-centering, as evidenced in bridge applications where they extend fatigue life under repeated seismic cycles.[92] Overall, these devices enhance seismic resilience when integrated per codes like ASCE 7-16, with design verified through capacity spectrum methods ensuring factor-of-safety margins against collapse.[93]
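The velocity-dependent force law of a viscous damper can be sketched directly from the description above: F = C · sign(v) · |v|^α, where an exponent below 1 makes the force grow sublinearly with velocity. The coefficient and exponent below are illustrative, not taken from any specific product.

```python
# Sketch of the nonlinear viscous damper force law described above.
# C and alpha are hypothetical values for illustration.

def viscous_damper_force(velocity, C=400e3, alpha=0.5):
    """Damper force in N for relative velocity in m/s."""
    sign = 1.0 if velocity >= 0 else -1.0
    return sign * C * abs(velocity) ** alpha

# An alpha < 1 damper caps force growth at high velocities, which
# protects the damper and its connections during strong pulses:
for v in (0.1, 0.5, 1.0):
    print(f"v = {v:.1f} m/s -> F = {viscous_damper_force(v) / 1e3:.0f} kN")
```

Note that with α = 0.5 a fourfold increase in velocity only doubles the force, whereas a linear (α = 1) damper would quadruple it.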

Advanced Control Methods

Advanced control methods in earthquake engineering encompass active, semi-active, and hybrid systems that dynamically adjust structural response using feedback from sensors and control algorithms, extending beyond passive techniques like base isolation or fixed dampers. These methods aim to minimize accelerations, drifts, and forces during seismic events by actively or adaptively counteracting vibrations, often achieving greater reductions in structural demands than passive systems alone. Active systems, in particular, introduce external energy via actuators to apply opposing forces, while semi-active systems modulate inherent properties such as damping without net energy input. Hybrid approaches combine elements of both for enhanced robustness.[94][95]

Active control relies on real-time measurement of structural motion through accelerometers and displacement sensors, processed by algorithms like linear quadratic regulators (LQR) or sliding mode control to command hydraulic or piezoelectric actuators that generate counter-forces. Theoretical and experimental studies demonstrate that active systems can reduce peak interstory drifts by 40-60% and base shears by up to 50% in multi-degree-of-freedom structures under various earthquake inputs, outperforming passive controls in variable hazard scenarios. However, implementation faces challenges including dependency on uninterrupted power—failure of which could amplify responses—and sensitivity to modeling inaccuracies or control-structure interactions that may destabilize the system if not robustly designed. Full-scale applications remain limited due to high costs and reliability concerns, though laboratory shake-table tests on scaled buildings confirm feasibility for high-rise structures.[96][97][98]

Semi-active control systems, such as those employing magnetorheological (MR) fluid dampers or variable-orifice viscous dampers, adjust damping coefficients in response to command signals derived from structural feedback, using minimal electrical power only for property modulation rather than force generation. These devices alter fluid viscosity or orifice size via electromagnetic fields or valves, enabling real-time adaptation to earthquake intensity without the risks of active energy injection. Research on MR dampers in base-isolated or braced frames shows reductions in displacement responses by 20-40% compared to passive counterparts, with clipped-optimal or fuzzy logic algorithms enhancing performance under broadband excitations like the 1995 Kobe earthquake record. Advantages include fail-safe operation—reverting to passive mode upon power loss—and lower energy needs, making them suitable for retrofitting existing buildings; field tests on structures like cable-stayed bridges validate their efficacy in mitigating higher-mode vibrations.[99][100][101]

Hybrid control integrates active actuators with semi-active or passive elements, such as combining hydraulic braces with MR dampers, to leverage complementary strengths like active precision and semi-active reliability. Studies indicate hybrid setups can suppress roof accelerations by over 70% in tall buildings subjected to near-fault ground motions, with adaptive algorithms mitigating spillover effects into uncontrolled modes. Despite promising simulations, practical deployment requires addressing sensor noise, time delays in control loops (typically under 10 ms for stability), and regulatory validation, as evidenced by ongoing research into robust H-infinity controllers. These methods represent an evolution toward "smart" structures, though empirical data from real events remains sparse, emphasizing the need for further validation against diverse seismic datasets.[102][103]
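The semi-active idea can be illustrated with a toy single-degree-of-freedom simulation: an on-off damping law, loosely inspired by skyhook-type control, switches to high damping only when it opposes the motion. The structure, excitation, and switching rule below are all hypothetical simplifications of the MR-damper systems described above.

```python
import math

# Toy comparison of passive vs. on-off semi-active damping for an SDOF
# structure under sinusoidal base excitation near resonance.
# All parameters are hypothetical; real devices use clipped-optimal control.

def simulate(m=1.0e5, k=4.0e6, c_lo=4.0e4, c_hi=4.0e5,
             semi_active=False, dt=0.002, steps=5000):
    omega_g = 2.0 * math.pi * 1.0        # 1 Hz ground motion (rad/s)
    x, v = 0.0, 0.0                      # relative displacement, velocity
    peak = 0.0
    for i in range(steps):
        ag = 2.0 * math.sin(omega_g * i * dt)   # ground accel, m/s^2
        if semi_active:
            # High damping only while moving away from equilibrium
            c = c_hi if x * v > 0 else c_lo
        else:
            c = c_lo
        a = -(c * v + k * x) / m - ag    # equation of motion (relative)
        v += a * dt                      # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

p_passive = simulate(semi_active=False)
p_semi = simulate(semi_active=True)
print(f"peak drift passive {p_passive * 1000:.1f} mm, "
      f"semi-active {p_semi * 1000:.1f} mm")
```

Because the system is driven near its natural frequency (√(k/m) ≈ 2π rad/s), the lightly damped passive case builds up a large resonant response that the switched damping substantially suppresses.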

Construction Practices

Reinforced Concrete and Steel Structures

In reinforced concrete construction for seismic zones, ductile detailing is essential to achieve energy dissipation through controlled yielding rather than brittle shear or compression failures. This involves providing closely spaced transverse reinforcement, such as hoops and crossties, in potential plastic hinge regions of columns and beam-column joints to confine concrete and prevent buckling of longitudinal bars.[104] Beams are detailed to be under-reinforced, ensuring tensile yielding precedes concrete crushing, with minimum shear reinforcement ratios increased to twice those in non-seismic designs.[105] Standards like ACI 318 Chapter 18 mandate special provisions for special moment frames, including development lengths at least 1.25 times those for non-seismic cases and lap splice restrictions in high-strain zones.[106] Shear walls, often integrated as coupled or uncoupled systems, are constructed with boundary elements reinforced for ductility, using aspect ratios limited to 2.5 or less to promote flexural behavior over shear.[30]

Steel structures in earthquake-prone areas prioritize systems like special moment frames (SMFs), where rigid beam-to-column connections are designed to yield in beams while columns remain elastic, achieving rotation capacities of at least 0.03 radian (3%) through qualifying cyclic tests.[107] Construction practices include prequalification of welded connections, such as reduced beam sections (RBS) that narrow the beam flange to localize yielding, or bolted extended end-plate connections for field assembly reliability.[108] Concentrically braced frames incorporate ductile braces with core gusset plates allowing 2-4% axial strain before fracture, supplemented by buckling-restrained braces in advanced designs to equalize tension and compression capacities.[109] AISC 341 specifies protected zones free of attachments and requires continuity plates in panel zones to prevent distortion under double-curvature bending.[110] Quality assurance emphasizes ultrasonic testing of welds and material properties with minimum yield strengths of 50 ksi for beams in SMFs.[111]

Both materials require foundation practices like deep piles or mat foundations in soft soils to mitigate liquefaction-induced settlement, with RC footings reinforced for dowel action and steel base plates grouted for fixity.[112] Empirical data from events like the 1994 Northridge earthquake revealed vulnerabilities in pre-1990s welded steel moment connections, prompting post-1997 AISC updates for fracture toughness, while RC failures underscored the need for continuous bottom reinforcement in beams to avoid splice-induced weaknesses.[113] In high-seismic zones (e.g., Seismic Design Category D or higher), hybrid systems combining RC cores with steel perimeter frames leverage concrete's mass damping and steel's repairability.[114] Construction sequencing prioritizes symmetric load paths to avoid torsional irregularities, with tolerances for member straightness limited to L/1000 in steel erection.

Masonry, Timber, and Light-Frame Structures

Unreinforced masonry (URM) structures, typically constructed from brick, stone, or concrete blocks without embedded steel reinforcement, exhibit brittle behavior under seismic loading due to their low tensile strength and lack of ductility.[115] These buildings are susceptible to out-of-plane wall failures, diagonal shear cracking, and complete collapse when subjected to moderate to severe ground shaking, as observed in historical events where URM accounted for significant casualties and damage.[116] In the 2010 Canterbury earthquake sequence, URM buildings experienced severe damage leading to collapses, highlighting their vulnerability to in-plane and out-of-plane demands.[115] Seismic design for new masonry incorporates reinforced elements, such as vertical and horizontal reinforcement bars grouted into cells, to enhance tensile capacity and confinement, per standards like those in ASCE 7.[30]

Retrofitting URM buildings focuses on improving shear strength, ductility, and anchorage to prevent partial or total failure. Common techniques include concrete jacketing of piers and spandrels, which encases masonry in reinforced concrete to increase confinement and energy dissipation, and the addition of steel or fiber-reinforced polymer (FRP) overlays for targeted strengthening.[117] Shotcrete overlays provide a cost-effective alternative, applying pneumatically projected concrete reinforced with mesh to walls, though effectiveness depends on proper bonding and thickness, typically 3-6 inches.[117] Post-tensioning with vertical rods anchored to roof and foundation slabs induces compressive forces to mitigate tensile cracking.[118] In regions like California, mandatory retrofit ordinances for URM have reduced collapse risk, with studies showing up to 70% improvement in seismic capacity after implementation.[73]

Timber structures, including heavy-timber frames and post-and-beam systems, benefit from wood's inherent ductility and lightweight nature, which reduce seismic inertial forces compared to masonry or concrete.[119] However, vulnerability arises at connections, where nailed or bolted joints can fail under cyclic loading, leading to excessive deformation or disassembly.[120] Design principles emphasize ductile moment-resisting frames or shear walls sheathed with plywood or oriented strand board (OSB) to provide lateral resistance, with hold-down anchors at wall ends to counteract uplift.[30] In multi-story applications, cross-laminated timber (CLT) panels serve as diaphragms and walls, dissipating energy through panel rocking and friction, as demonstrated in shake-table tests simulating magnitudes up to 7.5 with minimal residual damage.[119]

Light-frame wood structures, prevalent in single- and low-rise residential construction, rely on repetitive stud framing with sheathing for shear resistance.[121] These systems perform well in earthquakes when properly braced, as their redundancy and flexibility allow deformation without collapse, evidenced by low structural failure rates in events like the 1994 Northridge earthquake, where most damage was non-structural.[122] Seismic codes, such as International Building Code (IBC) Section 2308, impose height and bracing restrictions in high-seismic zones (Categories D and E), limiting conventional light-frame construction to one story without special systems and requiring continuous ties from foundation to roof.[123] Soft-story configurations, common in retrofitted homes with garages below, amplify demands; mitigation involves steel moment frames or braced frames at the base to equalize drift.[124] Historical timber-laced masonry hybrids, as in the 2001 Gujarat earthquake, showed superior performance over pure URM due to timber's confining effect, informing hybrid retrofit strategies.[125]

Innovative and Traditional Adaptations

Traditional adaptations in earthquake engineering construction emphasize empirical techniques derived from local materials and observed seismic resilience, predating modern codes. Ancient structures like the Tomb of Cyrus the Great, constructed around 550 BC in Pasargadae, Iran, employed a stepped pyramidal form with a gabled stone roof supported by slender columns, enabling load distribution and flexibility that allowed it to withstand regional earthquakes for over 2,500 years.[126] Similarly, Inca architecture in Peru utilized ashlar masonry—precisely cut andesite stones fitted without mortar—creating interlocking blocks that permit minor relative movement during shaking while maintaining integrity; the Intihuatana stone at Machu Picchu, built circa 1450, exemplifies trapezoidal shaping and inward-leaning walls that counter overturning forces.[127] These methods relied on dry stone construction and geometric stability rather than rigid bonding, reducing brittle failure risks in areas lacking iron tools or cement.[13]

In East Asia, traditional Japanese pagoda construction featured multi-tiered wooden frames with interlocking dougong brackets and central masts, allowing lateral sway and energy dissipation without nails; the five-story pagoda of Hōryū-ji Temple, erected in 711 AD, has survived multiple major earthquakes due to this flexible assembly that avoids stress concentrations.[128] Vernacular practices in regions like the Himalayas incorporated bamboo or timber lacing in masonry walls for ductility, with low aspect ratios and lightweight roofs to minimize inertial forces, as documented in post-event analyses of structures enduring quakes up to magnitude 8.[129] These adaptations highlight first-hand causal understanding of ground motion, prioritizing dissipation over resistance through material flexibility and form.

Innovative adaptations build on these principles using advanced materials and computational design for enhanced performance in contemporary construction. Cross-laminated timber (CLT) panels, developed since the 1990s and increasingly adopted post-2010, provide prefabricated, lightweight structural elements with high strength-to-weight ratios and inherent ductility, enabling taller sustainable buildings in seismic zones; for example, a 2024 analysis notes CLT's capacity to absorb energy via panel shear without excessive deformation in simulated M7 events.[130] Shape memory alloys (SMAs), integrated into braces or reinforcements since the early 2000s, exhibit superelasticity to recenter structures after yielding, minimizing residual drifts; laboratory tests as of 2023 demonstrate SMAs reducing inter-story drifts by up to 50% compared to steel equivalents in shake-table simulations.[131]

Controlled rocking systems represent another recent evolution, where foundations or cores are engineered to uplift intentionally during strong motion, followed by self-centering via post-tensioned tendons, limiting damage to replaceable fuses; pioneered in the 1970s but refined in the 2010s, this approach has been validated in full-scale tests showing drift capacities exceeding 5% without collapse.[132] These innovations, informed by finite element modeling and real-time sensor data, extend traditional flexibility concepts to high-rise and retrofit applications, achieving verifiable reductions in seismic demands through hybrid material behaviors.[133]

Risk Assessment and Prediction

Probabilistic Seismic Hazard Analysis

Probabilistic seismic hazard analysis (PSHA) is a methodology for estimating the likelihood and severity of earthquake-induced ground shaking at a specific site over a defined time period by integrating uncertainties from all potential seismic sources.[134] It provides engineers with probabilistic measures, such as the spectral acceleration expected to be exceeded with a 2% probability in 50 years (equivalent to a 2,475-year return period), which inform building code requirements for seismic design.[135] Unlike deterministic approaches that focus on maximum credible earthquakes from specific faults, PSHA aggregates contributions from distributed seismicity, faults, and subduction zones using the total probability theorem to produce hazard curves plotting annual exceedance rates against ground motion intensities.[136]

The framework originated with C. Allin Cornell's 1968 paper, which formalized PSHA as a tool to rationally combine geological, seismological, and geophysical data amid inherent uncertainties, moving beyond earlier empirical correlations toward a formalized probabilistic integration.[137] Early applications emerged in the 1970s for nuclear facilities and were adopted by the U.S. Geological Survey (USGS) for national hazard maps starting in 1987, evolving through iterative updates incorporating refined fault models and ground motion prediction equations (GMPEs).[138] The 2023 USGS National Seismic Hazard Model (NSHM), for instance, updated source characterizations using finite fault ruptures and multi-fault systems, reflecting post-2010s empirical data from events like the 2019 Ridgecrest sequence.[139]

Core steps in PSHA begin with delineating seismic sources, such as characterized faults with recurrence intervals derived from paleoseismic trenching (e.g., slip rates of 1-10 mm/year on San Andreas segments) or areal zones following Gutenberg-Richter b-value distributions (typically b ≈ 1.0 for magnitude-frequency relations, with cumulative rate λ(≥M) = 10^{a - bM}).[136] Earthquake magnitudes are sampled from truncated exponential or characteristic models, distances from site-to-source geometries, and ground motions via GMPEs like the NGA-West2 suite, which empirically relate intensity measures (e.g., peak ground acceleration >0.2g in high-hazard zones) to magnitude, distance, and site conditions (Vs30 >760 m/s for firm rock).[138] Uncertainties are propagated via Monte Carlo simulations or logic trees, weighting epistemic branches (e.g., 20-40% aleatory variability in GMPE sigma) to compute the mean hazard: λ(IM > im) = ∑_i ν_i ∫∫ P(IM > im | m, r) ⋅ f(m) ⋅ f(r|m) dm dr, where ν_i is the activity rate of source i, P is the conditional exceedance probability from the GMPE, f are probability densities, and the sum spans all sources.[140]

In earthquake engineering, PSHA outputs underpin response spectra for structural analysis, as in ASCE 7-22 standards adopting USGS maps to set site-specific risk-targeted ground motions, ensuring uniform collapse risk across regions (e.g., 1% probability of collapse in 50 years, adjusted for structural nonlinearity).[33] Deaggregation identifies dominant scenario contributions (e.g., 70% from M6.5-7.0 events at 10-50 km in California basins), guiding targeted retrofits like base isolation for hospitals.[141] Globally, PSHA informs Eurocode 8 and similar codes, with site amplification factors (e.g., +50% for soft soils) layered atop rock hazard via proxies like shear-wave velocity.[142]

Despite its dominance, PSHA faces critiques for assuming ergodicity—equating long-term site averages to ensemble probabilities across rare events—which overlooks temporal clustering and fault interactions observed in physics-based models, potentially inflating hazards by averaging improbable distant large events with local small ones.[137] Empirical validations show overprediction in stable intraplate regions (e.g., central U.S. maps exceeding observed peaks by factors of 2-3 post-2008), attributed to unmodeled correlations or optimistic GMPE extrapolations beyond N=10-100 datasets.[143] Proponents counter that logic-tree branching captures epistemic uncertainty conservatively, as validated against global catalogs, though alternatives like neo-deterministic methods emphasize physics-driven scenario testing for critical infrastructure.[144] Ongoing refinements, such as hybrid PSHA incorporating finite-source simulations, aim to mitigate these issues by prioritizing causal rupture physics over pure statistical aggregation.[142]
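The hazard integral above can be approximated by Monte Carlo sampling, one of the propagation methods the text mentions: sample a magnitude from a truncated Gutenberg-Richter distribution, a source-to-site distance, and a ground motion from a GMPE with lognormal scatter, then count exceedances. The single areal source, the toy GMPE coefficients, and the activity rate below are all hypothetical.

```python
import math
import random

# Minimal Monte Carlo sketch of the PSHA integration described above.
# One areal source, truncated G-R magnitudes, uniform distances, and a
# toy GMPE with lognormal scatter; all coefficients are illustrative.

random.seed(1)

def sample_magnitude(m_min=5.0, m_max=8.0, b=1.0):
    """Inverse-CDF sample from a truncated exponential (G-R) model."""
    beta = b * math.log(10.0)
    u = random.random()
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c) / beta

def gmpe_ln_pga(m, r_km):
    """Toy GMPE: median ln PGA (g) vs. magnitude and distance."""
    return -4.0 + 1.0 * m - 1.3 * math.log(r_km + 10.0)

def hazard_rate(im_threshold_g, nu=0.05, sigma=0.6, n=100_000):
    """Annual rate of PGA > threshold; nu = rate of M >= m_min events."""
    exceed = 0
    for _ in range(n):
        m = sample_magnitude()
        r = random.uniform(5.0, 100.0)
        ln_pga = gmpe_ln_pga(m, r) + random.gauss(0.0, sigma)
        if ln_pga > math.log(im_threshold_g):
            exceed += 1
    return nu * exceed / n

lam = hazard_rate(0.2)
print(f"annual rate of PGA > 0.2g: {lam:.5f}")
if lam > 0:
    print(f"return period ~ {1.0 / lam:.0f} years")
```

Repeating the calculation over a range of thresholds traces out the hazard curve of annual exceedance rate versus intensity described in the text.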

Earthquake Loss Estimation Models

Earthquake loss estimation models are computational frameworks designed to quantify potential damages, casualties, and economic impacts from seismic events, aiding in risk mitigation, emergency planning, and policy decisions. These models integrate seismic hazard data, such as ground shaking intensity, with exposure inventories like building stocks and population distributions, applying vulnerability functions to predict outcomes.[145] They typically employ empirical or analytical approaches, including fragility curves that relate shaking levels to damage probabilities for structural classes, and capacity spectrum methods to assess nonlinear response.[146] Such models have evolved since the 1990s to incorporate geographic information systems (GIS) for spatial analysis, enabling scenario-based simulations for specific regions.[147]

A prominent example is the HAZUS-MH Earthquake Model, developed by the Federal Emergency Management Agency (FEMA) and the National Institute of Building Sciences (NIBS), which provides standardized estimates of direct physical damage to buildings and infrastructure, indirect economic losses, and social impacts like shelter needs.[145] Released in versions up to 6.1 as of July 2024, HAZUS uses default national inventories aggregated at census block levels but allows user-defined refinements for accuracy.[145] Its methodology sequences ground motion propagation, structural response analysis via equivalent static or dynamic procedures, and loss aggregation, with outputs including repair costs calibrated against historical events like the 1994 Northridge earthquake.[148] Limitations include reliance on generalized vulnerability data, which may overestimate or underestimate losses in non-U.S. contexts without localization.[149]

For rapid post-event assessment, the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system automates fatality and economic loss estimates within 30 minutes of an earthquake's occurrence, drawing on global seismic networks and country-specific exposure models.[150] PAGER maps modified Mercalli intensity (MMI) grids against population densities, applying empirical loss ratios derived from over 200 historical earthquakes, such as scaling capital stock losses by shaking exposure.[151] It issues color-coded alerts—green (low impact) to red (high)—to guide international response, with economic estimates reflecting GDP proxies and fatality models incorporating vulnerability factors like time of day.[152] Validation against events like the 2011 Tohoku earthquake shows reasonable accuracy for magnitudes above 5.5, though uncertainties arise from aftershock inclusion or unmodeled hazards like tsunamis.[153]

Advanced methodologies extend beyond aggregate models to building-specific assessments, using nonlinear time-history analyses or probabilistic frameworks to prioritize retrofits via expected annual losses.[154] For instance, empirical approaches regress observed damages from events like the 2008 Sichuan earthquake against intensity measures, informing hybrid models that blend simulation with post-event data for iterative refinement.[153] These tools underscore causal linkages between shaking amplitude, material ductility, and failure modes, but require high-quality inventories to mitigate biases from outdated census data or idealized fragility assumptions.[155] Overall, while effective for scenario planning, models' predictive fidelity depends on empirical calibration and computational scalability, with ongoing updates addressing epistemic uncertainties through ensemble simulations.[156]
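The fragility-curve mechanics common to such models can be sketched in a few lines: lognormal fragility functions give the probability of reaching each damage state at a given shaking level, and weighting the resulting damage-state probabilities by loss ratios yields a mean loss ratio. The medians, dispersions, and loss ratios below are hypothetical, not values from HAZUS or any published model.

```python
import math

# Sketch of fragility-based loss estimation as described above.
# All medians, dispersions, and loss ratios are hypothetical.

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Damage states: (name, median PGA in g, lognormal dispersion, loss ratio)
DAMAGE_STATES = [
    ("slight",    0.15, 0.6, 0.02),
    ("moderate",  0.30, 0.6, 0.10),
    ("extensive", 0.60, 0.6, 0.40),
    ("complete",  1.00, 0.6, 1.00),
]

def p_exceed(pga, median, beta):
    """P(damage state reached or exceeded | PGA), lognormal fragility."""
    return norm_cdf(math.log(pga / median) / beta)

def mean_loss_ratio(pga):
    """Expected repair cost / replacement cost at a given PGA."""
    probs = [p_exceed(pga, m, b) for _, m, b, _ in DAMAGE_STATES]
    loss = 0.0
    for i, (_, _, _, lr) in enumerate(DAMAGE_STATES):
        p_next = probs[i + 1] if i + 1 < len(probs) else 0.0
        loss += (probs[i] - p_next) * lr   # probability of being IN state i
    return loss

print(f"mean loss ratio at 0.4g: {mean_loss_ratio(0.4):.2%}")
```

Integrating this loss ratio against a site's hazard curve gives the expected annual loss used to prioritize retrofits, as noted above.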

Economic considerations

Benefit-cost analysis frameworks

Benefit-cost analysis (BCA) frameworks in earthquake engineering evaluate the economic efficiency of seismic mitigation measures by quantifying the present value of avoided losses against implementation costs, often yielding benefit-cost ratios (BCRs) exceeding 1 to justify investments.[157] These frameworks typically incorporate probabilistic seismic hazard assessments, fragility curves for structural damage, and loss estimation models to project future earthquake impacts over a structure's lifecycle, discounting future benefits at rates such as 3-7% annually to reflect time value of money.[158] In performance-based earthquake engineering (PBEE), BCA extends to multi-hazard scenarios, comparing retrofit options like base isolation or damping systems against baseline vulnerabilities, with decisions informed by exceedance probabilities rather than deterministic events.[159]

FEMA's BCA methodology, mandated for hazard mitigation grants, applies to seismic retrofits by modeling expected damages from historical or probabilistic events, subtracting post-mitigation losses to derive benefits, and requiring BCRs greater than 1 for funding eligibility.[160] For instance, the agency's toolkit uses software to integrate site-specific ground motion data with building fragility functions, estimating retrofit costs alongside benefits from reduced repair, downtime, and casualty costs, though it has been critiqued for underemphasizing long-tail risks in low-probability, high-impact quakes.[161] Probabilistic extensions, as in Colombia's vulnerability reduction program, employ Monte Carlo simulations to generate loss exceedance curves, revealing that hospital retrofits yielded BCRs of 2.5-4 over 50 years by averting operational disruptions valued at millions per event.[162]

City-scale applications adapt these frameworks to portfolios, factoring in retrofit sequencing and indirect benefits like preserved infrastructure functionality; a 2023 study of urban seismic retrofitting proposed optimizing interventions where BCRs surpassed 3 for unreinforced masonry in high-hazard zones, prioritizing based on annualized loss reductions.[158]

Challenges persist in valuing intangible benefits, such as lives saved—often monetized via value-of-statistical-life estimates around $7-10 million per averted fatality—and addressing epistemic uncertainties in hazard models, which can inflate or deflate BCRs by 20-50% depending on input conservatism.[157] Empirical validations, like NIST reviews, confirm that while deterministic BCAs suffice for frequent events, probabilistic variants better capture rare disasters, with aggregated U.S. analyses showing seismic mitigations returning $13 in benefits per $1 spent across building types.[163]
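The discounting logic behind these BCRs can be sketched in a few lines. The retrofit cost, expected-annual-loss figures, and the 5% discount rate below are illustrative assumptions, not values from any cited study.

```python
def present_value(annual_benefit, rate, years):
    """Present value of a constant annual benefit stream (ordinary annuity)."""
    return annual_benefit * (1.0 - (1.0 + rate) ** -years) / rate

def benefit_cost_ratio(eal_before, eal_after, retrofit_cost, rate=0.05, years=50):
    """BCR = present value of avoided expected annual losses / upfront cost."""
    avoided = eal_before - eal_after
    return present_value(avoided, rate, years) / retrofit_cost

# Illustrative numbers: a retrofit cuts expected annual loss from $120k to $30k
# for a $1.0M upfront outlay, evaluated over a 50-year horizon at 5%.
bcr = benefit_cost_ratio(120_000, 30_000, 1_000_000, rate=0.05, years=50)
print(f"BCR = {bcr:.2f}")  # a ratio greater than 1 would justify the investment
```

Varying the discount rate across the 3-7% range mentioned above materially changes the annuity factor, which is one reason reported BCRs for the same retrofit can differ between studies.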

Retrofit and policy implementation challenges

Retrofitting existing structures for seismic resilience presents significant technical hurdles, particularly for older buildings constructed before modern codes, which often lack adequate ductility or lateral load resistance. These structures may require invasive interventions such as adding shear walls, base isolators, or dampers, but integrating these with existing foundations can be constrained by site-specific soil conditions and architectural features, leading to unforeseen structural incompatibilities.[164][165] For instance, unreinforced masonry or soft-story configurations common in pre-1970s urban buildings demand customized solutions, as standardized approaches fail to account for variability in material degradation or hidden defects exposed only during invasive assessments.[166]

Economic barriers exacerbate these issues, with retrofit costs frequently ranging from 10% to 30% of a building's replacement value, deterring owners due to long payback periods exceeding 50 years in low-seismic zones despite potential life-safety benefits.[167] Benefit-cost analyses indicate that while retrofits can yield returns through reduced downtime and repair expenses—evidenced by post-event data showing unretrofitted structures incurring losses up to 34% of value versus 7-25% for retrofitted ones—upfront financing remains elusive without subsidies, as private owners prioritize immediate cash flows over probabilistic hazard mitigation.[168][169] This reluctance is compounded by disruption during implementation, such as tenant relocation or operational halts, which can amplify indirect costs in densely populated areas.[170]

Policy implementation faces institutional and behavioral obstacles, including fragmented governance where local, state, and federal jurisdictions clash over funding and enforcement, resulting in uneven adoption rates.[171] Inadequate incentives, such as tax credits or grants covering less than 20% of costs in many programs, fail to overcome owner inertia, while mandatory retrofit ordinances often encounter legal pushback or evasion through grandfathering clauses.[167] For example, policies setting minimum retrofit standards, as in New Zealand, can inadvertently discourage comprehensive upgrades by imposing compliance burdens without scaling incentives to performance levels, leading to suboptimal risk reduction.[172] Enforcement gaps persist due to limited inspection resources and political resistance to stringent mandates, as seen in regions where post-disaster audits reveal implementation lags behind engineering advancements, with retrofit uptake below 10% for vulnerable private inventories despite known vulnerabilities.[173][164] Addressing these requires evidence-based policy redesign prioritizing verifiable seismic performance metrics over vague resilience goals, though systemic underinvestment in monitoring continues to hinder progress.[174]

Controversies and debates

Myths, fallacies, and design misconceptions

A persistent misconception in earthquake engineering holds that buildings constructed to modern seismic codes are essentially earthquake-proof, preventing significant damage beyond collapse avoidance. In reality, such codes primarily ensure life safety by limiting collapse risk, but they permit repairable damage or even functional impairment during design-level events; for instance, during the 1994 Northridge earthquake (magnitude 6.7), numerous structures compliant with contemporary California codes at the time suffered substantial structural damage, including the Kaiser Permanente Building, highlighting that code compliance does not equate to minimal disruption or rapid recovery.[175][176]

Another fallacy involves the overemphasis on maximizing energy absorption in structural elements to optimize seismic performance, assuming ideal hysteretic behavior dissipates input energy effectively without residual issues. Priestley critiques this as a myth, noting that real earthquake response involves cumulative damage, P-delta effects, and residual displacements, where alternative hysteresis shapes may yield better outcomes than idealized elastic-perfectly-plastic loops promoted in some design practices.[177] Relatedly, the reliance on elastic spectral analysis as the foundational method for seismic design ignores the nonlinear inelastic behavior dominant in strong ground motions, leading to discrepancies between predicted and actual displacements, as equal displacement or energy rules vary inconsistently across ground motion durations and intensities.[177]

Design misconceptions also extend to common errors in applying seismic provisions, such as neglecting continuous load paths that ensure force transfer through the structure, which can result in unintended weak links during shaking. Engineers sometimes misapply response modification factors (R), underestimating the need for enhanced detailing in ductile systems, or overlook overstrength amplification (Ω₀) for elements like anchor bolts, potentially creating brittle failure modes despite overall ductility intentions.[176] Additionally, the belief that advanced three-dimensional modal analysis inherently provides superior accuracy is fallacious, as it still depends on flawed assumptions like the equal-displacement rule and may overestimate stiffness or drift demands in irregular structures.[177]

The notion that seismic enhancements impose prohibitively high costs is unfounded; studies indicate that achieving enhanced performance ratings adds only 1-10% to initial construction expenses, often offset by reduced retrofit needs or downtime losses, comparable to standard contingency allowances.[175] In value engineering, trimming seismic features to cut upfront costs can compromise long-term resilience, as seen in cases where post-earthquake demolition becomes economical due to irreparable damage.[175] These fallacies underscore the gap between theoretical design ideals and empirical performance, emphasizing displacement-based approaches over force-based ones for aligning with observed failure mechanisms.[177]
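The interplay of the response modification factor R and the overstrength factor Ω₀ mentioned above can be illustrated with a minimal force-based design sketch. The notation is ASCE 7-style, but the numbers (spectral demand, weight, R = 8, Ω₀ = 3) are generic examples for a ductile frame, not values drawn from any code table or the cited sources.

```python
def design_base_shear(sa_elastic, weight, R, importance=1.0):
    """Force-based design: the elastic spectral demand is reduced by the
    response modification factor R, on the premise that ductile detailing
    lets the structure survive the implied inelastic displacements."""
    cs = sa_elastic * importance / R  # seismic response coefficient
    return cs * weight

def brittle_element_force(design_force, omega0):
    """Brittle components (e.g., anchor bolts) cannot rely on ductility, so
    they are checked against the overstrength-amplified force instead of
    the reduced design force."""
    return omega0 * design_force

# Illustrative: Sa = 1.0 g elastic demand, W = 5000 kN, ductile frame.
V = design_base_shear(1.0, 5000.0, R=8)        # reduced design shear: 625 kN
anchor = brittle_element_force(V, omega0=3.0)  # brittle-element check: 1875 kN
print(V, anchor)
```

Skipping the Ω₀ amplification, as the misapplication described above does, would check the anchor against 625 kN rather than 1875 kN, precisely the kind of unintended weak link the text warns about.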

Ethical dilemmas in risk management

In earthquake engineering, ethical dilemmas in risk management arise primarily from the inherent uncertainties in seismic hazards and the need to allocate limited resources amid competing societal priorities. Engineers and policymakers must weigh the probabilistic nature of earthquakes—where events follow power-law distributions with fat tails, making rare large quakes disproportionately impactful—against practical constraints like construction costs and economic development.[178] This often involves accepting non-zero failure probabilities, as designing for the maximum conceivable event (e.g., an M9+ quake in regions with historical M8 limits) would render most structures uneconomical, potentially exacerbating poverty by stifling housing supply in seismically active developing areas. The Earthquake Engineering Research Institute (EERI), in its 1996–2001 ethics project, highlighted these tensions through anonymized case studies, emphasizing that ethical practice requires explicit consideration of trade-offs rather than implicit assumptions of zero risk.[179]

A core dilemma concerns benefit-cost analyses for mitigation measures, where higher safety standards yield diminishing returns. For instance, seismic retrofitting or exceeding base code requirements can achieve societal savings of approximately $4 for every additional $1 invested, but this varies by occupancy class—hospitals and schools justify greater expenditures due to concentrated human occupancy and irreplaceable functions, while residential buildings in low-density areas may not, as marginal cost increases outpace risk reductions.[157] Empirical data from events like the 1994 Northridge earthquake (M6.7, $20–40 billion in damages) underscore that current codes prioritize life safety over collapse prevention for rare events (e.g., 2% probability in 50 years), accepting some economic losses to avoid over-design that could bankrupt owners or delay critical infrastructure.[168] Critics argue this embeds a utilitarian calculus valuing statistical lives, raising equity issues: affluent regions retrofit more readily, leaving vulnerable populations in substandard housing exposed to disproportionate risks.[180]

Communication of seismic risks presents another ethical challenge, as probabilistic assessments (e.g., via probabilistic seismic hazard analysis) are often misinterpreted by non-experts, leading to either complacency or undue alarm. The 2009 L'Aquila earthquake (M6.3, 309 fatalities) exemplifies this: on March 31, 2009, Italy's Major Risks Committee downplayed a swarm of foreshocks as not indicative of a major event, based on scientific consensus against reliable short-term prediction; yet a 2012 court convicted six scientists and an official of manslaughter for failing to convey residual risks clearly, imposing six-year sentences (later reduced and overturned on appeal in November 2014).[181] This case illustrates the dilemma of transparency versus public panic—overstating uncertainties can erode trust in expertise, while understating them risks liability, as causal realism demands acknowledging that no model perfectly captures aleatory variability in fault ruptures. EERI case studies stress evaluating alternatives through stakeholder perspectives, including long-term societal costs of false alarms, which could desensitize populations to genuine threats.[179]

Professional liability further complicates decisions, pitting individual engineers' duty to public safety against client pressures or regulatory ambiguities. In jurisdictions with lax enforcement, certifying marginally compliant structures may enable affordable housing but foreseeably endanger occupants during events exceeding design levels (e.g., the 475-year return period in many codes).[182] Ethical frameworks from bodies like EERI advocate recognizing moral issues early, consulting peers, and documenting rationales, yet systemic incentives—such as liability caps or insurance models that externalize risks—often favor minimal compliance.[178] Ultimately, these dilemmas demand first-principles scrutiny of causal chains, from ground motion attenuation to structural response, ensuring decisions prioritize verifiable reductions in expected fatalities over politically motivated overreach.[179]

Recent advances

Machine learning and simulation tools

Machine learning techniques have increasingly supplemented traditional simulation methods in earthquake engineering by enabling faster approximations of complex seismic responses, particularly through surrogate modeling that reduces computational demands of finite element analyses. Neural networks, such as artificial neural networks (ANNs) and convolutional neural networks (CNNs), have demonstrated superior performance in predicting seismic capacity of structural components by learning from historical simulation data and experimental results. For instance, gradient boosting methods like XGBoost have outperformed other algorithms in fragility curve generation for buildings under seismic loading, allowing engineers to simulate rare events without exhaustive probabilistic runs.[183]

In structural response prediction, deep learning models trained on shake table data and nonlinear time-history analyses provide real-time seismic demand estimates, bypassing the high fidelity but time-intensive nature of physics-based simulations. A 2025 study highlighted physics-informed neural networks (PINNs) that embed governing equations of motion into ML architectures, achieving up to 1000-fold speedups in simulating multi-degree-of-freedom systems while maintaining accuracy within 5% of traditional solvers for moderate earthquakes. These approaches are particularly valuable for high-rise or irregular structures where nonlinear soil-structure interaction complicates conventional tools like OpenSees or ETABS. Reinforcement learning variants have also emerged for optimizing damper placements in simulations, iteratively refining designs based on simulated damage metrics from events like the 1995 Kobe earthquake dataset.[184][185]

Simulation platforms integrating ML, such as the NHERI SimCenter, facilitate hybrid workflows where ML accelerates regional ground motion modeling by interpolating between sparse sensor data and full-waveform simulations. ML-enhanced ground motion prediction equations (GMPEs) incorporate site-specific features via random forests or support vector machines, improving intensity measure forecasts for performance-based design by 20-30% over empirical models in regions with limited recordings.

However, these tools require large, validated datasets to mitigate overfitting, with peer-reviewed benchmarks emphasizing hybrid ML-physics models to ensure causal fidelity in extrapolating beyond training earthquakes. Ongoing challenges include interpretability, as black-box ML predictions demand validation against first-principles mechanics to avoid unphysical artifacts in safety-critical applications.[186][187][188]
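A minimal sketch conveys the surrogate-modeling idea behind these tools: an expensive simulator is sampled to build a training set, and a cheap fitted model then answers new queries without further simulator calls. Here a crude closed-form single-degree-of-freedom (SDOF) estimate stands in for the expensive solver, and an ordinary least-squares line stands in for the neural-network surrogates discussed above; every formula and constant is illustrative, not taken from any cited study or tool.

```python
import math
import random

def sdof_peak_drift(pga, period=1.0, damping=0.05):
    """Stand-in 'expensive simulator': a crude closed-form peak displacement
    estimate Sd ~ Sa * (T / 2*pi)^2 for a linear SDOF oscillator
    (pga in g, result in metres). Real workflows use nonlinear FE runs here."""
    sa = pga / (1.0 + 10.0 * damping)  # illustrative damping reduction
    return sa * 9.81 * (period / (2.0 * math.pi)) ** 2

# Small training set sampled over the intensity range of interest.
random.seed(0)
xs = [random.uniform(0.05, 1.0) for _ in range(200)]
ys = [sdof_peak_drift(x) for x in xs]

# Surrogate: an ordinary least-squares line replaces repeated simulator calls.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def surrogate(pga):
    """Cheap fitted model answering new intensity queries."""
    return slope * pga + intercept

# The surrogate now predicts drift at unseen intensities at negligible cost.
print(surrogate(0.4), sdof_peak_drift(0.4))
```

Production surrogates swap the toy simulator for finite element analyses and the line fit for neural networks or gradient-boosted trees, but the train-once, query-many structure is the same, which is where the reported speedups come from.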

Lessons from the 2023 Turkey–Syria earthquake

The 2023 Kahramanmaraş earthquake sequence, initiated by a magnitude 7.8 rupture on February 6, 2023, along the East Anatolian Fault, exposed systemic vulnerabilities in reinforced concrete (RC) building stock across southeastern Turkey and northern Syria, where over 50,000 fatalities occurred, predominantly from structural collapses.[189][190] Post-event analyses revealed that while Turkey had adopted and updated seismic design codes following the 1999 İzmit earthquake—incorporating ductile detailing and capacity design principles—enforcement remained inconsistent, particularly in provinces like Hatay and Kahramanmaraş, where informal construction and amnesty programs for illegal additions proliferated.[191][192] These lapses amplified damage, as evidenced by the disproportionate failure of mid-rise RC frames built in the 1990s–2010s, which often deviated from code-mandated lateral force resistance through soft first stories or unreinforced infill interactions.[193][194]

A primary lesson underscores the causal link between non-ductile detailing and catastrophic failure modes: numerous collapses stemmed from shear failures in columns and coupling beams due to inadequate transverse reinforcement spacing exceeding code limits (e.g., stirrups spaced beyond 100–150 mm in critical zones), leading to brittle axial-shear interactions under cyclic loading.[195][193] Short-column effects, induced by infill walls or architectural setbacks, triggered premature yielding and torsional irregularities, as observed in Hatay province buildings where ground motions amplified demands by factors of 1.5–2.0 times peak ground acceleration (PGA) values up to 0.8g.[194][196] Empirical data from over 400 collapsed RC structures indicate that strong-beam–weak-column hierarchies violated capacity design, resulting in story mechanisms rather than distributed ductility, a pattern mitigated in compliant buildings that sustained only moderate cracking.[197][198]

Material and construction quality deficits further exacerbated outcomes, with reconnaissance revealing low-strength concrete (compressive strengths below 20 MPa in failed elements) and corroded or undersized rebar, often attributable to poor on-site practices rather than inherent code flaws.[199][191] In Syria's affected regions, pre-existing conflict eroded oversight, compounding issues like foundation failures on soft soils prone to liquefaction, though ground motions rarely exceeded design levels for modern codes (e.g., PGA < 0.4g in many urban centers).[189][200]

Lessons emphasize proactive retrofitting of identified high-risk inventories via techniques like steel jacketing or fiber-reinforced polymer wrapping, prioritizing vulnerability rankings over blanket demolitions, as not all non-compliant structures collapsed uniformly.[198][192] Policy implications highlight the necessity of rigorous permitting, independent inspections, and disincentives for substandard practices, as post-1999 code updates proved effective in low-damage zones with strict adherence, yet amnesty laws post-2010 enabled unvetted expansions that increased vulnerability.[191][201] Rapid recovery efforts risk perpetuating cycles if reconstruction bypasses seismic audits, underscoring causal realism in linking lax governance to amplified losses over geophysical inevitability alone.[192][202] International reconnaissance, including EERI-GEER teams, advocates integrating real-time ground motion data into risk models to refine hazard maps, revealing that the event's supershear rupture propagated unusually far (over 300 km), informing future probabilistic assessments.[189][202]
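The detailing deficiency cited above (hoop spacing far beyond the roughly 100–150 mm limits for critical zones) can be expressed as a simple screening check. The spacing bounds below paraphrase typical special-column hoop provisions of the min(b/4, 6d_b, 150 mm) form; they are an illustrative composite, not a verbatim clause from the Turkish code or any other specific standard.

```python
def max_hoop_spacing(column_width_mm, long_bar_dia_mm):
    """Illustrative code-style upper bound on hoop spacing in a column's
    plastic-hinge zone: min(b/4, 6*d_b, 150 mm). Paraphrased limits for
    screening only, not a clause from any specific code."""
    return min(column_width_mm / 4.0, 6.0 * long_bar_dia_mm, 150.0)

def check_detailing(provided_spacing_mm, column_width_mm, long_bar_dia_mm):
    """Return (passes, limit) for a provided stirrup spacing."""
    limit = max_hoop_spacing(column_width_mm, long_bar_dia_mm)
    return provided_spacing_mm <= limit, limit

# A 400 mm column with 20 mm longitudinal bars: limit = min(100, 120, 150) = 100 mm.
# Stirrups at 200 mm, of the kind observed in many collapsed frames, fail the check.
print(check_detailing(200.0, 400.0, 20.0))  # (False, 100.0)
```

Screening checks of this kind are how reconnaissance teams and retrofit-prioritization programs rank large inventories before committing to detailed analysis of individual buildings.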

References
