Failover
Failover is switching to a redundant or standby computer server, system, hardware component or network upon the failure or abnormal termination of the previously active application,[1] server, system, hardware component, or network in a computer network. Failover and switchover are essentially the same operation, except that failover is automatic and usually operates without warning, while switchover requires human intervention.
History
The term "failover", although probably in use by engineers much earlier, can be found in a 1962 declassified NASA report.[2] The term "switchover" appears in the 1950s[3] in descriptions of '"Hot" and "Cold" Standby Systems', with the current meaning of immediate switchover to a running system (hot) and delayed switchover to a system that needs starting (cold). A 1957 conference proceedings describes computer systems with both Emergency Switchover (i.e. failover) and Scheduled Switchover (for maintenance).[4]
Failover
Systems designers usually provide failover capability in servers, systems or networks requiring high availability and a high degree of reliability.
At the server level, failover automation usually uses a heartbeat system that connects two servers, either through a separate cable (for example, RS-232 serial ports/cable) or a network connection. In the most common design, as long as a regular "pulse" or heartbeat continues between the main server and the second server, the second server will not bring its systems online; however, a few systems actively use all servers and can fail over their work to the remaining servers after a failure. There may also be a third "spare parts" server that has running spare components for "hot" switching to prevent downtime. The second server takes over the work of the first as soon as it detects an alteration in the heartbeat of the first machine. Some systems have the ability to send a notification of failover.
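As a rough sketch of the heartbeat design described above (the address, port, interval, and the promote_to_primary hook are illustrative assumptions, not part of any particular product), the primary could emit a periodic UDP pulse and the standby could bring its systems online after a configurable number of missed pulses:

```python
import socket
import time

HEARTBEAT_ADDR = ("standby.example.net", 9999)  # illustrative standby address
INTERVAL = 2.0          # seconds between pulses
MISSES_ALLOWED = 3      # pulses to miss before taking over

def primary_pulse_loop():
    """Runs on the primary: send a small UDP 'pulse' on a fixed interval."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"pulse", HEARTBEAT_ADDR)
        time.sleep(INTERVAL)

def standby_watch_loop(promote_to_primary):
    """Runs on the standby: bring services online if pulses stop arriving."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", HEARTBEAT_ADDR[1]))
    sock.settimeout(INTERVAL)
    missed = 0
    while True:
        try:
            sock.recvfrom(64)
            missed = 0          # heartbeat arrived; reset the counter
        except socket.timeout:
            missed += 1
            if missed >= MISSES_ALLOWED:
                # Heartbeat lost: take over the primary's workload.
                promote_to_primary()
                return
```

Real heartbeat implementations (for example, cluster membership layers) add authentication, multiple redundant links, and fencing before promotion; this sketch only shows the pulse-and-timeout idea.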
Certain systems intentionally do not fail over entirely automatically, but require human intervention. This "automated with manual approval" configuration runs automatically once a human has approved the failover.
Failback
Failback is the process of restoring a system, component, or service previously in a state of failure back to its original, working state, and returning the standby system from active operation back to standby.
Usage
The use of virtualization software has allowed failover practices to become less reliant on physical hardware through the process referred to as migration, in which a running virtual machine is moved from one physical host to another with little or no disruption in service.
Failover and failback technology are also regularly used in the Microsoft SQL Server database, in which a SQL Server Failover Cluster Instance (FCI) is installed and configured on top of a Windows Server Failover Cluster (WSFC). The SQL Server groups and resources running on the WSFC can be manually failed over to the second node[5] for planned maintenance on the first node, or automatically failed over to the second node in case of any issues on the first node. In the same way, a failback operation can be performed to the first node once the issue is resolved or maintenance is complete.
References
1. For application-level failover, see for example Jayaswal, Kailash (2005). "27". Administering Data Centers: Servers, Storage, And Voice Over IP. Wiley-India. p. 364. ISBN 978-81-265-0688-0. Retrieved 2009-08-07. "Although it is impossible to prevent some data loss during an application failover, certain steps can [...] minimize it."
2. NASA Postlaunch Memorandum Report for Mercury-Atlas, June 15, 1962.
3. Petroleum Engineer for Management, Volume 31, p. D-40.
4. Proceedings of the Western Joint Computer Conference, Macmillan, 1957.
5. "How to failover cluster from one node to another". https://www.dbsection.com/how-to-failover-cluster-from-one-node-to-another/
Failover
Core Concepts
Definition and Purpose
Failover is the process of automatically or manually switching operations from a primary system or component to a redundant backup system upon detection of a failure, thereby maintaining continuous service delivery and minimizing disruptions.[7][8] This mechanism ensures that critical workloads, such as databases, applications, or network services, transfer seamlessly to the standby without significant data loss or user impact.[9] Early commercial implementations of failover appeared in the 1970s within mainframe and minicomputer environments, where companies like Tandem Computers introduced fault-tolerant systems with redundancy to handle hardware failures in mission-critical applications like banking and telecommunications.[10][11]

The primary purpose of failover is to enable high availability (HA) in computing systems, targeting uptime levels such as 99.99% or higher, which translates to no more than about 52 minutes of annual downtime.[3][12] By rapidly redirecting traffic or processing to backups, failover reduces service interruptions to near zero, supporting business continuity in environments where even brief outages can lead to substantial losses.[13] Key benefits include enhanced system reliability through proactive failure handling, preservation of data integrity by synchronizing primary and backup states, and significant cost savings; for instance, a 2024 ITIC report indicates that over 90% of enterprises face hourly downtime costs exceeding $300,000 (approximately $5,000 per minute) due to lost revenue and productivity.[14][15]

Effective failover requires built-in redundancy across hardware (e.g., duplicate servers or storage), software (e.g., clustered applications), or networks (e.g., multiple paths for connectivity), ensuring the backup can assume operations without reconfiguration delays.[16][17] This foundational redundancy forms the basis for HA architectures, distinguishing failover from mere backups by emphasizing real-time operational continuity rather than post-failure recovery.[18]
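The downtime budgets behind such availability targets follow from simple arithmetic; the short illustrative sketch below computes the annual downtime implied by several common "nines" targets, matching the roughly 52 minutes cited above for 99.99%.

```python
# Rough annual downtime budget for common availability targets.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime_min:.1f} min/year of allowed downtime")
```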
Key Components
Failover systems rely on redundant hardware components to ensure continuity during failures. These include sets of matching servers designed for seamless role transfer, where identical or similar hardware across nodes minimizes compatibility issues during switches.[19] Storage arrays, often configured with RAID levels such as RAID 1 for mirroring or RAID 5 for parity-based redundancy, provide fault-tolerant data access by distributing information across multiple disks.[19] Additionally, redundant power supplies paired with uninterruptible power supplies (UPS) protect against outages, allowing systems to maintain operations long enough for failover to occur without interruption.[20]

Software components form the core orchestration layer for failover. Clustering software, such as Pacemaker, manages resource allocation and automatic relocation of services to healthy nodes upon detecting issues.[21] Heartbeat protocols, implemented via tools like Corosync, facilitate node communication and membership verification to coordinate synchronized state across the cluster.[21] Load balancers, exemplified by HAProxy, distribute traffic while supporting failover at layers 4 and 7 to redirect flows dynamically.[21] Virtual IP (VIP) addressing, managed as cluster resources like IPaddr2, enables transparent client redirection by floating the IP to the active node.[21]

Architectural elements underpin the failover infrastructure. Shared storage solutions, including Storage Area Networks (SAN) for block-level access and Network Attached Storage (NAS) for file-level sharing, allow multiple nodes to access the same data pool without conflicts.[19][22] Replication mechanisms ensure data consistency, with synchronous replication mirroring writes in real time for zero data loss in low-latency environments, while asynchronous replication defers synchronization for longer distances at the cost of potential loss.[23] Network redundancy, achieved through multiple Network Interface Cards (NICs) configured in bonding modes, provides failover paths by aggregating links for increased bandwidth and automatic switchover if a link fails.[24]

Monitoring tools integrate as agents within the cluster to track system health. These agents collect metrics such as CPU load, memory usage, and I/O latency, enabling proactive alerts and triggering failover thresholds when anomalies exceed predefined limits.[21] In Pacemaker clusters, for instance, resource agents perform periodic probes to verify service viability based on these indicators.[21]

Integration challenges arise from ensuring component compatibility to eliminate single points of failure. Mismatched hardware or software versions can hinder seamless operation, requiring validation of identical configurations across nodes and thorough testing to confirm no bottlenecks in shared resources like storage or networks.[19][25]
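The synchronous-versus-asynchronous replication trade-off described above can be illustrated with a toy sketch; it is not the protocol of any particular product, and the queue-based standby worker and write() function here are hypothetical. A synchronous write waits for the standby to acknowledge before the commit is confirmed, while an asynchronous write returns immediately and accepts that in-flight records could be lost on failover.

```python
import queue
import threading
import time

standby_log = []                 # what the standby has durably applied
replication_q = queue.Queue()    # in-flight changes awaiting replication

def standby_worker():
    """Hypothetical standby: applies replicated writes with some delay."""
    while True:
        record = replication_q.get()
        time.sleep(0.05)         # simulated network + apply latency
        standby_log.append(record)
        replication_q.task_done()

threading.Thread(target=standby_worker, daemon=True).start()

def write(record, synchronous=True):
    """Accept a write on the primary and replicate it to the standby."""
    replication_q.put(record)
    if synchronous:
        # Synchronous: acknowledge the commit only once the standby has
        # applied the change (RPO ~ 0), at the cost of commit latency.
        replication_q.join()
    # Asynchronous: return immediately; a failover may lose whatever is
    # still sitting in replication_q.
    return "committed"

write({"id": 1, "balance": 100}, synchronous=True)
write({"id": 2, "balance": 250}, synchronous=False)
print("standby has:", standby_log)   # record 2 may not have arrived yet
```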
Implementation Mechanisms
Detection and Monitoring
Detection and monitoring in failover systems involve identifying failures that necessitate switching to redundant resources, ensuring minimal disruption to service availability. These processes rely on continuous surveillance of system health to detect anomalies promptly, triggering failover only when confirmed. Common detectable failure types include hardware faults such as disk or node failures, which can render primary components inoperable; software crashes like application hangs or process terminations; network outages manifested as packet loss or connectivity disruptions; and overload conditions where resource utilization exceeds sustainable levels, potentially leading to degraded performance.[26][27][28]

Monitoring techniques encompass heartbeat signals, where primary and secondary systems exchange periodic pings (typically every 10 seconds) to verify operational status, with failure declared after missing a configurable number of signals; polling-based checks using protocols like SNMP to query metrics such as CPU load or interface status at regular intervals; and event logging via syslog to analyze logs for indicators of issues like error spikes or unexpected shutdowns.[27][28][29][30]

Threshold-based detection defines failure criteria through predefined limits, such as response times exceeding 5 seconds or error rates surpassing 10%, enabling proactive identification of degrading conditions before total collapse. Tools like Nagios employ plugin-based polling to monitor cluster states and alert on threshold breaches, while Prometheus scrapes metrics from endpoints to detect anomalies via rules evaluating latency or error ratios in real time.[31][32][33]

To mitigate false positives, which could induce unnecessary failovers and introduce instability, systems implement multi-factor confirmation, combining heartbeat loss with supplementary checks like resource utilization queries or socket connection validations to ensure accurate failure verification.[28][26] Detection windows typically range from 1 to 30 seconds, balancing rapid response against the risk of erroneous triggers; for instance, Oracle WebLogic declares failure after 30 seconds of missed heartbeats, while tuned configurations in systems like Oracle Data Guard achieve detection in 6 to 15 seconds on reliable networks.[28][34][35]
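A minimal sketch of threshold-based detection with multi-factor confirmation, as described above, might look like the following. The thresholds mirror the illustrative figures in the text, and poll_metrics() and confirm_failure() are stubs standing in for a real monitoring backend (such as a Prometheus query or an SNMP poll) and an independent probe; none of this reflects any vendor's defaults.

```python
import time

RESPONSE_TIME_LIMIT = 5.0     # seconds; illustrative threshold from the text
ERROR_RATE_LIMIT = 0.10       # fraction of failed requests
CONSECUTIVE_BREACHES = 3      # confirm before triggering failover
POLL_INTERVAL = 5             # seconds between polls

def poll_metrics():
    """Stub: return (avg_response_time, error_rate) for the primary.
    A real deployment would query a monitoring system here."""
    return 0.2, 0.01

def confirm_failure():
    """Stub secondary check (e.g. a direct TCP or health-endpoint probe)
    used to rule out false positives before declaring failure."""
    return False

def watch_primary(trigger_failover):
    """Declare failure only after repeated threshold breaches plus an
    independent confirmation, so one noisy sample cannot cause a failover."""
    breaches = 0
    while True:
        response_time, error_rate = poll_metrics()
        if response_time > RESPONSE_TIME_LIMIT or error_rate > ERROR_RATE_LIMIT:
            breaches += 1
        else:
            breaches = 0
        if breaches >= CONSECUTIVE_BREACHES and confirm_failure():
            trigger_failover()
            return
        time.sleep(POLL_INTERVAL)
```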
Switching and Recovery
The switching process in failover begins with quiescing the primary system to halt new incoming requests and ensure no ongoing transactions are interrupted mid-execution, preventing data inconsistencies during the transition.[36] This step typically involves stopping application updates or services on the primary node. Next, control is transferred by reassigning the virtual IP (VIP) address from the primary to the standby node, allowing clients to seamlessly connect to the new primary without reconfiguration.[37] The standby resources are then activated, starting services and mounting necessary storage or databases on the secondary node. Finally, operations resume on the secondary, with the system processing new requests as the new primary.[38]

Recovery techniques during failover emphasize maintaining system integrity through state synchronization, often achieved using database replication logs to apply pending transactions from the primary to the standby before activation.[39] Data consistency is verified via checksums, which compare hash values of data blocks across nodes to detect corruption or desynchronization.[40] If the failover encounters issues, such as incomplete synchronization, rollback options allow reverting to the previous configuration by aborting the transition and restoring the original primary state, minimizing further disruption.[41]

Automation levels in failover range from scripted manual interventions, where administrators execute predefined commands, to fully automated systems like Pacemaker in Linux clusters, which detect failures and orchestrate the entire process without human input.[37] Fully automated setups achieve failover times typically between 5 and 60 seconds, depending on configuration, network latency, and resource complexity.[42]

Potential disruptions include split-brain scenarios, where multiple nodes simultaneously attempt failover due to communication failures, risking data corruption from concurrent writes. These are prevented through fencing mechanisms, such as STONITH (Shoot The Other Node In The Head), which isolates the suspected failed node by powering it off or disconnecting it from shared resources.[38]

Success in switching and recovery is measured by mean time to recovery (MTTR), the average duration from failure detection to full service restoration, serving as a key performance indicator for high availability systems.[43]
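A highly simplified orchestration of the four switching steps above might look like the sketch below. Every function body is a placeholder (the real work would be done by cluster tooling such as Pacemaker or vendor-specific commands), and the VIP and node name are made up, so this is an outline of the sequence rather than a working script.

```python
import logging

log = logging.getLogger("failover")

def quiesce_primary():
    """Step 1: stop accepting new requests and drain in-flight work
    on the primary (placeholder for stopping services/app updates)."""
    log.info("quiescing primary")

def move_virtual_ip(vip, new_owner):
    """Step 2: reassign the floating VIP to the standby node
    (placeholder for e.g. a cluster resource move)."""
    log.info("moving VIP %s to %s", vip, new_owner)

def activate_standby():
    """Step 3: mount storage, apply pending replication logs, and
    start services on the standby (placeholders)."""
    log.info("activating standby resources")

def resume_operations():
    """Step 4: start serving traffic from the new primary."""
    log.info("resuming operations on new primary")

def fail_over(vip="203.0.113.10", standby="node2"):
    """Run the switchover sequence; on any error, abort and roll back
    rather than leave the cluster half-switched."""
    try:
        quiesce_primary()
        move_virtual_ip(vip, standby)
        activate_standby()
        resume_operations()
    except Exception:
        log.exception("failover failed; reverting to original primary")
        raise
```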
Types of Failover
Active-Passive Configurations
In active-passive failover configurations, the primary node or system manages all incoming traffic and operational workloads, while the secondary node remains in an idle or standby state, continuously mirroring data from the primary through replication mechanisms but not processing requests until a failure triggers activation.[44] This setup ensures redundancy without concurrent resource utilization on the secondary, often employing shared storage or asynchronous replication to maintain data consistency.[45] For instance, in cold standby scenarios, the secondary system relies on periodic snapshots to capture the primary's state, allowing it to initialize quickly upon failover without requiring real-time synchronization.[46]

These configurations offer several advantages, including lower overall resource overhead since the secondary node does not engage in dual processing or load handling, which reduces hardware and licensing costs.[47] Synchronization is simpler, as there are no conflicts from simultaneous updates, making it easier to implement and maintain compared to more complex topologies.[48] They are particularly suitable for non-real-time applications where brief interruptions are tolerable, providing a straightforward path to high availability without the need for advanced load balancing.[44]

However, active-passive setups have notable disadvantages, such as potential data lag from asynchronous replication, which can result in a recovery point objective (RPO) of several minutes depending on replication intervals and network conditions.[49] Recovery times are often longer due to startup delays on the passive node, including mounting shared storage, initializing services, and processing any queued updates, potentially taking 5-10 minutes in automated clusters.[45]

Common examples include traditional database clustering, such as MySQL's master-slave replication, where the master handles all writes and reads while slaves asynchronously replicate data in a passive role, ready for promotion via manual or scripted failover if the master fails.[50] In web server environments, active-passive failover can be achieved through DNS-based switching, where tools monitor primary server health and adjust time-to-live (TTL) values to propagate traffic to a secondary server only upon detection of failure.[51]

Implementation considerations emphasize the choice between manual intervention, such as administrator-promoted failover in smaller setups, and automatic activation using cluster management software like IBM HACMP or Microsoft Cluster Server, which detect failures and orchestrate the switch.[45] These configurations are especially cost-effective for small-scale deployments, as they minimize idle resource waste while providing reliable redundancy without the overhead of always-on systems.[48]
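The DNS-based switching approach mentioned above can be sketched as a simple monitor that repoints a low-TTL record at the standby when the primary stops answering health checks. The health-check URL, record name, addresses, and the update_dns() function are hypothetical; a real deployment would call a DNS provider's API, and failback is often left as a manual step.

```python
import time
import urllib.request

HEALTH_URL = "http://primary.example.net/healthz"   # hypothetical endpoint
RECORD = "www.example.net"
PRIMARY_IP, STANDBY_IP = "198.51.100.10", "198.51.100.20"
CHECK_INTERVAL = 30        # seconds; keep the DNS TTL similarly low

def primary_healthy(timeout=5):
    """Return True if the primary's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def update_dns(record, ip):
    """Placeholder: push the A-record change via the DNS provider's API."""
    print(f"pointing {record} at {ip}")

def monitor():
    active = PRIMARY_IP
    while True:
        healthy = primary_healthy()
        if not healthy and active == PRIMARY_IP:
            update_dns(RECORD, STANDBY_IP)      # fail over to the standby
            active = STANDBY_IP
        elif healthy and active == STANDBY_IP:
            update_dns(RECORD, PRIMARY_IP)      # fail back (often done manually)
            active = PRIMARY_IP
        time.sleep(CHECK_INTERVAL)
```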
Active-Active Configurations
In active-active configurations, both primary and secondary systems operate concurrently, processing traffic and workloads simultaneously to provide high availability and load distribution. A load balancer or routing mechanism, such as DNS-based policies or application delivery controllers, directs incoming requests across the active nodes using algorithms like round-robin or least-connections to ensure even distribution. Upon detecting a node failure through health checks or monitoring probes, the system automatically redistributes the affected load to the remaining healthy nodes without interrupting service.[44][52][53]

These setups offer several key advantages, including the potential for zero-downtime failover since no cold start is required for the surviving nodes, which continue handling traffic seamlessly. Resource utilization is maximized as both nodes contribute to capacity, potentially doubling the overall throughput compared to single-node operations. Additionally, they enhance scalability by allowing horizontal expansion through additional active nodes, supporting higher traffic volumes in demanding environments.[44][54][53]

However, active-active configurations introduce complexities in data synchronization to prevent conflicts in shared resources, such as databases or caches, where concurrent writes from multiple nodes can lead to inconsistencies if not properly managed. They also incur higher operational costs due to the need for duplicate infrastructure and ongoing replication overhead. Furthermore, there is an elevated risk of cascading failures if synchronization fails or if a widespread issue affects multiple nodes simultaneously.[55][56][57]

Practical examples include load-balanced web server farms using NGINX Plus, where multiple instances handle HTTP requests with integrated health checks to trigger failover and redistribute traffic dynamically. Another is Redis Enterprise's Active-Active databases, which enable multi-region replication for distributed caching, ensuring data availability across geographically dispersed nodes.[52]

Key implementation considerations involve maintaining session persistence through sticky sessions, where load balancers route subsequent requests from the same client to the original node using cookies or IP affinity to preserve stateful application data. For conflict resolution in shared data scenarios, protocols like quorum voting ensure consensus among nodes, requiring a majority agreement on updates to avoid split-brain conditions and maintain consistency.[58][59]
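A toy round-robin pool illustrates the redistribution behavior described above: all backends serve traffic, and when a health check marks one unhealthy, new requests simply flow to the survivors. The backend addresses and the external health-checking loop are assumptions for illustration; production load balancers such as HAProxy or NGINX implement this (plus sticky sessions and draining) internally.

```python
import itertools

class ActiveActivePool:
    """Toy round-robin pool: every node serves traffic, and requests are
    redistributed to the surviving nodes when a health check fails."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._rr = itertools.cycle(self.backends)

    def mark(self, backend, healthy):
        """Called by a health-checking loop (probe code not shown)."""
        if healthy:
            self.healthy.add(backend)
        else:
            self.healthy.discard(backend)

    def pick(self):
        """Return the next healthy backend in round-robin order."""
        if not self.healthy:
            raise RuntimeError("no healthy backends available")
        while True:
            candidate = next(self._rr)
            if candidate in self.healthy:
                return candidate

pool = ActiveActivePool(["10.0.0.1", "10.0.0.2"])
pool.mark("10.0.0.2", healthy=False)   # health probe failed on node 2
print(pool.pick())                     # all traffic now goes to 10.0.0.1
```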
Related Processes
Failback Procedures
Failback refers to the process of restoring operations to the original primary system after a failover event has been resolved, ensuring minimal disruption and data integrity. This reversion is critical in high availability setups to return to the preferred primary configuration without introducing new points of failure. In cloud environments like Azure, failback involves reinstating the primary instance and redirecting traffic back to it once stability is confirmed. Recent advancements include AI-driven automation for predictive failback and data reconciliation in multi-cloud setups.[60][61]

The failback procedure typically follows a structured sequence of steps to mitigate risks. First, the recovery of the primary system is verified by confirming its health, resource availability, and operational readiness through health checks and diagnostic tools. Second, the state is synchronized from the secondary system, often using replication mechanisms to apply any changes that occurred during the failover period. Third, the primary is tested under load to simulate production traffic and validate performance, stability, and error handling. Finally, traffic is switched back to the primary, updating routing configurations such as DNS records or load balancers to complete the reversion. In AWS Elastic Disaster Recovery, this process includes booting a Failback Client on the source server to facilitate replication and monitoring progress via the console.[62][63]

Failback can be executed manually or automatically, each with distinct advantages and drawbacks. Manual failback provides greater control, allowing administrators to perform thorough validations and intervene if issues arise, which is recommended in scenarios requiring custom assessments; however, it increases downtime due to human involvement. Automatic failback, often scripted for reversion, accelerates the process and reduces operational overhead, but carries risks such as unnecessary switches if the primary experiences intermittent issues, potentially leading to thrashing or oscillating failovers. Azure best practices advocate automatic failover paired with manual failback to balance speed and caution.[64][65]

Data reconciliation is a core aspect of failback, addressing divergent states between primary and secondary systems that arise from operations during failover. Techniques include leveraging transaction logs to replay committed changes and applying differential synchronization to identify and merge only the discrepancies, thereby minimizing data loss or inconsistencies. In multi-region setups, this ensures transactional consistency across data stores, often requiring automated tools to compare datasets and resolve conflicts. AWS guidance emphasizes replicating data written to recovery instances back to the primary before final switchover.[66][63]

Best practices for failback emphasize timing and oversight to optimize reliability. Procedures should be scheduled during low-traffic periods to limit user impact, with comprehensive monitoring for post-reversion stability, including metrics on latency, error rates, and resource utilization. Regular testing of failback runbooks in non-production environments helps refine these processes, and avoiding automatic failback prevents rapid oscillations.
In Azure multi-region deployments, business readiness confirmation, including communication plans, is essential alongside automated monitoring tools.[67][65]

Challenges in failback include achieving zero-downtime transitions and managing complexity in state synchronization. Analogous to blue-green deployments, where traffic shifts seamlessly between environments, failback requires parallel validation of the primary to avoid interruptions, but this can introduce coordination overhead and potential for incomplete reconciliations if not orchestrated properly. Risks of data divergence or configuration mismatches further complicate reversion, particularly in distributed systems.[68]
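The four-step failback sequence described in this subsection can be sketched as a simple runbook driver. Every step function below is a placeholder for site-specific checks and tooling (health endpoints, replication commands, DNS or load-balancer updates), so this is an outline under stated assumptions rather than a working procedure.

```python
def primary_recovered():
    """Step 1 placeholder: health checks and diagnostics on the repaired primary."""
    return True

def resync_from_secondary():
    """Step 2 placeholder: replay changes made on the secondary during failover."""

def primary_passes_load_test():
    """Step 3 placeholder: drive representative traffic at the primary and
    check latency and error-rate thresholds."""
    return True

def switch_traffic_to_primary():
    """Step 4 placeholder: update DNS records or load-balancer targets."""

def fail_back():
    """Run the failback runbook, refusing to proceed past a failed step."""
    if not primary_recovered():
        raise RuntimeError("primary not healthy; staying on secondary")
    resync_from_secondary()
    if not primary_passes_load_test():
        raise RuntimeError("primary failed load test; staying on secondary")
    switch_traffic_to_primary()
```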
Testing and Validation
Testing failover mechanisms is essential to ensure reliability in high-availability systems without disrupting production environments. Common approaches include chaos engineering, which involves deliberately injecting faults to observe system resilience; scripted simulations that automate failure scenarios in isolated test beds; and dry-run modes that preview failover actions without executing them. For instance, chaos engineering tools like Netflix's Chaos Monkey randomly terminate virtual machine instances to validate automatic failover and recovery processes. Recent developments incorporate AI to automate and predict failure scenarios in chaos experiments, enhancing efficiency as of 2025.[69][70] Scripted simulations, such as those using sandboxed virtual machines, allow teams to replicate outages and measure response without impacting live data.[71] Dry-run modes enable pre-execution previews of failover commands, helping identify configuration issues in advance.[72]

Validation of failover effectiveness relies on key metrics that quantify performance and risk tolerance. The failover success rate, typically targeted above 99%, measures the percentage of simulated failures that result in seamless switching to backup resources.[67] Recovery Time Objective (RTO) assesses the duration to restore operations, often aiming for under 1 minute in modern setups to minimize downtime.[73] Recovery Point Objective (RPO) evaluates acceptable data loss, with goals like less than 5 seconds for near-real-time replication systems.[74] These metrics guide iterative refinements, ensuring systems meet service-level agreements.

Specialized tools and frameworks facilitate structured testing. Veritas Cluster Server includes a Cluster Simulator for modeling failover scenarios in a controlled environment, allowing administrators to test resource switching without hardware risks.[75] The AWS Fault Injection Simulator (FIS) supports chaos experiments by simulating faults like instance failures or network disruptions, enabling validation of failover in cloud infrastructures.[76] These tools integrate with monitoring systems to capture real-time data during tests.

Effective planning involves conducting regular drills, such as quarterly simulations, to maintain preparedness and comply with standards.[77] Each test should be documented, detailing outcomes, deviations from expectations, and action items for improvements, fostering a cycle of continuous enhancement. Tests may briefly incorporate failback procedures to verify full recovery cycles.[78]

A frequent pitfall in failover testing is neglecting edge cases, such as multi-node failures where multiple components fail simultaneously, leading to cascading issues not captured in single-point simulations.[79] Similarly, overlooking network partitions, where communication between nodes is severed, can result in undetected inconsistencies or split-brain scenarios during recovery.[79] Addressing these requires comprehensive scenario planning to build robust resilience.
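A minimal chaos-style drill in the spirit described above might terminate one node at random and then measure how long the service takes to answer again. The node list, RTO target, and the kill_node() and service_responding() hooks are hypothetical placeholders for whatever infrastructure API and health endpoint a given environment exposes; dedicated tools such as AWS FIS or Chaos Monkey provide this kind of fault injection in a more controlled way.

```python
import random
import time

NODES = ["node1", "node2"]          # illustrative cluster members
RTO_TARGET = 60.0                   # seconds; acceptance threshold for the drill

def kill_node(node):
    """Placeholder: terminate or isolate the chosen node via the
    infrastructure API (cloud SDK, hypervisor, fault-injection tool)."""

def service_responding():
    """Placeholder: probe the service's public endpoint."""
    return True

def run_failover_drill():
    victim = random.choice(NODES)
    kill_node(victim)
    start = time.monotonic()
    # Wait for the surviving node(s) to take over, then record the
    # observed recovery time and compare it to the RTO target.
    while not service_responding():
        time.sleep(1)
    recovery_time = time.monotonic() - start
    verdict = "within" if recovery_time <= RTO_TARGET else "exceeds"
    print(f"killed {victim}; service recovered in {recovery_time:.1f}s ({verdict} RTO target)")
    return recovery_time
```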
Applications and Use Cases
In Database Systems
In database systems, failover presents unique challenges centered on maintaining ACID (Atomicity, Consistency, Isolation, Durability) properties during the transition from a primary to a standby instance. Ensuring atomicity requires careful handling of ongoing transactions, often involving rollbacks of partially committed operations to prevent inconsistencies across replicas.[80] For instance, in systems like SQL Server, the transaction log architecture guarantees durability by logging changes before commit, allowing rollbacks during failover if the primary fails mid-transaction.[80] Isolation is preserved through mechanisms like locking and row versioning, which prevent dirty reads during the switch, though read/write splits, where reads are offloaded to replicas, must be reconfigured to avoid stale data exposure.[81] Oracle databases address these via redo log validation on standbys, ensuring no corrupted blocks propagate during failover.[82]

Key techniques for database failover include log shipping, block-level replication, and multi-master setups. Log shipping, as implemented in SQL Server's Always On Availability Groups, involves the primary replica transmitting transaction log records to secondaries, enabling synchronous or asynchronous synchronization to minimize data loss during failover.[83] In Oracle Data Guard, replication primarily uses redo log shipping for physical standbys, maintaining block-for-block identical copies, with additional block-level corruption detection to repair issues automatically during recovery.[82] Multi-master replication, such as Oracle GoldenGate's active-active configuration, allows updates at multiple sites using asynchronous or synchronous propagation, resolving conflicts via methods like timestamp priority to ensure eventual convergence post-failover.[84]

Representative examples illustrate these approaches in practice. PostgreSQL employs streaming replication to send write-ahead log (WAL) records from the primary to standbys, supporting promotion via tools like pg_ctl promote during failover, which restores available WAL for consistency.[85] In MongoDB, replica sets within sharded clusters handle failover through elections, where a secondary is promoted to primary upon detecting primary unavailability, pausing writes briefly (typically ~12 seconds) while maintaining read availability on secondaries.[86]
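From the application side, a failover such as the PostgreSQL promotion or MongoDB election described above usually appears as a dropped connection followed by a short window in which writes fail. A generic retry-with-backoff wrapper along the following lines is one common way to ride out that window; connect_to_database() and ConnectionLostError are hypothetical stand-ins for a real driver's connect call and error class, not any specific API.

```python
import time

class ConnectionLostError(Exception):
    """Stand-in for the exception a real database driver raises when the
    primary goes away mid-failover."""

def connect_to_database():
    """Stand-in for the driver's connect call; after failover it should be
    directed (via DNS, a VIP, or a connection string listing both hosts)
    at whichever node is currently primary."""
    raise NotImplementedError

def run_with_failover_retry(operation, attempts=5, base_delay=1.0):
    """Retry an operation across a failover window with exponential backoff."""
    for attempt in range(attempts):
        try:
            conn = connect_to_database()
            return operation(conn)
        except ConnectionLostError:
            # The old primary is gone or the new one is not ready yet;
            # back off and retry against the (newly promoted) primary.
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError("database still unavailable after failover retries")
```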
Performance impacts of failover techniques often involve added latency in synchronous replication to uphold strong consistency, with commit times ranging from 50-150 ms in distributed systems like Google's Spanner due to cross-replica acknowledgments.[87] Trade-offs arise with eventual consistency models, which prioritize availability during partitions by allowing temporary divergences—resolvable later via reconciliation—but at the risk of brief inconsistencies, as formalized in the PACELC theorem for balancing latency and consistency in normal and failure scenarios.[88] Asynchronous modes reduce this overhead but may introduce lag exceeding 1 second if the primary fails before shipping logs.[85]
Post-failover recovery strategies frequently leverage point-in-time recovery (PITR) to align datasets precisely. In PostgreSQL, PITR combines a base backup with replaying archived WAL files to any timestamp since the backup, enabling restoration of the new primary to match the failed one's state without full resynchronization.[89] This approach ensures minimal data loss and supports rollback to a consistent point, though it requires uninterrupted WAL archiving for effectiveness.[89]
