from Wikipedia
Version 7 Unix: /etc listing, showing init and rc
Version 7 Unix: contents of an /etc/rc Bourne shell script

In Unix-like computer operating systems, init (short for initialization) is the first process started during booting of the operating system. Init is a daemon process that continues running until the system is shut down. It is the direct or indirect ancestor of all other processes and automatically adopts all orphaned processes. Init is started by the kernel during the booting process; a kernel panic will occur if the kernel is unable to start it or if it dies for any reason. Init is typically assigned process identifier 1.

In Unix systems such as System III and System V, the design of init diverged from the functionality provided by the init in Research Unix and its BSD derivatives. Up until the early 2010s,[1][failed verification] most Linux distributions employed a traditional init that was somewhat compatible with System V, while some distributions such as Slackware used BSD-style startup scripts, and others such as Gentoo had their own customized versions.

Since then, several additional init implementations have been created, attempting to address design limitations in the traditional versions. These include launchd, the Service Management Facility, systemd, runit and OpenRC.

Research Unix-style/BSD-style


Research Unix init runs the initialization shell script located at /etc/rc,[2] then launches getty on terminals under the control of /etc/ttys.[3] There are no runlevels; the /etc/rc file determines what programs are run by init. The advantage of this system is that it is simple and easy to edit manually. However, new software added to the system may require changes to existing files that risk producing an unbootable system.
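The flavor of such a script can be conveyed with a short sketch. This is an illustrative reconstruction in the spirit of the V7 file pictured above, not a verbatim copy; the paths and daemons are representative:

```sh
# Illustrative sketch of a Research Unix style /etc/rc
# (not a verbatim V7 file; paths and daemons are representative)
rm -f /etc/mtab        # discard the stale mount table from the previous boot
/etc/update &          # daemon that periodically flushes the buffer cache to disk
/etc/cron &            # clock daemon for scheduled jobs
echo "Multi-user system ready."
```

Because everything lives in one sequential script, adding a service means editing this file directly, which is what the rc.local convention described below was meant to avoid.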

BSD init was, prior to 4.3BSD, the same as Research UNIX's init;[4][5] in 4.3BSD, it added support for running a windowing system such as X on graphical terminals under the control of /etc/ttys.[6][7] To remove the requirement to edit /etc/rc, BSD variants have long supported a site-specific /etc/rc.local file that is run in a sub-shell near the end of the boot sequence.

A fully modular system was introduced with NetBSD 1.5 and ported to FreeBSD 5.0, OpenBSD 4.9 and successors. This system executes scripts in the /etc/rc.d directory. Unlike System V's script ordering, which is derived from the filename of each script, this system uses explicit dependency tags placed within each script.[8] The order in which scripts are executed is determined by the rcorder utility based on the requirements stated in these tags.
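A typical rc.d script declares its place in the boot order with comment tags that rcorder reads. A minimal sketch, using a hypothetical service named `exampled` and following the BSD rc.subr conventions, looks like:

```sh
#!/bin/sh
#
# PROVIDE: exampled
# REQUIRE: NETWORKING
# BEFORE:  LOGIN

. /etc/rc.subr            # framework helpers shipped with the BSD rc system

name="exampled"           # hypothetical daemon name
rcvar="exampled_enable"   # enabled via exampled_enable=YES in /etc/rc.conf
command="/usr/local/sbin/${name}"

load_rc_config "$name"
run_rc_command "$1"
```

The PROVIDE/REQUIRE/BEFORE tags are data for rcorder, not executed code; rcorder topologically sorts all scripts so this one runs after networking is up and before login services start.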

SysV-style

sysv-rc-conf, a TUI utility that selects which SysV-style init scripts will be run in each runlevel

When compared to its predecessors, AT&T's UNIX System III introduced a new style of system startup configuration,[9] which survived (with modifications) into UNIX System V and is therefore called the "SysV-style init".

At any moment, a running System V is in one of a predetermined number of states, called runlevels. At least one runlevel is the normal operating state of the system; typically, other runlevels represent single-user mode (used for repairing a faulty system), system shutdown, and various other states. Switching from one runlevel to another causes a per-runlevel set of scripts to be run, which typically mount filesystems, start or stop daemons, start or stop the X Window System, shut down the machine, etc.

Runlevels


The runlevels in System V describe certain states of a machine, characterized by the processes and daemons running in each of them. In general, there are seven runlevels, of which three are considered "standard", as they are essential to the operation of a system:

  • 0 – Turn off
  • 1 – Single-user mode (also known as S or s)
  • 6 – Reboot

Aside from these standard ones, Unix and Unix-like systems treat runlevels somewhat differently. The common denominator, the /etc/inittab file, defines what each configured runlevel does in a given system.
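The common format can be conveyed with a short /etc/inittab fragment. The entries below are illustrative, and the rc script paths follow one common Linux layout; locations vary by distribution:

```
# Format: id:runlevels:action:process
id:3:initdefault:
# one-time system initialization script
si::sysinit:/etc/rc.d/rc.sysinit
# enter runlevel 3: run its scripts and wait for completion
l3:3:wait:/etc/rc.d/rc 3
# keep a login prompt alive on tty1; init restarts getty whenever it exits
1:2345:respawn:/sbin/getty 38400 tty1
```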

Default runlevels

Operating system Default runlevel
AIX 5
antiX 2
Gentoo Linux 3[10]
HP-UX 3 (console/server/multiuser) or 4 (graphical)
Linux From Scratch 3
Slackware Linux 3
Solaris / illumos 3[11]
UNIX System V Releases 3.x, 4.x 2
UnixWare 7.x 3

On Linux distributions defaulting to runlevel 5, that runlevel invokes a multiuser graphical environment running the X Window System, usually with a display manager like GDM or KDM. However, the Solaris and illumos operating systems typically reserve runlevel 5 to shut down and automatically power off the machine.

On most systems, all users can check the current runlevel with either the runlevel or who -r command.[12] The root user typically changes the current runlevel by running the telinit or init commands. The /etc/inittab file sets the default runlevel with the :initdefault: entry.

On Unix systems, changing the runlevel is achieved by starting only the missing services, as each level defines only those that are started or stopped.[citation needed] For example, changing a system from runlevel 3 to 4 might only start the local X server. Going back to runlevel 3 would stop it again.
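The "start only what is missing" rule can be sketched in POSIX shell. The per-runlevel service lists here are hypothetical; a real init derives them from its configuration:

```sh
#!/bin/sh
# Hypothetical per-runlevel service sets: runlevel 4 adds xdm on top of runlevel 3
rl3_services="syslogd inetd sshd"
rl4_services="syslogd inetd sshd xdm"

# Moving from runlevel 3 to 4: start only the services not already running
to_start=""
for svc in $rl4_services; do
  case " $rl3_services " in
    *" $svc "*) ;;                     # already running under runlevel 3, leave it alone
    *) to_start="$to_start$svc " ;;    # new in runlevel 4, must be started
  esac
done
echo "starting: $to_start"
```

Running this prints only `xdm`, the single service present in runlevel 4 but not 3; the reverse transition would compute the same difference and stop it.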

Other implementations


Traditionally, one of the major drawbacks of init is that it starts tasks serially, waiting for each to finish loading before moving on to the next. When startup processes become blocked on input/output (I/O), this can result in long delays during boot. Speeding up I/O, e.g. by using SSDs, may shorten the delays, but it does not address the root cause.

Various efforts have been made to replace the traditional init daemons to address this and other design problems, including:

  • BootScripts in GoboLinux
  • busybox-init, suited to embedded operating systems, used by Alpine Linux (before being replaced with OpenRC), SliTaz 5 (Rolling), Tiny Core Linux, and VMware ESXi, and used by OpenWrt before it was replaced with procd
  • Dinit, a service manager and init system.[13]
  • Epoch, a single-threaded Linux init system focused on simplicity and service management[14]
  • ginitd, a software package that consists of an init system and a service management system[15]
  • Initng, a full replacement of init designed to start processes asynchronously
  • launchd, a replacement for init in Darwin and Darwin-based operating systems such as macOS and iOS starting with Mac OS X v10.4 (it launches SystemStarter to run old-style 'rc.local' and SystemStarter processes)
  • OpenRC, a process spawner that utilizes system-provided init, while providing process isolation, parallelized startup, and service dependency; used by Alpine Linux, Gentoo and its derivatives, and available as an option in Devuan and Artix Linux.
  • runit, a cross-platform full replacement for init with parallel starting of services, used by default in Void Linux[16]
  • Sun Service Management Facility (SMF), a complete replacement/redesign of init from the ground up in illumos/Solaris starting with Solaris 10, but launched as the only service by the original System V-style init
  • Shepherd, the GNU service and daemon manager which provides asynchronous, dependency-based initialisation; written in Guile Scheme and meant to be interactively hackable during normal system operation[17]
  • s6, a software suite that includes an init system[18][19]
  • systemd, a software suite and full replacement for init on Linux that includes an init daemon with concurrent starting of services, a service manager, and other features. Used by Debian (replacing SysV init) and Ubuntu, among other popular Linux distributions.
  • SystemStarter, a process spawner started by the BSD-style init in Mac OS X prior to Mac OS X v10.4
  • Upstart, a full replacement of init designed to start processes asynchronously. Initiated by Ubuntu and used by them until 2014. It was also used in Fedora 9,[20][21] Red Hat Enterprise Linux 6[22] and Google's ChromeOS.[23]

As of February 2019, systemd has been adopted by most major Linux distributions.[24]

from Grokipedia
In operating systems, init (short for "initialization") is the first userspace process launched by the kernel during boot, assigned process ID (PID) 1, and serving as the ancestor of all subsequent processes. It reads configuration from files such as /etc/inittab to spawn daemons, manage system runlevels, and handle events like power failures or process terminations. Originating in early Unix implementations at Bell Labs in the 1970s, init evolved into the standardized System V (SysV) init in UNIX during the 1980s, which introduced runlevels—a mechanism to transition the system through operational states from single-user mode (runlevel 1) to full multi-user graphical environments (typically runlevels 2–5). This SysV model, relying on sequential shell scripts in /etc/init.d/ for service management, became the foundation for Linux distributions but faced limitations in handling parallel startups, dependency resolution, and dynamic hardware like USB devices. By the mid-2000s, alternatives emerged to address these shortcomings: Upstart (introduced around 2006 by Canonical) adopted an event-driven approach for better parallelism, powering Ubuntu from version 6.10 and Fedora 9–14. In 2010, systemd was developed by Lennart Poettering and Kay Sievers at Red Hat, emphasizing socket activation, on-demand service loading, and integrated logging via journald; it gained widespread adoption starting with Red Hat Enterprise Linux 7 (2014) and Debian 8 (2015), and by 2025 serves as the default init system in most major distributions due to its efficiency in modern, containerized, and cloud environments. Other variants, such as OpenRC and runit, persist in lightweight or embedded systems for their simplicity and modularity. Key functions of init across implementations include re-executing itself for in-place upgrades, supervising child processes to prevent boot hangs, and facilitating shutdown or reboot by sending termination signals (SIGTERM followed by SIGKILL) to remaining processes during runlevel changes.
In contemporary systemd-based systems, these functions are extended with control groups (cgroups) for resource control, D-Bus integration for inter-process communication, and targets replacing traditional runlevels for more flexible state management.

Fundamentals

Role and Responsibilities

In operating systems, the init process is the first user-space program executed by the kernel after the boot sequence completes, assigned process ID 1 (PID 1) to mark its foundational status. This initiation occurs when the kernel invokes an executable such as /sbin/init via the execve system call, establishing the bridge between kernel-mode operations and user-space execution. As PID 1, init assumes the role of the ultimate ancestor for all subsequent processes, directly or indirectly forking them into existence and serving as their adoptive parent when their original parents terminate. This hierarchical structure ensures that every daemon, shell, and user session traces its lineage back to init, maintaining process tree integrity across the system. The primary responsibilities of init encompass critical system lifecycle management. It reaps orphaned child processes—those left in a zombie state after their parent exits without invoking wait—by calling system calls like waitpid to collect their exit statuses and free associated resources, preventing memory leaks and process table exhaustion. Additionally, init oversees the startup of essential system services, such as mounting filesystems and launching background daemons, while facilitating the transition from single-user kernel-controlled mode to a fully operational multi-user environment. During shutdown or reboot, init coordinates the graceful termination of services, the unmounting of filesystems, and signaling the kernel to halt or restart hardware, ensuring an orderly power-off. Failure of the init process carries severe implications, as it is irreplaceable in the process hierarchy. If init cannot be launched—due to a missing or corrupted binary specified by kernel parameters like init=—the boot process stalls, often resulting in a kernel panic with messages indicating that no working init was found, leading to an unrecoverable system hang.
Similarly, if the running init (PID 1) terminates unexpectedly, the kernel detects the attempt to kill it and triggers a panic, syncing filesystems if possible before halting, as no alternative process can adopt orphans or manage services; recovery requires manual hardware intervention, such as resetting the system. This design underscores init's indispensable nature: its absence equates to total system failure, with no built-in fallback mechanism.
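The adoption of orphans can be observed directly from a shell. The sketch below assumes a Linux /proc filesystem: it spawns a child whose parent exits immediately, then reads the child's new parent PID, which the kernel has reassigned to PID 1 (or the nearest "subreaper" process, if one is registered):

```sh
#!/bin/sh
# Inner shell backgrounds a sleeper, prints "parent_pid child_pid", then exits.
# The sleeper is thereby orphaned and adopted by PID 1 (or the nearest subreaper).
out=$(sh -c 'sleep 2 >/dev/null 2>&1 & echo "$$ $!"')
parent=${out% *}                            # PID of the now-dead inner shell
child=${out#* }                             # PID of the orphaned sleeper

sleep 1                                     # give the kernel time to reparent
newppid=$(awk '{print $4}' "/proc/$child/stat")   # field 4 of /proc/PID/stat is the PPID

echo "old parent: $parent, new parent: $newppid"
```

The printed new parent differs from the dead shell's PID: the orphan has been handed to the process hierarchy's root, which will reap it when the sleep exits.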

Boot Integration

The boot process of a system culminates in the kernel handing off control to the init process after completing its initialization tasks. Following hardware detection, memory setup, and mounting of the root filesystem—often facilitated by an initial RAM filesystem (initramfs)—the kernel executes the program specified by the init= kernel command-line parameter, typically /sbin/init. This execution creates the first user-space process with process ID 1 (PID 1), marking the transition from kernel mode to user mode and establishing the foundation for all subsequent user-space activities. Upon startup, init begins in a minimal environment, inheriting a sparse set of environment variables from the kernel, such as those derived from command-line parameters following the -- separator. Its initial actions include parsing configuration files to determine system setup, forking and executing essential background processes (daemons), mounting additional filesystems beyond the root filesystem, and configuring system-wide environment variables to prepare the runtime context. These steps ensure the operating system progresses from a bare kernel state to a functional user environment, with init overseeing the launch of core services such as networking, though detailed service management occurs later in the sequence. A key role of init in process lifecycle management is serving as the adoptive parent for orphaned processes—those whose original parent has terminated without reaping them. When a process becomes orphaned, the kernel reassigns its parent to PID 1, and init invokes the wait() system call to detect and reap any resulting zombie (defunct) children, thereby freeing their process table entries and preventing resource leaks. This mechanism maintains system hygiene by automatically cleaning up terminated processes that would otherwise accumulate.
Unlike typical user processes, init operates under special protections to safeguard system stability: it ignores the SIGKILL signal, rendering attempts to terminate it via kill -9 ineffective, as the kernel delivers to PID 1 only those signals for which it has installed explicit handlers. Additionally, init launches with root privileges but without a controlling terminal, running in a non-interactive context that reflects its role as the root ancestor of all processes rather than an interactive application.

BSD-Style Init

Origins and Evolution

The BSD-style init system traces its roots to the Research Unix operating system developed at Bell Labs during the 1970s, where the init process emerged as the first user-space program (PID 1), responsible for launching the essential processes that initialize the system environment. This foundational design appeared in Version 6 Unix, released in 1975, as a straightforward launcher that forked processes like getty for terminal management and executed basic startup routines without complex state tracking. The approach emphasized minimalism, allowing init to respawn failed processes and transition the system from kernel boot to a functional multi-user state through simple scripting. Key evolutionary milestones refined this model within the Berkeley Software Distribution (BSD) lineage. In 4.1BSD, released in June 1981, the /etc/rc script was formalized as a central, monolithic shell script executed by init to handle system configuration tasks, such as mounting file systems from /etc/fstab, configuring terminals via /etc/ttys, and activating swap space with swapon. This structure provided a reliable, sequential boot sequence that prioritized deterministic execution over parallelization, enabling site-specific customizations through an accompanying /etc/rc.local file. By 4.3BSD in 1986, enhancements to init supported emerging windowing systems, permitting the spawning of arbitrary programs beyond traditional getty processes and introducing a dedicated window field in configuration files to initialize graphical terminals and display managers. A significant advancement occurred in NetBSD 1.5, released in December 2000, which replaced the monolithic /etc/rc with a modular /etc/rc.d directory containing individual scripts for each service, each annotated with dependency keywords like PROVIDE, REQUIRE, and BEFORE. The rcorder utility then dynamically ordered and executed these scripts during startup, introducing dependency-aware sequencing while preserving the core sequential philosophy and avoiding the runlevel abstractions seen in System V init.
At its core, the BSD-style init embodies a design philosophy of sequential, non-state-based startup that favors reliability and simplicity, eschewing intricate state machines or parallel processing in favor of predictable, linear execution to minimize failure points and ensure robust system bootstrapping. This emphasis on straightforward scripting and fault-tolerant respawning has sustained its adoption as the default init system in contemporary BSD derivatives, including FreeBSD, NetBSD, and OpenBSD, where it continues to drive boot processes with minimal overhead. Its influence extends to select Linux distributions, notably Slackware, which employs a BSD-style initialization layout with /etc/rc.d scripts and a single-runlevel-oriented model for enhanced maintainability.

Boot Sequence and Configuration

In the BSD-style init system, the boot sequence begins after the kernel loads the root filesystem and executes init(8) as the first user process, with process ID 1. Init then invokes the /etc/rc script to perform system initialization serially, sourcing configuration from /etc/rc.conf to set parameters such as the local hostname, network interface details, and flags for enabling services like daemons and networking. The /etc/rc script uses the rcorder(8) utility to determine the execution order of the modular scripts in /etc/rc.d based on their declared dependencies, ensuring sequential startup of essential components such as network configuration and daemon processes. Key configuration files guide this process, with /etc/ttys defining terminal lines for login sessions by specifying devices and getty(8) types. During multi-user operation, init forks instances of getty based on active entries in /etc/ttys, enabling user logins on physical serial ports and virtual consoles (e.g., ttyv0 through ttyv9 for console switching via Ctrl+Alt+F1–F10). Meanwhile, /etc/rc.d houses independent scripts for services, each supporting standard actions like start, stop, and restart via a unified interface, along with dependency keywords such as REQUIRE: NETWORKING or PROVIDE: foo to enforce ordering—for instance, a script might require networking to be available before attempting to bind a network daemon. The shutdown process mirrors this structure for orderly termination. When shutdown(8) or reboot(8) is invoked, init executes /etc/rc.shutdown, which sources /etc/rc.subr for utility functions and runs the /etc/rc.d scripts in reverse dependency order using rcorder(8), allowing services to stop gracefully (e.g., closing network connections and saving state) before filesystems are unmounted and the system halts.
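The reverse-order shutdown can be sketched in POSIX shell. The service names are hypothetical, and on a real system rcorder(8) computes the forward order from the dependency tags; here a precomputed list is simply reversed:

```sh
#!/bin/sh
# Forward (boot) order, as rcorder(8) might produce it -- hypothetical services,
# listed so that each service appears after its dependencies
start_order="mountcritlocal network sshd"

# Reverse the list for shutdown so dependents stop before their dependencies
stop_order=""
for svc in $start_order; do
  stop_order="$svc $stop_order"    # prepend to reverse the order
done

echo "stop order: $stop_order"
```

This yields sshd first and mountcritlocal last: a network daemon is stopped before the network, and critical filesystems are released only at the end.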

System V Init

Core Components

The System V init system originated in AT&T's UNIX System III release in 1981, where the /etc/inittab file was introduced to configure the init process, and it was further standardized in System V Release 3 in 1987 with /sbin/init serving as the core executable responsible for system initialization and process management. As the first user-space process started by the kernel (process ID 1), /sbin/init reads the /etc/inittab configuration file upon startup to determine and spawn essential system processes. The /etc/inittab file is structured as a series of colon-separated entries in the format id:runlevels:action:process, where each line defines a process to manage. The id field provides a unique 1–4 character identifier for the entry; the runlevels field specifies the system states (numeric or letter codes) in which the process should run; the action field dictates how init handles the process (e.g., respawn to automatically restart it upon termination, or wait to execute it once and await completion); and the process field contains the command or script to execute. This tabular configuration allows init to systematically control daemon and service lifecycles without embedding the logic directly in the init binary. Upon reading /etc/inittab, init forks child processes for each applicable entry, such as multiple instances of the getty program for virtual terminals, ensuring they run in the appropriate contexts. If a respawn-actioned process terminates unexpectedly—due to a crash or signal—init detects the exit via the wait status and immediately forks a replacement, providing automatic recovery for critical daemons. This forking mechanism, combined with signal handling, enables init to maintain system stability by acting as the ultimate parent for orphaned processes.
System V init supports various action types to handle diverse events beyond standard spawning, including boot for processes executed early in initialization (ignoring runlevels), bootwait for boot-time commands where init waits for completion, off to disable an entry, once for one-time execution upon entering a runlevel, and powerfail for signaling power-related events without waiting. These actions facilitate event-driven responses, such as invoking scripts during power failures detected via hardware signals. In contrast to the BSD-style init's reliance on sequential rc scripts for process orchestration, System V init's inittab-driven approach emphasizes declarative configuration and resilient process management.
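The colon-separated format and the respawn action can be sketched in POSIX shell. The inittab entry is illustrative, and the three-iteration cap exists only for the demo (a real init respawns indefinitely and actually fork/execs the process):

```sh
#!/bin/sh
# Parse a hypothetical inittab entry of the form id:runlevels:action:process
entry='c1:2345:respawn:/sbin/getty 38400 tty1'

IFS=: read -r id runlevels action process <<EOF
$entry
EOF

echo "id=$id runlevels=$runlevels action=$action"
echo "process=$process"

# Simulate respawn semantics: restart the command each time it exits.
# Capped at 3 iterations for the demo; init itself loops forever.
if [ "$action" = respawn ]; then
  n=0
  while [ $n -lt 3 ]; do
    echo "(re)starting: $process"   # a real init would fork and exec it here
    n=$((n+1))
  done
fi
```

Because only the `read` invocation sets IFS to a colon, the fourth variable absorbs the whole command line, spaces included, exactly as init treats the process field.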

Runlevels

In System V init, runlevels define distinct operational modes that determine the set of services and processes running on the system, enabling controlled transitions between states such as single-user maintenance, multi-user operation, or shutdown. These modes, numbered from 0 to 6 with an additional special runlevel S, provide a standardized way to manage system behavior without requiring a full reboot, allowing administrators to tailor the environment to specific needs like diagnostics or resource optimization. The standard runlevels, as specified in the Linux Standard Base (LSB), are as follows:
Runlevel  Purpose
0         Halt the system
1         Single-user mode
2         Multi-user mode without network services exported
3         Full multi-user mode
4         Reserved for local use; defaults to full multi-user mode
5         Multi-user mode with a display manager or equivalent
6         Reboot the system
S         Single-user mode (equivalent to 1, but used during boot for initial setup)
Runlevel transitions are initiated using the telinit command, which signals the init process to switch states by terminating processes associated with the current runlevel and launching those defined for the target runlevel, typically via scripts in /etc/rcN.d/ directories (where N is the runlevel number). This process ensures orderly changes, preserving system stability during mode shifts. The /etc/inittab file integrates with runlevels by specifying the default runlevel and respawning critical processes as needed within each mode. The runlevel system offers advantages in flexibility and safety, permitting seamless switches—for instance, from full multi-user mode (runlevel 3) to single-user mode (runlevel 1) for recovery tasks—while minimizing downtime and manual intervention across diverse administrative scenarios.

Scripts and Defaults

In System V init, service startup and shutdown are managed through shell scripts stored in the /etc/init.d/ directory. These scripts are not executed directly; instead, the init process uses symbolic links in runlevel-specific subdirectories, such as /etc/rc3.d/ for runlevel 3, to invoke them in the appropriate order during transitions. The links follow a standardized naming convention: files prefixed with S## (where ## is a two-digit number) indicate startup actions and are called with the start argument, while those prefixed with K## denote shutdown ("kill") actions and receive the stop argument; the numeric suffix determines the sequence, with lower values executing first to respect dependencies. Default runlevels, which define the initial system state after boot, vary across operating systems implementing System V init. Many distributions default to runlevel 3 for multiuser mode with console access or runlevel 5 for graphical environments, while others prioritize non-graphical setups. The following table provides representative examples:
Operating System/Distribution    Default Runlevel  Description
Red Hat Linux (pre-systemd)      5                 Multiuser mode with graphical login enabled
Solaris                          3                 Multiuser mode with networking and NFS support
AIX                              2                 Multiuser mode without additional customization
Gentoo                           3                 Standard multiuser mode in OpenRC-based setups
Debian (pre-systemd releases)    2                 Multiuser mode without display manager
Customization of System V init involves modifying the /etc/inittab file to set the default runlevel via the id:N:initdefault: entry, which specifies the target state on boot. Administrators can also extend functionality by placing custom scripts in /etc/init.d/—ensuring they support standard arguments like start, stop, restart, and status—and creating symlinks in the desired /etc/rcX.d/ directories to integrate them into specific runlevels. A key limitation of this script-based system is its reliance on serial execution, where services are started or stopped sequentially according to symlink order, often resulting in extended boot times as each script must complete before the next begins, with no inherent support for parallelization.
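The S##/K## naming convention can be demonstrated with a small sketch. The directory and service names are hypothetical; shell glob expansion supplies the lexicographic ordering that init relies on:

```sh
#!/bin/sh
# Build a mock runlevel directory with SysV-style link names
rcdir=$(mktemp -d)
touch "$rcdir/K20sendmail" "$rcdir/S10network" "$rcdir/S20sshd" "$rcdir/S05logger"

# On entering the runlevel: K?? scripts run first with "stop", then S?? scripts
# with "start", each set in lexicographic (hence numeric) order via glob expansion
actions=""
for f in "$rcdir"/K*; do
  actions="$actions stop:${f##*/K??}"     # strip directory, K prefix, and sequence number
done
for f in "$rcdir"/S*; do
  actions="$actions start:${f##*/S??}"
done
actions=${actions# }                      # trim the leading space
echo "$actions"

rm -r "$rcdir"
```

The output orders sendmail's stop before the starts, and logger (S05) before network (S10) and sshd (S20): the two-digit number, not the service name, decides the sequence.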

Modern Init Systems

systemd is a system and service manager for Linux operating systems, serving as PID 1 and responsible for initializing the system and managing services. Developed by Lennart Poettering and Kay Sievers at Red Hat, it was first introduced in 2010 to address limitations in traditional init systems like SysVinit, such as serial service startup and the lack of dependency management. The project aimed to leverage modern Linux kernel features for improved efficiency and consistency across distributions. Fedora 15, released in May 2011, became the first major distribution to adopt systemd as the default init system. Adoption accelerated in the mid-2010s, with Debian switching to systemd in version 8 (Jessie) in April 2015. Ubuntu followed suit in 15.04 (Vivid Vervet), released in April 2015, replacing its previous Upstart init. By 2025, systemd has become the standard init system in nearly all major distributions, replacing SysVinit and Upstart in server and desktop environments. Key innovations in systemd include parallel service startup, which allows independent services to activate concurrently rather than sequentially, leading to significantly faster boot times compared to SysVinit's serial approach. It employs dependency-based unit activation, where services start only after their prerequisites are met, managed through a transaction mechanism that resolves conflicts before execution. Socket and D-Bus activation enable on-demand service launching: services remain inactive until a socket connection or D-Bus message triggers them, reducing resource usage and improving responsiveness. systemd organizes resources into units, configurable via declarative files typically located in /lib/systemd/system/ or /etc/systemd/system/. Service units (.service files) define how daemons such as web servers are executed, including options for restarts, environment variables, and resource limits. Targets, analogous to SysV runlevels, group units for synchronization; for example, multi-user.target enables a non-graphical multi-user environment, while graphical.target adds a display manager.
The systemctl command provides a unified interface for managing units, such as starting, stopping, enabling, or querying their status. Advantages of systemd include integrated logging via journald, a binary journal that captures structured logs with timestamps, priorities, and metadata, viewable through journalctl for efficient querying and rotation. Security features leverage kernel namespaces and related mechanisms for sandboxing: directives like PrivateTmp=yes isolate temporary files, ProtectSystem=strict mounts the filesystem read-only, and NoNewPrivileges=yes prevents privilege escalation, enhancing service isolation. These capabilities have facilitated better container integration, allowing systemd to manage Docker or Podman containers as native units, with socket activation for seamless networking. Despite its widespread use, systemd has faced criticism for its complexity and scope, as it encompasses not only init but also logging, networking, and device management, leading to a monolithic design that some argue complicates debugging and increases the attack surface. Detractors contend that this breadth violates the Unix principle of doing one thing well, making it harder to port to non-Linux systems and contributing to heated debates within the Linux community since its inception.
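The declarative unit style, including two of the sandboxing directives mentioned above, can be illustrated with a sketch of a service unit for a hypothetical daemon `exampled`:

```ini
# /etc/systemd/system/exampled.service -- hypothetical service unit
[Unit]
Description=Example daemon
# ordering: start after basic networking has been brought up
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled --foreground
# respawn on failure, analogous to inittab's respawn action
Restart=on-failure
# sandboxing: private /tmp via a mount namespace
PrivateTmp=yes
# block acquisition of new privileges (e.g. via setuid binaries)
NoNewPrivileges=yes

[Install]
# pulled in by the multi-user target (roughly SysV runlevel 3)
WantedBy=multi-user.target
```

Such a unit would be enabled and started with `systemctl enable --now exampled`, inspected with `systemctl status exampled`, and its log stream read with `journalctl -u exampled`.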

Other Alternatives

Upstart, developed by Canonical and introduced in 2006 for Ubuntu, represents an early event-based init system aimed at overcoming the sequential limitations of System V init. It manages services through job definitions in /etc/init/, where administrators specify "start on" and "stop on" conditions tied to system events, such as network availability or hardware changes, enabling more responsive and parallel-like service orchestration. This approach bridged traditional SysV scripting with modern requirements for faster boots and dependency awareness, but Upstart was deprecated in Ubuntu 15.04 (2015) in favor of systemd's broader feature set and ecosystem integration. Launchd, created by Apple and first deployed in Mac OS X 10.4 Tiger (2005), functions as the primary init and service management daemon on macOS, supplanting the legacy BSD-style init for more efficient system startup. Configurations are defined in XML-based property list (plist) files located in directories like /Library/LaunchDaemons/ for system-wide daemons and ~/Library/LaunchAgents/ for user-specific agents, allowing on-demand launching based on triggers such as login or resource availability. Launchd integrates with XPC services for secure inter-process communication and handles system events, like sleep/wake transitions, to coordinate service states across the system's lifecycle. OpenRC, initiated in 2007 by Roy Marples for Gentoo and later adopted by Alpine Linux, offers a lightweight, dependency-aware evolution of SysV init while preserving compatibility through /etc/init.d/ shell scripts. It resolves traditional serial boot delays by supporting parallel execution of independent services via explicit dependency declarations in script metadata, resulting in faster initialization without the overhead of more comprehensive suites. OpenRC's script-based design emphasizes portability and minimalism, making it suitable for both desktop and server environments in distributions prioritizing simplicity.
Among supervision-oriented alternatives, runit provides a minimalist init framework with built-in service supervision, enabling parallel service startups and automatic restarts for reliability, as implemented in Void Linux, where it serves as the default PID 1 for the boot, runtime, and shutdown phases. Similarly, s6 from skarnet.org delivers a compact suite for secure service oversight, featuring tools like s6-supervise for daemon monitoring and s6-rc for dependency-based management, which collectively minimize privileges and attack surfaces compared to monolithic systems. GNU Shepherd, written in Guile Scheme, acts as an extensible service manager for the GNU system, dynamically managing daemons through declarative Scheme configurations and offering a command-line tool (herd) for one-stop service control. Other notable systems include Epoch, a single-threaded, dependency-light option for Linux kernels 2.6 and later, relying on simple priority-based ordering in declarative configs to avoid complex dependency graphs while maintaining core boot functionality with a minimal footprint. Dinit, implemented in portable C++, functions as both a cross-platform init and daemon supervisor on POSIX systems, incorporating dependency resolution and event-driven starts to facilitate service orchestration without platform-specific ties. For embedded applications, BusyBox employs its own init as PID 1, which parses /etc/inittab to invoke startup scripts, optimizing for low-resource devices through BusyBox's multi-tool binary that consolidates utilities into a single executable. These alternatives collectively tackle SysV init's inherent seriality by introducing parallelism, event reactivity, or focused supervision, fostering quicker boots and resilient service handling tailored to specific ecosystems—from macOS's integrated desktop needs to lightweight embedded deployments—though their adoption remains concentrated in targeted distributions rather than widespread dominance.
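A runit service is simply a directory containing an executable `run` script, which the runsv supervisor executes and restarts whenever it exits. A minimal sketch for a hypothetical daemon:

```sh
#!/bin/sh
# /etc/sv/exampled/run -- runit service directory sketch (hypothetical daemon)
exec 2>&1                                   # merge stderr into the supervised log stream
exec /usr/local/bin/exampled --foreground   # exec so runsv supervises the daemon itself
```

Using exec (rather than forking) keeps the daemon as the direct child of runsv, so `sv up exampled` and `sv down exampled` can control it and a crash triggers an automatic restart.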

References

  1. https://wiki.gentoo.org/wiki/Handbook:Parts/Working/Initscripts
  2. https://wiki.gentoo.org/wiki/OpenRC
  3. https://wiki.alpinelinux.org/wiki/OpenRC
  4. https://wiki.alpinelinux.org/wiki/BusyBox