Booting process of Linux
The Linux booting process involves multiple stages and is in many ways similar to the BSD and other Unix-style boot processes, from which it is derived. Although the Linux booting process depends heavily on the computer architecture, those architectures share similar stages and software components,[1] including system startup, bootloader execution, loading and startup of a Linux kernel image, and execution of various startup scripts and daemons.[2] These are grouped into four stages: system startup, the bootloader stage, the kernel stage, and the init process.[3]
When a Linux system is powered up or reset, its processor executes specific firmware for system initialization, such as the power-on self-test, invoking the reset vector to start a program at a known address in flash/ROM (in embedded Linux devices), and then loads the bootloader into RAM for later execution.[2] In IBM PC–compatible personal computers (PCs), this firmware is either a BIOS or a UEFI monitor, and is stored in the mainboard.[2] In embedded Linux systems, this firmware is called the boot ROM.[4][5] After being loaded into RAM, the bootloader (also called the first-stage or primary bootloader) executes to load the second-stage bootloader[2] (also called the secondary bootloader).[6] The second-stage bootloader loads the kernel image into memory, decompresses and initializes it, and then passes control to it.[2] The second-stage bootloader also performs several operations on the system, such as checking the system hardware, mounting the root device, and loading the necessary kernel modules.[2] Finally, the first user-space process (the init process) starts, and other high-level system initializations are performed (which involve startup scripts).[2]
For each of these stages and components, there are different variations and approaches; for example, GRUB, systemd-boot, coreboot or Das U-Boot can be used as bootloaders (historical examples are LILO, SYSLINUX or Loadlin), while the startup scripts can be either traditional init-style, or the system configuration can be performed through modern alternatives such as systemd or Upstart.
System startup
System startup has different steps based on the hardware that Linux is being booted on.[7]
IBM PC compatible hardware is one architecture Linux is commonly used on; on these systems, the BIOS or UEFI firmware plays an important role.
In BIOS systems, the BIOS first performs the power-on self-test (POST), which checks the system hardware, then enumerates local devices, and finally initializes the system.[7] For system initialization, the BIOS starts by searching for a bootable device storing the OS. A bootable device can be a floppy disk, CD-ROM, USB flash drive, a partition on a hard disk (where a hard disk stores multiple operating systems, e.g. Windows and Fedora), a storage device on the local network, etc.[7] A hard disk that boots Linux stores the Master Boot Record (MBR), which contains the first-stage/primary bootloader to be loaded into RAM.[7]
In UEFI systems, the Linux kernel can be executed directly by UEFI firmware via the EFI boot stub,[8] but usually uses GRUB 2 or systemd-boot as a bootloader.[9][10]
If UEFI Secure Boot is supported, a "shim" or "Preloader" is often booted by the UEFI before the bootloader or EFI-stub-bearing kernel.[11] Even if UEFI Secure Boot is disabled this may be present and booted in case it is later enabled. It merely acts to add an extra signing key database providing keys for signature verification of subsequent boot stages without modifying the UEFI key database, and chains to the subsequent boot step the same as the UEFI would have.
The system startup stage on embedded Linux systems starts by executing the firmware in the on-chip boot ROM, which then loads the bootloader or operating system from a storage device such as eMMC, eUFS, or NAND flash.[5] The system startup sequence varies by processor,[5] but all include hardware initialization and system hardware testing steps.[7] For example, in a system with an i.MX7D processor and a bootable device storing the OS (including U-Boot), the on-chip boot ROM first sets up the DDR memory controller, which allows the boot ROM's program to obtain the SoC configuration data from the external bootloader on the bootable device.[5] The on-chip boot ROM then loads U-Boot into DRAM for the bootloader stage.[12]
Bootloader stage
In IBM PC compatibles, the first-stage bootloader, which is part of the MBR, is a 512-byte image containing vendor-specific program code and a partition table.[6] As mentioned in the introduction, the first-stage bootloader finds and loads the second-stage bootloader.[6] It does this by searching the partition table for an active partition.[6] After finding one, it scans the remaining partitions in the table to ensure that they are all inactive.[6] The active partition's boot record is then read into RAM and executed as the second-stage bootloader.[6] The job of the second-stage bootloader is to load the Linux kernel image, and optionally an initial RAM disk, into memory.[13] The kernel image is not an executable kernel, but rather the kernel compressed with zlib into either the zImage or bzImage format.[14]
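The MBR layout described above (446 bytes of boot code, a four-entry partition table, and the 0x55AA signature at the end of the sector) can be illustrated with a short parser. This is an illustrative sketch, not boot code: the helper names are invented, and the sample MBR is fabricated in memory.

```python
import struct

SECTOR_SIZE = 512
PART_TABLE_OFFSET = 446   # partition table follows the 446-byte boot code
PART_ENTRY_SIZE = 16
BOOT_SIGNATURE = 0xAA55   # bytes 0x55 0xAA at offsets 510-511 (little-endian)

def parse_mbr(sector: bytes):
    """Return the non-empty entries of a raw 512-byte MBR's partition table."""
    if len(sector) != SECTOR_SIZE:
        raise ValueError("MBR must be exactly 512 bytes")
    (sig,) = struct.unpack_from("<H", sector, 510)
    if sig != BOOT_SIGNATURE:
        raise ValueError("missing 0x55AA boot signature")
    parts = []
    for i in range(4):
        off = PART_TABLE_OFFSET + i * PART_ENTRY_SIZE
        boot_flag = sector[off]        # 0x80 marks the active partition
        ptype = sector[off + 4]        # partition type (0x83 = Linux)
        lba_start, num_sectors = struct.unpack_from("<II", sector, off + 8)
        if ptype != 0:                 # type 0 means the slot is unused
            parts.append({"active": boot_flag == 0x80, "type": ptype,
                          "lba_start": lba_start, "sectors": num_sectors})
    return parts

# Fabricate an MBR with one active Linux partition starting at LBA 2048
mbr = bytearray(SECTOR_SIZE)
struct.pack_into("<H", mbr, 510, BOOT_SIGNATURE)
mbr[446:462] = struct.pack("<B3xB3xII", 0x80, 0x83, 2048, 1024000)
parts = parse_mbr(bytes(mbr))
```

A real first-stage loader does the equivalent scan in a few dozen bytes of machine code, with no room for error reporting beyond halting.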
In x86 PCs, the first- and second-stage bootloaders are combined into the GRand Unified Bootloader (GRUB), and formerly the Linux Loader (LILO).[13] GRUB 2, which is now used, differs from GRUB 1 in being capable of automatically detecting various operating systems and configuring itself. Stage1 is loaded and executed by the BIOS from the Master Boot Record (MBR). The intermediate stage loader (stage1.5, usually core.img) is loaded and executed by the stage1 loader. The second-stage loader (stage2, the /boot/grub/ files) is loaded by stage1.5 and displays the GRUB startup menu, which allows the user to choose an operating system or examine and edit startup parameters. After a menu entry is chosen and optional parameters are given, GRUB loads the Linux kernel into memory and passes control to it. GRUB 2 is also capable of chain-loading another bootloader. In UEFI systems, stage1 and stage1.5 are usually the same UEFI application file (such as grubx64.efi for x64 UEFI systems).
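A grub.cfg menu entry of the kind the stage2 menu displays might look like the following sketch; the device, paths, and version numbers are hypothetical:

```
menuentry 'Linux, kernel 6.1.0 (example)' {
    set root=(hd0,msdos1)     # first partition of the first BIOS disk
    linux /vmlinuz-6.1.0 root=/dev/sda1 ro quiet
    initrd /initrd.img-6.1.0
}
```

On most distributions this file is generated by grub-mkconfig rather than written by hand.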
Besides GRUB, other popular bootloaders include:
- systemd-boot (formerly Gummiboot), a bootloader included with systemd that requires minimal configuration (for UEFI systems only).
- SYSLINUX/ISOLINUX is a bootloader that specializes in booting full Linux installations from FAT filesystems. It is often used for boot or rescue floppy discs, live USBs, and other lightweight boot systems. ISOLINUX is generally used by Linux live CDs and bootable install CDs.
- rEFInd, a boot manager for UEFI systems.
- coreboot is a free implementation of UEFI or BIOS firmware, usually deployed with the system board, with field upgrades provided by the vendor if needed. Parts of coreboot become the system's BIOS and stay resident in memory after boot.
- Das U-Boot is a bootloader for embedded systems. It is used on systems that do not have a BIOS/UEFI but rather employ custom methods to read the bootloader into memory and execute it.
Historical bootloaders, no longer in common use, include:
- LILO does not understand or parse filesystem layout. Instead, a configuration file (/etc/lilo.conf) is created in a live system which maps raw offset information (via the mapper tool) about the location of the kernel and RAM disks (initrd or initramfs). The configuration file, which includes data such as the boot partition and kernel pathname for each entry, as well as customized options if needed, is then written together with the bootloader code into the MBR boot sector. When this boot sector is read and given control by the BIOS, LILO loads the menu code and draws it, then uses stored values together with user input to calculate and load the Linux kernel or chain-load any other bootloader.
- GRUB 1 includes logic to read common file systems at run time in order to access its configuration file.[15] This gives GRUB 1 the ability to read its configuration file from the filesystem rather than have it embedded into the MBR, which allows it to change the configuration at run time and to specify disks and partitions in a human-readable format rather than relying on offsets. It also contains a command-line interface, which makes it easier to fix or modify GRUB if it is misconfigured or corrupt.[16]
- Loadlin is a bootloader that can replace a running DOS or Windows 9x kernel with the Linux kernel at run time. This can be useful in the case of hardware that needs to be switched on via software and for which such configuration programs are proprietary and only available for DOS. This booting method is less necessary nowadays, as Linux has drivers for a multitude of hardware devices, but it has seen some use in mobile devices. Another use case is when Linux is located on a storage device which is not available to the BIOS for booting: DOS or Windows can load the appropriate drivers to make up for the BIOS limitation and boot Linux from there.
Kernel
The kernel stage occurs after the bootloader stage. The Linux kernel handles all operating system processes, such as memory management, task scheduling, I/O, interprocess communication, and overall system control. It is loaded in two stages: in the first stage, the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions are set up, such as basic memory management and a minimal amount of hardware setup.[14] The kernel image decompresses itself, via a routine that is part of the image.[14] For some platforms (like 64-bit ARM), kernel decompression has to be performed by the bootloader instead, such as U-Boot.[17]
For details of these steps, consider the i386 microprocessor. When its bzImage is invoked, the function start() (in ./arch/i386/boot/head.S) is called to do some basic hardware setup and then calls startup_32() (located in ./arch/i386/boot/compressed/head.S).[14] startup_32() does basic setup of the environment (stack, etc.), clears the Block Started by Symbol (BSS) section, and then invokes decompress_kernel() (located in ./arch/i386/boot/compressed/misc.c) to decompress the kernel.[14] Kernel startup is then executed via a different startup_32() function located in ./arch/i386/kernel/head.S.[14] The startup function startup_32() for the kernel (also called the swapper or process 0) establishes memory management (paging tables and memory paging), detects the type of CPU and any additional functionality such as floating-point capabilities, and then switches to non-architecture-specific Linux kernel functionality via a call to start_kernel() located in ./init/main.c.[14]
start_kernel() executes a wide range of initialization functions. It sets up interrupt handling (IRQs), further configures memory, and mounts the initial RAM disk ("initrd") that was loaded previously as the temporary root file system during the bootloader stage.[14] The initrd, which acts as a temporary root filesystem in RAM, allows the kernel to be fully booted and driver modules to be loaded directly from memory, without reliance upon other devices (e.g. a hard disk).[14] initrd contains the modules needed to interface with peripherals,[14] e.g. a SATA driver, and supports a large number of possible hardware configurations.[14] This split, with some drivers statically compiled into the kernel and other drivers loaded from initrd, allows for a smaller kernel.[14] initramfs, also known as early user space, has been available since version 2.5.46 of the Linux kernel,[18] with the intent to replace as many functions as possible that the kernel would previously have performed during the startup process. Typical uses of early user space are to detect which device drivers are needed to load the main user-space file system and to load them from a temporary filesystem. Many distributions use dracut to generate and maintain the initramfs image.
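The initramfs image described above is, at its core, a cpio archive in the "newc" format (usually compressed) that the kernel unpacks into a RAM-based filesystem. A minimal sketch of building such an archive, with invented helper names and without the usual compression step:

```python
NEWC_MAGIC = b"070701"   # cpio "newc" (SVR4 without CRC) header magic

def _align4(n: int) -> int:
    return (n + 3) & ~3

def newc_entry(name: bytes, data: bytes, mode: int) -> bytes:
    """Build one cpio newc record: 110-byte ASCII-hex header, name, data."""
    fields = [
        0,              # ino
        mode,           # mode (file type and permissions)
        0, 0,           # uid, gid
        1,              # nlink
        0,              # mtime
        len(data),      # filesize
        0, 0, 0, 0,     # devmajor, devminor, rdevmajor, rdevminor
        len(name) + 1,  # namesize (includes the trailing NUL)
        0,              # check (always 0 for newc)
    ]
    header = NEWC_MAGIC + b"".join(b"%08X" % f for f in fields)
    rec = header + name + b"\0"
    rec += b"\0" * (_align4(len(rec)) - len(rec))          # pad header+name
    rec += data + b"\0" * (_align4(len(data)) - len(data)) # pad file data
    return rec

def build_initramfs(files):
    """files: list of (name, data, mode); terminated by a TRAILER!!! record."""
    out = b"".join(newc_entry(n, d, m) for n, d, m in files)
    return out + newc_entry(b"TRAILER!!!", b"", 0)

# A one-file archive whose /init would be run as the first user-space process
img = build_initramfs([(b"init", b"#!/bin/sh\n", 0o755)])
```

Tools like dracut produce the same structure (typically gzip- or zstd-compressed) from the installed system's modules and helper binaries.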
The root file system is later switched via a call to pivot_root() which unmounts the temporary root file system and replaces it with the use of the real one, once the latter is accessible.[14] The memory used by the temporary root file system is then reclaimed.[clarification needed]
Finally, kernel_thread (in arch/i386/kernel/process.c) is called to start the Init process (the first user-space process), and then starts the idle task via cpu_idle().[14]
Thus, the kernel stage initializes devices, mounts the root filesystem specified by the bootloader as read only, and runs Init (/sbin/init) which is designated as the first process run by the system (PID = 1).[19] A message is printed by the kernel upon mounting the file system, and by Init upon starting the Init process.[19]
According to Red Hat, the detailed kernel process at this stage is therefore summarized as follows:[15]
- "When the kernel is loaded, it immediately initializes and configures the computer's memory and configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed initrd image in a predetermined location in memory, decompresses it, mounts it, and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID before unmounting the initrd disk image and freeing up all the memory the disk image once occupied. The kernel then creates a root device,[clarification needed] mounts the root partition read-only, and frees any unused memory. At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with it." An initramfs-style boot is similar, but not identical to the described initrd boot.
At this point, with interrupts enabled, the scheduler can take control of the overall management of the system, to provide pre-emptive multi-tasking, and the init process is left to continue booting the user environment in user space.
Init process
Once the kernel has started, it starts the init process,[20] a daemon which then bootstraps the user space, for example by checking and mounting file systems, and starting up other processes. The init system is the first daemon to start (during booting) and the last daemon to terminate (during shutdown).
Historically this was the "SysV init", which was just called "init". More recent Linux distributions are likely to use one of the more modern alternatives such as systemd. Below is a summary of the main init processes:
- SysV init (a.k.a. simply "init") is similar to the Unix and BSD init processes, from which it derived. In a standard Linux system, init is executed with a parameter, known as a runlevel, which takes a value from 0 to 6 and determines which subsystems are made operational. Each runlevel has its own scripts which codify the various processes involved in setting up or leaving the given runlevel, and it is these scripts which are referenced as necessary in the boot process. Init scripts are typically held in directories with names such as /etc/rc.... The top-level configuration file for init is /etc/inittab.[21] During system boot, init checks whether a default runlevel is specified in /etc/inittab, and requests the runlevel to enter via the system console if not. It then proceeds to run all the relevant boot scripts for the given runlevel, including loading modules, checking the integrity of the root file system (which was mounted read-only) and then remounting it for full read-write access, and setting up the network.[19] After it has spawned all of the processes specified, init goes dormant and waits for one of three events to happen: processes it started ending or dying, a power failure signal,[clarification needed] or a request via /sbin/telinit to further change the runlevel.[22]
- systemd is a modern alternative to SysV init. Like init, systemd is a daemon that manages other daemons. All daemons, including systemd, are background processes. Lennart Poettering and Kay Sievers, the software engineers who initially developed systemd,[23] sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies, to allow more processing to be done in parallel during system booting, and to reduce the computational overhead of the shell. Systemd's initialization instructions for each daemon are recorded in a declarative configuration file rather than a shell script. For inter-process communication, systemd makes Unix domain sockets and D-Bus available to the running daemons. Systemd is also capable of aggressive parallelization.
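As a concrete illustration of systemd's declarative configuration, a minimal service unit might look like the following sketch; the unit name, binary path, and description are invented for illustration:

```
# /etc/systemd/system/example.service  (hypothetical unit)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon --no-fork
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Dependencies (After=, WantedBy=) replace the ordering that SysV encoded in script numbering, which is what lets systemd start independent units in parallel.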
References
- ^ M. Tim Jones 2006, "Introduction", "The process of booting a Linux® system consists of a number of stages. But whether you're booting a standard x86 desktop or a deeply embedded PowerPC® target, much of the flow is surprisingly similar."
- ^ a b c d e f g M. Tim Jones 2006, "Overview", "Figure 1. The 20,000-foot view of the Linux boot process"
- ^ M. Tim Jones 2006, "Linux booting process are grouped into 4 stages, based on IBM source"
- ^ Bin, Niu; Dejian, Li; Zhangjian, LU; Lixin, Yang; Zhihua, Bai; Longlong, He; Sheng, Liu (August 2020). Research and design of Bootrom supporting secure boot mode. 2020 International Symposium on Computer Engineering and Intelligent Communications (ISCEIC). pp. 5–8. doi:10.1109/ISCEIC51027.2020.00009. ISBN 978-1-7281-8171-4. S2CID 231714880.
- ^ a b c d Alberto Liberal De Los Ríos 2017, p. 28, "Linux Boot Process".
- ^ a b c d e f M. Tim Jones 2006, "Stage 1 boot loader".
- ^ a b c d e M. Tim Jones 2006, "System startup".
- ^ "EFI stub kernel - Gentoo Wiki". wiki.gentoo.org. Retrieved 2020-11-02.
- ^ Kinney, Michael (1 September 2000). "Solving BIOS Boot Issues with EFI" (PDF). pp. 47–50. Archived from the original (PDF) on 23 January 2007. Retrieved 14 September 2010.
- ^ "MS denies secure boot will exclude Linux". The Register. 23 September 2011. Retrieved 24 September 2011.
- ^ "Using a Signed Bootloader - Arch Wiki". wiki.archlinux.org. Retrieved 2024-12-05.
- ^ Alberto Liberal De Los Ríos 2017, p. 29, "Linux Boot Process".
- ^ a b M. Tim Jones 2006, "Stage 2 boot loader".
- ^ a b c d e f g h i j k l m n M. Tim Jones 2006, "Kernel".
- ^ a b "Product Documentation". Redhat.com. 2013-09-30. Archived from the original on 2008-08-30. Retrieved 2014-01-22.
- ^ "Product Documentation". Redhat.com. 2013-09-30. Retrieved 2014-01-22.
- ^ Alberto Liberal De Los Ríos 2017, p. 20, "Bootloader".
- ^ Corbet, Jonathan (6 November 2002). "Initramfs arrives". Retrieved 14 November 2011.
- ^ a b c Kim Oldfield (2001). "Linux Boot Process". http://oldfield.wattle.id.au/luv/boot.html
- ^ M. Tim Jones 2006, "Init".
- ^ "From Power Up To Bash Prompt: Init". users.cecs.anu.edu.au.
- ^ "init". man.he.net.
- ^ "systemd README". freedesktop.org. Retrieved 2012-09-09.
Works cited
- M. Tim Jones (31 May 2006). "Inside the Linux boot process". IBM. Archived from the original on 2007-10-11. Retrieved 2024-01-14.
- Alberto Liberal De Los Ríos (2017). Linux Driver Development for Embedded Processors (2nd ed.). Editorial Círculo Rojo; 1st edition (published March 3, 2017). ISBN 978-8491600190.
External links
- Reading the Linux Kernel Sources, Wikiversity
- Greg O'Keefe - From Power Up To Bash Prompt at the Wayback Machine (archived October 23, 2009)
- Bootchart: Boot Process Performance Visualization
- The bootstrap process on EFI systems, LWN.net, February 11, 2015, by Matt Fleming
Firmware and Hardware Initialization
Power-On Self-Test (POST)
The Power-On Self-Test (POST) is a diagnostic routine executed by the motherboard's firmware immediately upon powering on a computer system to verify the functionality of essential hardware components. This process initializes and tests critical elements such as the central processing unit (CPU), random access memory (RAM), storage devices, and basic peripherals to ensure they are operational before proceeding to load the operating system. If any component fails these checks, the boot process halts to prevent potential damage or unstable operation.[3][4][5]

Originating in the early 1980s with the introduction of the IBM Personal Computer (PC) in 1981, POST was integrated into the Basic Input/Output System (BIOS) as a fundamental hardware validation step for compatible systems. This innovation addressed the need for reliable self-diagnosis in the emerging personal computing era, where hardware reliability was paramount for widespread adoption. Over time, POST evolved to incorporate checks for basic input/output interfaces, expanding beyond initial CPU and memory tests to include rudimentary peripheral validation, reflecting advancements in PC architecture.[6][7]

POST typically communicates results through auditory beep codes generated via the motherboard's speaker or visual indicators like LED patterns on the system board, allowing users to identify issues without advanced tools. For instance, American Megatrends Inc. (AMI) BIOS employs distinct beep patterns for error reporting; three short beeps signal a base 64K memory failure, indicating a problem in the initial RAM segment that requires reseating or replacement of memory modules.[8][9] Specific POST failures can prevent the boot process entirely, as seen in video-related errors where the system detects no display output.
In certain BIOS implementations, a pattern of one long beep followed by three short beeps denotes a no-video error, typically due to a faulty graphics card, loose connection, or incompatible display adapter, halting further initialization until resolved. Such errors underscore POST's role in isolating hardware faults early, ensuring system integrity before firmware execution proceeds upon successful completion.[8][10]

Firmware Execution and Boot Handoff
Upon completion of the Power-On Self-Test (POST), the firmware takes over to initialize essential hardware components and facilitate the transition to the bootloader. In the legacy Basic Input/Output System (BIOS) architecture, the firmware begins by initializing the chipset, including configuring the memory controller and enabling basic I/O capabilities. It then sets up interrupt vectors in the interrupt descriptor table to handle hardware events. The BIOS scans for bootable devices using the INT 13h BIOS interrupt service routine, which provides low-level disk access functions to read sectors from storage media. Upon identifying a bootable device, it loads the Master Boot Record (MBR) from the first sector (typically 512 bytes) of the device into memory at the physical address 0x7C00 and transfers control to it, marking the handoff point.

The Unified Extensible Firmware Interface (UEFI), a modern successor to BIOS, employs a more modular approach to firmware execution. UEFI firmware initializes hardware through device drivers loaded during the boot phase and supports the GUID Partition Table (GPT) for partitioning, enabling handling of disks larger than 2 terabytes unlike the MBR's limitations. It executes EFI applications, such as bootloaders, directly from the EFI System Partition (ESP), a FAT-formatted partition designated for boot files. UEFI incorporates secure boot features to verify the integrity and authenticity of loaded executables using digital signatures, preventing unauthorized code execution. Additionally, UEFI provides runtime services, such as time and NVRAM access, which persist after handoff to the operating system for ongoing system management.

Boot device detection in both BIOS and UEFI follows a predefined priority order, often favoring removable media like USB drives before internal hard disk drives (HDDs) or solid-state drives (SSDs) to support installation or rescue scenarios.
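The GPT header mentioned above sits at LBA 1 of the disk and begins with the "EFI PART" signature. The following sketch decodes a few of its fixed-offset fields (offsets follow the UEFI specification; the helper name and the fabricated header are for illustration only):

```python
import struct

GPT_SIGNATURE = b"EFI PART"

def parse_gpt_header(lba1: bytes):
    """Pull a few fields out of a raw GPT header (first 92 bytes of LBA 1)."""
    sig, revision, header_size = struct.unpack_from("<8sII", lba1, 0)
    if sig != GPT_SIGNATURE:
        raise ValueError("not a GPT header")
    # Number of partition entries (offset 80) and size of each entry (offset 84)
    num_entries, entry_size = struct.unpack_from("<II", lba1, 80)
    return {"revision": revision, "header_size": header_size,
            "num_entries": num_entries, "entry_size": entry_size}

# Fabricate a header with the common 128 entries of 128 bytes each
hdr = bytearray(92)
hdr[0:8] = GPT_SIGNATURE
struct.pack_into("<II", hdr, 8, 0x00010000, 92)  # revision 1.0, 92-byte header
struct.pack_into("<II", hdr, 80, 128, 128)
info = parse_gpt_header(bytes(hdr))
```

A full implementation would also verify the header and partition-array CRC32 fields, omitted here for brevity.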
Firmware implementations typically include fallback mechanisms, such as invoking a boot menu through a keypress (e.g., F12 on many systems) to allow manual selection of the boot device if automatic detection fails. This process ensures compatibility across diverse hardware configurations while minimizing boot delays.

Key events during firmware execution include the establishment of a memory map, which details the physical memory layout including reserved regions and available RAM for the operating system, and the preparation of Advanced Configuration and Power Interface (ACPI) tables that describe hardware configuration for power management and resource allocation. Once these structures are set, the firmware performs the final handoff by jumping to the bootloader's entry point—0x7C00 for BIOS MBR or a specified EFI application address in UEFI—without returning control, thereby completing the transition from firmware to boot software.

Bootloader Phase
First-Stage Bootloader Loading
In the BIOS-based booting process for Linux systems using the Master Boot Record (MBR) partitioning scheme, the firmware loads the first-stage bootloader from the first sector of the boot device into memory at physical address 0x7C00 and transfers execution to it. This initial code segment is limited to 446 bytes to fit within the MBR's structure, which totals 512 bytes including the partition table and a boot signature. The BIOS verifies the MBR's validity by checking the two-byte signature at offsets 510-511 (0x55 followed by 0xAA, or 0xAA55 in little-endian representation) before loading; an invalid signature prevents execution.[11]

The first-stage bootloader's primary role is to chain-load a more capable second-stage component from fixed locations on the disk, without support for filesystems or complex disk operations. For example, in GRUB Legacy (version 0.97), the stage 1 bootloader, embedded in the MBR, uses a block list notation to read raw sectors containing stage 1.5, typically placed in the unused space immediately following the MBR and before the first partition (often 1-8 sectors in the "MBR gap"). This stage 1.5 provides basic filesystem drivers to enable loading the full stage 2. Limitations include reliance on BIOS interrupts for disk I/O in CHS or LBA modes and inability to parse filesystems, restricting it to predefined sector addresses. Error handling is minimal: it performs basic read retries and validates the loaded stage's version signature (e.g., checking for a mismatch triggers Error 6); upon failure, such as a "Read Error" or invalid block list, it displays a simple message and halts, requiring manual reboot.[12]

Historically, the LInux LOader (LILO) served a similar function as a first-stage bootloader, occupying a single 512-byte sector in the MBR or a partition's boot sector.
Upon execution, LILO's first stage displays an "L" prompt, performs initial disk geometry detection, and loads the multi-sector second stage from a predefined location, transferring control after displaying an "I". Like GRUB stage 1, it lacks filesystem awareness and uses geometric or linear addressing, with errors indicated by hex codes (e.g., 40 for seek failures or read errors, prompting retries before halting). LILO's design emphasized simplicity for early Linux distributions but was largely superseded by GRUB due to its rigidity.[13]

In UEFI-based systems, the equivalent first-stage functionality is often provided by the EFI boot stub, an extension integrated into the Linux kernel image that transforms it into a Portable Executable (PE/COFF) format recognizable by UEFI firmware. Following the firmware's handoff, the EFI stub is loaded directly from the EFI System Partition (ESP) as an executable (e.g., renamed with a .efi extension), where it processes boot parameters like command-line options and initrd paths before invoking the kernel proper—bypassing traditional multi-stage bootloaders like GRUB or ELILO. This approach inherits similar limitations, such as no native Linux filesystem support during loading (relying on UEFI's FAT handling for the ESP), and basic error propagation if parameters are invalid, though it simplifies the chain by eliminating intermediate stages.[14]

Second-Stage Bootloader and Kernel Selection
The second-stage bootloader in the Linux booting process represents a more feature-rich component that follows the initial loading of a minimal first-stage loader, enabling interactive user selection and configuration parsing for kernel handover. In systems using GRUB2, the predominant second-stage bootloader, the core image (core.img) generated by tools like grub-mkimage loads the full GRUB modules and configuration from the /boot/grub directory, providing access to advanced filesystem support such as LVM and RAID.[15] This stage parses the primary configuration file, /boot/grub/grub.cfg, which is typically auto-generated by grub-mkconfig and defines menu entries with kernel paths, parameters, and initial ramdisk specifications.[15]

Upon execution, GRUB2 displays a graphical or textual menu interface allowing users to select from multiple boot entries via arrow keys, with a configurable timeout—often set to 5 seconds via the GRUB_TIMEOUT variable in /etc/default/grub—after which it defaults to the first entry if no input is provided.[15] Each menu entry, defined using the menuentry directive, specifies the kernel image (e.g., linux /vmlinuz-6.1.0 root=/dev/sda1) and optional initrd (e.g., initrd /initrd.img-6.1.0) to load into memory, along with command-line parameters like quiet splash for reduced verbosity during boot.[15] This process supports multiboot scenarios, enabling selection among Linux kernels, other operating systems, or chainloading via the chainloader command, making GRUB2 versatile for dual-boot environments.[15]

In preparing the kernel for execution, the second-stage bootloader relocates the compressed kernel image (vmlinuz) and initramfs into appropriate memory regions, sets up the boot parameters according to the Linux boot protocol to pass hardware details and command-line parameters to the kernel, and invokes the boot via the boot command.[15] Historically, earlier bootloaders like LILO lacked such interactive menus and required manual reconfiguration and reinstallation to the MBR after any changes, limiting them to predefined boot options without on-the-fly selection.[16] In modern UEFI-based systems, alternatives like systemd-boot offer a simpler second-stage approach, utilizing straightforward text-based configuration files in /loader/entries/ on the EFI System Partition to define kernel paths and options, with automatic entry assembly and a minimal menu interface for selection, eschewing GRUB2's scripting complexity in favor of direct EFI executable loading.[17]

Kernel Initialization
Kernel Image Decompression
Upon receiving control from the bootloader, the Linux kernel begins execution at its architecture-specific entry point, marking the start of the decompression phase. For x86 architectures, this entry is at the startup_32 routine, located at segment offset 0x20 from the real-mode kernel header, where the processor is in real mode with interrupts disabled and segments configured appropriately.[18] The kernel image, typically in bzImage format for modern x86 systems, is a compressed payload using formats such as gzip, identified by magic bytes like 1F 8B, which the bootloader has loaded into memory starting at 0x100000 when the LOAD_HIGH bit is set in the boot protocol (version 2.00 or higher).[19] Decompression occurs using a built-in algorithm similar to gzip, inflating the kernel into its uncompressed ELF format in place, ensuring the process completes before any further initialization to avoid memory relocation issues.[19]
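The decompressor can only run once the payload's format is identified, which is done by scanning for well-known magic bytes (the kernel's extract-vmlinux script uses the same idea). A sketch of that check; the magic values are standard format signatures, and the gzip round trip merely stands in for the real in-place decompression:

```python
import gzip

# Well-known magic-byte prefixes for kernel compression formats
MAGICS = {
    b"\x1f\x8b": "gzip",
    b"\xfd7zXZ\x00": "xz",
    b"BZh": "bzip2",
    b"\x28\xb5\x2f\xfd": "zstd",
}

def detect_compression(payload: bytes) -> str:
    for magic, name in MAGICS.items():
        if payload.startswith(magic):
            return name
    return "unknown"

# Round-trip a stand-in "kernel" through gzip, as the boot path conceptually does
kernel = b"\x7fELF" + b"\x00" * 60      # placeholder for the uncompressed image
compressed = gzip.compress(kernel)
assert detect_compression(compressed) == "gzip"
assert gzip.decompress(compressed) == kernel
```

The real decompressor is a freestanding C routine linked into the bzImage, since no library or OS services exist at this point in boot.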
Following decompression, the kernel performs early setup critical for the transition to protected mode. It initializes temporary page tables to establish an identity mapping for the initial kernel memory region, enabling paging where required for 64-bit entry points (protocol 2.12+), while 32-bit entry starts with paging disabled.[20] The processor then switches to protected mode using flat 4 GB segments, with the code segment __BOOT_CS at selector 0x10 and the data segment __BOOT_DS at 0x18, loaded from a minimal Global Descriptor Table (GDT).[21] CPU feature detection follows, probing for extensions such as SSE through CPUID instructions to configure vector units and other capabilities early in the boot sequence.[20] The kernel also sets up a stack and heap in the real-mode memory area (typically 0x8000 to 0x9ffff, avoiding the Extended BIOS Data Area) and clears the BSS section to zero; additional boot-time information can be supplied by the bootloader through the setup_data linked list (protocol 2.09+).[22] For multi-core x86 systems, the kernel parses multiprocessor (MP) configuration tables, identifying the CPU count and APIC configuration to prepare for symmetric multiprocessing initialization.[23]
Architecture-specific variations adapt decompression and early setup to hardware constraints. On ARM platforms, the bootloader loads the compressed zImage at a machine-dependent TEXT_OFFSET within the first 128 MiB of RAM (recommended above 32 MiB to minimize relocation during decompression) and jumps to its first instruction; the kernel then decompresses itself, placing the uncompressed image in physical memory with the MMU disabled, the caches off, and the CPU in SVC mode (or HYP mode for virtualization).[24][25] A key ARM-specific element is the Device Tree Blob (DTB), a binary description of the hardware that the bootloader loads at a 64-bit-aligned address above the 128 MiB boundary from the start of RAM, passing its address in register r2; the blob, which begins with the magic value 0xd00dfeed, provides essential details such as the memory layout and device nodes before kernel execution begins.[26] In contrast, x86 relies on legacy BIOS or ACPI tables for hardware description, with no DTB equivalent, instead using MP tables for multi-core enumeration as noted earlier.
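The DTB header stores that magic value big-endian in its first four bytes, followed by the total blob size. A minimal validity check, as a sketch only, might look like:

```python
import struct

FDT_MAGIC = 0xD00DFEED  # flattened device tree magic, stored big-endian

def looks_like_dtb(blob: bytes) -> bool:
    """Check the first header words of a candidate Device Tree Blob."""
    if len(blob) < 8:
        return False
    magic, totalsize = struct.unpack_from(">II", blob, 0)
    # The declared size must fit within the buffer we were handed.
    return magic == FDT_MAGIC and totalsize <= len(blob)
```

A real consumer would go on to parse the structure and strings blocks; this sketch only validates the header the bootloader is required to provide.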
The kernel then parses the command-line arguments passed by the bootloader, which influence early boot decisions. On x86, the command-line pointer is stored in the boot_params structure at offset cmd_line_ptr (protocol 2.02+), pointing to a null-terminated string of up to cmdline_size bytes (default 255, protocol 2.06+), typically located between the end of the setup heap and 0xA0000; arguments specify the root device (e.g., root=/dev/sda1), the console (e.g., console=ttyS0 for serial output), and options for built-in drivers and modules.[27] On ARM, these are conveyed via a tagged list referenced from the registers (r1 for the machine type, r2 for the DTB or tag list) or embedded in the DTB, with similar options for the root device, console, and initramfs, ensuring the kernel can configure devices and transition smoothly without filesystem access.[28] This parsing occurs immediately after decompression, populating global variables for subsequent initialization steps.
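The command line is a flat string of space-separated tokens, each either a bare flag or a key=value pair. A simplified sketch of splitting it (not the kernel's actual parser, which also dispatches to per-parameter handlers):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into flags (True) and key=value pairs."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare tokens become flags
    return params

# Example: the kind of line a bootloader might pass.
opts = parse_cmdline("root=/dev/sda1 console=ttyS0 quiet splash")
```

Here `opts["root"]` yields "/dev/sda1", while bare flags such as `quiet` map to True.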
Initramfs Mounting and Root Transition
The initramfs, or initial RAM filesystem, serves as a temporary root filesystem loaded into memory by the bootloader alongside the kernel image, providing the tools and drivers needed to reach the actual root filesystem on disk. It is typically packaged as a compressed cpio archive in the "newc" format, containing a minimal set of binaries, scripts, and kernel modules for early boot tasks such as loading device drivers for storage hardware. This RAM-based environment lets the kernel assemble and open complex storage configurations such as LVM, RAID, or encrypted volumes (e.g., LUKS) before transitioning to the permanent root.[29][30] During kernel initialization, the initramfs is extracted and mounted as the initial root filesystem in RAM, and the kernel executes the /init script within it as process ID 1. The /init script, often based on BusyBox for compactness, parses the kernel command-line parameters to identify the real root device (e.g., by device path such as /dev/sda1, or by UUID or label) and loads required kernel modules with tools like modprobe for hardware support, such as SCSI drivers for disk access or network modules for remote root filesystems. It then performs device detection, assembles complex storage configurations (e.g., activating LVM logical volumes or RAID arrays via mdadm), and mounts the real root filesystem under a temporary directory such as /sysroot, ensuring prerequisites like filesystem checks or decryption are completed.[31][30][32]
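The "newc" archive layout itself is simple: each member is a 110-byte ASCII-hex header, the file name, then the data, with name and data each padded to 4-byte alignment, and the archive ends with a TRAILER!!! record. A hedged sketch of emitting such an archive (simplified; real generators also fill in inode numbers, ownership, and timestamps):

```python
def _newc_member(name: bytes, data: bytes, mode: int = 0o100644) -> bytes:
    """Emit one newc-format cpio member: 110-byte ASCII header, name, data."""
    fields = [
        0,              # c_ino (left zero in this sketch)
        mode,           # c_mode
        0, 0,           # c_uid, c_gid
        1,              # c_nlink
        0,              # c_mtime
        len(data),      # c_filesize
        0, 0, 0, 0,     # c_devmajor/minor, c_rdevmajor/minor
        len(name) + 1,  # c_namesize (includes trailing NUL)
        0,              # c_check (unused for "070701" newc)
    ]
    header = b"070701" + b"".join(b"%08X" % f for f in fields)
    out = header + name + b"\x00"
    out += b"\x00" * (-len(out) % 4)          # pad header+name to 4 bytes
    out += data + b"\x00" * (-len(data) % 4)  # pad data to 4 bytes
    return out

def build_initramfs(files: dict) -> bytes:
    """files: {b"path": b"contents"} -> uncompressed newc cpio archive."""
    archive = b"".join(_newc_member(n, d) for n, d in files.items())
    archive += _newc_member(b"TRAILER!!!", b"", mode=0)
    return archive
```

In practice the archive would then be compressed (e.g., with gzip) before being handed to the bootloader, which is what tools like dracut and mkinitcpio automate.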
The transition from the initramfs to the real root filesystem is traditionally described in terms of the pivot_root system call, which redefines the root directory by making the new root (e.g., /sysroot) the system's root while relocating the old root to a mount point such as /root/oldroot for cleanup; it effectively combines the effects of chroot (changing the root directory) with remounting, so the old root can be unmounted and its resources freed after the switch. Because the initramfs occupies the kernel's rootfs, which cannot itself be unmounted, implementations such as Dracut instead use the switch_root command from the init script: it moves the essential mounts onto the new root, deletes the initramfs contents to free memory, chroots into the new root, and executes the real system's /sbin/init, ensuring a seamless handoff without disrupting ongoing processes.[33][31][32]
Tools such as Dracut and mkinitcpio are commonly used to generate the initramfs image during system installation or kernel updates, incorporating modular "hooks" to include specific components like encryption support for LUKS via cryptsetup or network boot capabilities with dhclient. Dracut, favored in distributions like Fedora and RHEL, employs an event-driven framework to dynamically assemble the archive from host system files, ensuring reproducibility and support for advanced features like Btrfs snapshots. Similarly, mkinitcpio, the default in Arch Linux, uses a preset-based configuration to build the cpio archive with hooks for filesystem assembly, allowing customization for scenarios such as encrypted or networked roots while maintaining a minimal footprint.[34][32][35]
Init System and User Space Startup
Launching the Init Process
Following the successful mounting of the real root filesystem, the Linux kernel executes the program located at /sbin/init (or an equivalent such as /lib/systemd/systemd on modern systems) as the first user-space process, assigning it process ID (PID) 1.[36] This transition from kernel space to user space marks the beginning of user-space initialization, with init becoming the ancestor of all subsequent processes.[37]
As PID 1, init holds critical system responsibilities, including reparenting orphaned processes (those whose original parent has terminated) to itself, ensuring continuity of execution.[37] It also reaps zombie processes (terminated children awaiting status collection) by invoking wait() or equivalent mechanisms, freeing their process-table entries and preventing the table from filling with unreaped entries, which would block the creation of new processes.[37] Additionally, init handles system signals such as SIGTERM, which triggers an orderly shutdown sequence by propagating termination to child processes.[38]
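The reaping duty can be demonstrated in miniature with an ordinary parent process; this is a sketch of the mechanism only, not an init implementation (a real init loops over waitpid(-1, ...) for all of its children):

```python
import os

# Sketch: a parent reaping its terminated child via wait(), the same
# mechanism PID 1 uses so zombies do not accumulate in the process table.
pid = os.fork()
if pid == 0:
    os._exit(7)                      # child exits; a zombie until reaped
reaped, status = os.waitpid(pid, 0)  # parent collects the status (reaps)
exit_code = os.waitstatus_to_exitcode(status)
```

Once waitpid returns, the child's process-table entry is released; until that call, the terminated child lingers as a zombie.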
The traditional SysV init, originating from System V Unix and widely used in early Linux distributions, operates as a single-threaded process that sequentially processes its configuration file /etc/inittab to spawn essential system processes.[38] Upon startup, it parses /etc/inittab to determine the initial runlevel and executes defined actions, including launching getty instances on virtual consoles to enable user logins via terminals.[38] This sequential approach ensures predictable but relatively slow bootstrapping, as each process starts only after the previous one completes.[38]
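An illustrative /etc/inittab fragment shows the classic id:runlevels:action:process format; the paths and entries here are typical examples, not a canonical file:

```
# Default runlevel
id:3:initdefault:
# System initialization script, run once at boot
si::sysinit:/etc/rc.d/rc.sysinit
# Run the scripts for runlevel 3
l3:3:wait:/etc/rc.d/rc 3
# Respawn a login getty on the first virtual console
1:2345:respawn:/sbin/mingetty tty1
```

The respawn action is how SysV init keeps login prompts alive: if the getty exits, init restarts it.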
In contrast, systemd has emerged as the modern default init system since the early 2010s, first released in 2010 and adopted as standard in major distributions like Fedora (from version 15 in 2011) and later Ubuntu and Debian.[39] Running as /lib/systemd/systemd when selected as PID 1, it replaces /etc/inittab with declarative unit files and leverages dependency graphs to parallelize service startup, significantly reducing boot times compared to SysV's linear model.[40] Systemd also integrates logging through its journald component, capturing structured logs from boot processes and services for centralized management.[40]
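For comparison, a hypothetical systemd unit file (the service name and path are illustrative) expresses the same kind of service declaratively; the After= and WantedBy= directives supply the dependency information systemd uses to order and parallelize startup:

```ini
[Unit]
Description=Example daemon (illustrative)
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling such a unit links it into multi-user.target's dependency graph, so it starts in parallel with other services whose dependencies are satisfied.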
