Overview of Steps
- Hardware power-on.
- System firmware.
- Bootloader (3 stages).
- Kernel (initial/temporary root filesystem, and then real root filesystem).
- OS Init.
- OS GUI (get to GUI login screen).
- User GUI login (get to GUI desktop).
Detailed Steps
- Hardware power-on. Power supply sends signal to rest of hardware when power is stable.
- Firmware / controllers startup:
- Power-control module starts, holding other devices in reset state.
- If there is a "management engine" (CSME) on the board, it starts up, with its own code, and may start running a TPM which creates an audit trail (recording such things as hash value of firmware contents).
- There may be a Hardware Security Module (HSM) or Trusted Platform Module (TPM) which stores crypto keys and can execute crypto algorithms, or even custom algorithms. It has to start up.
- It's likely that CPU microcode will be loaded.
Matthew Garrett's "Booting modern Intel CPUs"
Hugo Landau's "Modern CPUs have a backstage cast"
- There may be micro-controllers / boards which start up too, using their own internal processors and ROMs/NVRAMs: keyboard controller, disk controllers, GPU, network interfaces, etc.
- CPU starts executing code (firmware) in ROM, and consulting stored values in CMOS or NVRAM.
- CPU code initializes hardware, does self-test (POST), displays manufacturer logo, etc.
- Some functions (e.g. RAID, video, hardware-encrypted disks ?) may have interactive setup functions in ROMs that have to be executed by the main CPU. So the main firmware may jump into these "Add-On ROMs" to do that processing, then control returns to the main firmware.
Igor Bogdanov's "Security features of the Intel/Windows platform secure boot process"
Bobby Borisov's AMD article
- Code checks CMOS/NVRAM settings, and looks for key-presses from user, to see what to do. Could go into setup menu, go into menu of boot devices, or step down list of boot device types and look for bootable devices.
- A boot device is found or specified.
- MBR (Master Boot Record) is read from sector 0 of boot device into RAM.
Wikipedia's "Master boot record" - The "post-MBR gap" is the disk space after MBR sector and before first partition. This is at least sectors 1-63 (31.5 KB), but more likely to be 2047 sectors (almost 1 MB) in modern disks.
- Selecting a kernel:
If firmware is Legacy BIOS (Basic Input-Output System) and disk has MBR partitioning:
- Stage-1 bootloader code (in GRUB, AKA boot.img) is in the first 446 bytes of the MBR (actually, some bytes of that space have other uses (timestamp, signature), so the typical limit is 440 bytes). [That small amount of code can't do much on its own; it mainly locates and loads the next stage, using disk-read functions in the BIOS.] The partition table (4 entries; primary partitions) starts at byte 446 and is 64 bytes. Then the 2-byte magic number (0xAA55).
- Control jumps to start of MBR (start of stage-1 bootloader) in RAM.
- If stage-1 bootloader is for Windows, it would load a Volume Boot Record (VBR) which contains an Initial Program Loader (IPL), which would then load NT Loader (NTLDR). But we're assuming Linux and GRUB, so:
- Stage-1.5 bootloader code (AKA diskboot.img plus core.img) is in "post-MBR gap" after MBR sector and before first partition.
- Stage-1 bootloader copies stage-1.5 bootloader from post-MBR gap into RAM.
- Control jumps to start of stage-1.5 bootloader in RAM.
- Stage-1.5 bootloader finds partition in partition table that is marked as "active".
- Typical partitioning for legacy BIOS: /boot is a small ext* partition or just a directory in /, / is Linux-type (usually ext*, ZFS, or Btrfs), /home may be a Linux-type partition or just a directory in /.
- Stage-1.5 (core.img) is built by grub-install to include the filesystem modules needed to read /boot (the ext2 module covers ext2/3/4), so the active partition does not have to be FAT*.
- Stage-1.5 bootloader copies stage-2 bootloader files from /boot into RAM.
- Control jumps to start of stage-2 bootloader in RAM.
- Assume stage-2 bootloader is main body of GRUB (really, GRUB2).
- GRUB's core.img / stage-2 can include lvm, mdraid, cryptodisk, and luks modules (grub-install adds the ones needed to reach /boot), so GRUB itself can open /boot on LVM, RAID, or LUKS-encrypted volumes; LUKS2 support is more limited than LUKS1.
- GRUB finds partition in partition table that is marked as "boot".
- GRUB reads /boot/grub/grub.cfg configuration file.
- GRUB may present a menu of kernel images and utility options, or just select a default kernel image.
Else if firmware is UEFI (Unified Extensible Firmware Interface) and disk has GPT partitioning:
- The first 446 bytes of the MBR (the bootstrap-code area) are ignored by UEFI; some bytes of that space have other uses, so they might not be empty. The next 64 bytes (the partition table) are set to "protective" values: a single entry of type 0xEE covering the whole disk, so the disk looks full and of a strange type if someone runs an MBR-only utility against this GPT disk. Then the 2-byte magic number (0xAA55).
- GPT (GUID Partition Table) is in "post-MBR gap" after MBR sector and before first partition. Sector 1 of disk is the GPT header, and has a pointer to the partition table (Partition Entry Array), which typically starts at sector 2.
- Boot parameters are in NVRAM. "efibootmgr -v" or (Windows) "bcdedit /enum FIRMWARE".
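A sketch of managing those NVRAM boot entries with efibootmgr (the entry numbers are illustrative, not from any particular machine):
efibootmgr -v                 # list Boot#### entries, BootOrder, BootCurrent
sudo efibootmgr -o 0001,0003  # change the boot order
sudo efibootmgr -n 0003       # boot entry 0003 on the next boot only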
- Typical partitioning for UEFI: /boot/efi is FAT*, / is Linux-type (usually ext*, ZFS, or Btrfs), /boot and /home may be Linux-type partitions or just directories in /.
- UEFI firmware understands at least FAT12, FAT16, and FAT32 filesystems, optionally may understand more.
- One of the partitions in the GPT has a partition-type GUID (C12A7328-F81F-11D2-BA4B-00A0C93EC93B; also listed in systemd's "The Discoverable Partitions Specification (DPS)") that identifies it as the EFI System Partition (ESP). The filesystem in that partition is specified as FAT-like. It usually ends up mounted on /boot/efi after Linux has booted.
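To find the ESP on a running system, you can match that partition-type GUID; a sketch (column names per util-linux lsblk):
lsblk -o NAME,PARTTYPE,FSTYPE,SIZE,MOUNTPOINT | grep -i c12a7328   # the ESP's PARTTYPE GUID
findmnt /boot/efi                                                  # where it's mounted, if mounted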
- UEFI firmware can launch an application (bootloader, boot manager, utility, shell, kernel) from the filesystem in the ESP. Applications are in PE/COFF format. One standard EFI application is GRUB ("grub*.efi"). Another EFI application could be a direct-launch kernel (using EFISTUB or systemd-stub; an EFI boot stub). Another EFI application could be a boot manager (systemd-boot, or rEFInd).
See "Bootloader / boot menu / boot manager" section of my "Linux Troubleshooting" page.
- UEFI firmware behaves according to settings shown by "efibootmgr -v". It may present a menu of EFI applications and utility options, or just select the only application available (if there is only one), or just select a default application, or fall back to \EFI\BOOT\BOOTX64.EFI.
- Assume "grubx64.efi" was chosen.
- If Secure Boot is enabled, the firmware verifies the authenticity of the EFI binaries by signature.
This uses certificates and hashes: a deny-list database (DBX), an allow-list database (DB), Key Exchange Key(s) (KEK), and a Platform Key (PK), plus the Machine Owner Key list (MOK) and MOK deny list (MOKX) managed through the Secure Boot "shim" bootloader.
Signatures generally chain back to Microsoft.
For Linux, there is a shim binary ("shim-signed"; signed by Microsoft; contains a cert from Canonical; see "apt list | grep -E '^shim[^m]'" and "efibootmgr -v") and a GRUB binary (e.g. "grub-efi-amd64-signed" or "grub-efi-arm64-signed"; signed by Canonical) to get through this process.
Secure Boot has various modes including full, fast, minimal, custom. Various paths are taken if checks fail.
If available, TPM operates as a passive observer (creating audit trail) of all phases.
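To check the current Secure Boot state from Linux, a sketch (either command may be absent on a given distro):
mokutil --sb-state                                   # prints "SecureBoot enabled" or "SecureBoot disabled"
bootctl status 2>/dev/null | grep -i 'secure boot'   # systemd's view of the same thing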
See my "Secure Boot" section. - GRUB reads configuration from ESP (EFI System Partition).
- That config file points to /boot/grub/grub.cfg
- Same answer as in the legacy-BIOS branch: GRUB's core image can include lvm, mdraid, cryptodisk, and luks modules as needed to reach /boot.
- GRUB reads /boot/grub/grub.cfg configuration file.
- GRUB may present a menu of kernel images and utility options, or just select a default kernel image.
- If Secure Boot is enabled, verify authenticity of the kernel image by signature etc. Initrd image is not validated.
It's possible to have Legacy BIOS firmware boot a disk that has GPT partitioning (known as "BIOS/GPT boot"), but I'm skipping that.
It's possible to have UEFI firmware boot a disk that has MBR partitioning and an ESP (identified as partition ID 0xEF), but I'm skipping that ("UEFI/MBR boot").
- [To see what kernel command line was used to boot your current system, do "sudo dmesg | grep 'Kernel command line'" or "cat /proc/cmdline".]
- GRUB copies compressed kernel image (executable zImage or bzImage file; e.g. /boot/vmlinuz-NNN) into RAM.
- GRUB copies the initrd (AKA "Initial RAM Disk") into RAM. The image can contain anything, but typically is a CPU-microcode blob followed by the initramfs ("initial RAM filesystem"), a cpio archive holding kernel modules (taken from /lib/modules/$(uname -r)) such as LVM and LUKS modules, encryption modules, filesystem modules, USB modules, video modules, plus key programs and copies of config files (/etc/fstab, /etc/crypttab, /etc/mdadm.conf, etc).
- Control jumps to the start of the kernel image in RAM (a small decompression/setup stub followed by the compressed kernel).
- Possible that CPU microcode could be updated at this point, using microcode compiled into the kernel image.
- Kernel sets up memory-management, floating-point, interrupts, C stack and BSS, and other low-level things.
- Transition from kernel assembly code to (mostly) C code.
- Initialize console, detect memory size, initialize keyboard, initialize video.
- Transition into protected-mode memory management.
- Transition into 64-bit mode.
- Decompress the rest of the kernel (for ARM64 and some others, it was done by GRUB).
- Kernel creates an empty initial root filesystem (rootfs) in RAM. Then files are copied into rootfs, first from an initramfs embedded in the kernel binary during the kernel build, then from the initrd (Initial RAM Disk) in RAM. Both of those often are in cpio format, but many formats and variations have been used. Maybe it's more accurate to call them "archives". On Ubuntu 20, the initrd is almost empty, so the initramfs must have most of the files.
"man dracut"
"man update-initramfs"
"lsinitramfs /boot/initrd.img-$(uname -r) | less"
article about modifying initrd
- Kernel creates scheduler process (pid 0).
- Kernel forks pid 0 to create pid 1 (init process; first user-space process), which executes /init or /sbin/init.
- Kernel forks pid 0 to create kernel thread daemon process (kthreadd, pid 2).
- Init loads modules/drivers as needed from root filesystem (rootfs). If Secure Boot is enabled, verify authenticity of the modules by signature etc.
- On Ubuntu 20, there is a CPU microcode file prepended to the initrd; the kernel applies that "early microcode" as soon as it processes the initrd, before init runs. Secure Boot does not validate the initrd (see above), but the CPU verifies the microcode blob's own signature. See if your system has any "ucode" packages installed, as in "pamac search ucode".
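A sketch of checking module signing and microcode application on the running system (the awk trick just grabs the first loaded module as an example):
modinfo -F signer $(lsmod | awk 'NR==2{print $1}')   # prints the signer if that module is signed
sudo dmesg | grep -i microcode                       # shows whether/when microcode was applied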
- Initialize virtual device systems such as software RAID, LVM, ZFS, NFS.
- Init mounts the real root filesystem and switches to it (switch_root), replacing the temporary rootfs; the initramfs init then exec's /sbin/init from the real root filesystem.
- Init may mount more filesystems ?
systemd's "The Discoverable Partitions Specification (DPS)"
Comment I wrote for someone on reddit:
"Partitions and their flags" are not the same as "where a partition will get mounted into the filesystem hierarchy after boot".
Do one of these to see the partitions and their types/flags:
sudo fdisk -l /dev/sda
sudo fdisk -l /dev/nvme0n1
(I may have some details wrong because my system differs from yours.) So one partition may be type "boot", and contain a FAT filesystem, and legacy BIOS firmware can read it to boot something in it. Another may be type "ESP" or "EFI System", and contain a FAT filesystem, and UEFI firmware can read it to boot something in it. After booting into Linux, 0 or 1 or 2 of those partitions may be mounted into the filesystem hierarchy. They're not really needed after booting finishes. The BIOS firmware doesn't know anything about filesystem paths such as /boot and /boot/EFI, it just knows about partitions and their flags (and FAT filesystems).
- Init:
To see what kind of init you have:
"strings /sbin/init | grep -E 'init|systemd'"
"sudo init --version"
If /sbin/init is standard/old init application, init process mounts non-root filesystems as specified in /etc/fstab, then reads /etc/inittab file to find out what to do, or processes all of the /etc/init.d/* files.
If /sbin/init is a symlink to /lib/systemd/systemd, the systemd process mounts non-root filesystems as specified in /etc/fstab, then uses files under /etc/systemd (including /etc/systemd/system.conf) to decide what to do. The kernel command line (from GRUB) can specify the starting target (e.g. systemd.unit=rescue.target); otherwise the default target under /etc/systemd/system/ is used: default, rescue, emergency, cloud-final, graphical, more. ("systemctl get-default" to see the default.)
- Init code loads kernel modules needed to handle detected devices (udev), sets up the windowing system, starts getty's, sets up networking infrastructure, maybe connects to networks, starts cron, starts the print server, starts the audio service, etc.
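To inspect what init/systemd decided to start on the running system, a sketch:
systemctl get-default                                     # e.g. graphical.target
systemctl list-dependencies graphical.target | head -30   # what that target pulls in
systemctl list-units --type=service --state=running       # services actually running now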
- Init code could load microcode into CPU and other processors, using files in /usr/lib/firmware. "journalctl -k --grep='microcode'" to see log entries about updates.
- Eventually init code runs login daemons, different ones for each login path:
For login from console or TTY, there is getty.
For login over network, there is sshd.
And for login from keyboard/monitor, there is the display manager, which presents a login screen to the user.
- See Login Process.
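A sketch of checking those login daemons on a systemd system (unit names vary by distro):
systemctl status getty@tty1.service                       # console login
systemctl status sshd.service ssh.service 2>/dev/null     # network login; unit name differs by distro
systemctl status display-manager.service                  # GUI login (alias for gdm/sddm/lightdm/etc)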
Ramesh Natarajan's "6 Stages of Linux Boot Process"
Narad Shrestha's "A Basic Guide to Linux Boot Process"
Sarath Pillai's "Linux Booting Process"
David Both's "An introduction to the Linux boot and startup processes"
IBM's "Inside the Linux boot process"
Wikipedia's "Linux startup process"
Arch Wiki's "Boot loader"
NeoSmart Knowledgebase's "The BIOS/MBR Boot Process"
Adam Williamson's "UEFI boot: how does that actually work, then?"
Aleksandr Goncharov's "A Journey Through the Secrets of Firmware: From BIOS/UEFI to OS"
Mumbling about computers' "Booting x86-64: from firmware to PID1"
Debian Reference's "Chapter 3. The system initialization"
Debian Wiki's "CategoryBootProcess"
linux-insides / Kernel Boot Process
"man bootup"
Wikipedia's "GUID Partition Table"
Wikipedia's "EFI system partition"
Arch Wiki's "EFI system partition"
Pid Eins's "The Wondrous World of Discoverable GPT Disk Images"
OSDev Wiki's "UEFI"
Ubuntu Wiki's "UEFI / SecureBoot"
NSA's "Boot Security Modes and Recommendations"
"sudo mokutil --sb-state" [NOT SURE WHAT THIS DOES]
noahbliss / mortar
Drive-Trust-Alliance / sedutil
Chris Hoffman's "How to Check If Your Computer Has a TPM"
Dell's "How to troubleshoot and resolve common issues with TPM and BitLocker"
Shawn Brink's "How to Check if Windows PC has a TPM"
Michael Altfield's "Trusted Boot"
Linux: "sudo apt install tpm-tools" and "man -k tpm" and "sudo dmesg | grep -i tpm" and "lsmod | grep tpm"
Igor's Blog's "In-depth dive into the security features of the Intel/Windows platform secure boot process"
When booting, hold down Shift key to get into GRUB menu.
GNU GRUB
GNU GRUB Manual (or do CLI "info grub")
From someone on reddit:
"GRUB disaster recovery: only need to know two commands to use in GRUB shell: ls to find things, and configfile to get GRUB to load the right grub.cfg that the distro created that knows all the root filesystem uuids and other magic needed for booting. Those two commands, tab-completion, and understanding the (hdX,msdosY) device/partition syntax are enough."
Rob Day's "initrd and initramfs"
Wikipedia's "Initial ramdisk"
Boot Software
There are two functions:
- Boot menu: shows a list of kernels or applications.
- Bootloader: copies one of them into memory and transfers control to it.
Legacy BIOS firmware just has a boot device menu, and then jumps to a boot menu / boot manager / bootloader on that device. The user chooses a kernel or application in that menu, and then the bootloader part loads it and transfers control to it. Such boot menus / boot managers / bootloaders include GRUB, LILO.
UEFI firmware contains a boot menu, and can launch an application (boot manager, utility, shell, kernel) from the filesystem in the ESP. One standard EFI application is MokManager ("mm*.efi"), which manages keys. Another EFI application is GRUB ("grub*.efi"). Another EFI application could be a direct-launch kernel (using EFISTUB or systemd-stub; an EFI boot stub). Another EFI application could be a boot manager ( systemd-boot, rEFInd, ELILO). Another application could be a Compatibility Support Module (CSM) application to do a legacy BIOS boot using the stage-1 bootloader in the MBR.
From Adam Williamson's "UEFI boot: how does that actually work, then?":
All a BIOS firmware knows, in the context of booting the system, is what disks the system contains. You, the owner of this BIOS-based computer, can tell the BIOS firmware which disk you want it to boot the system from. The firmware has no knowledge of anything beyond that. It executes the bootloader it finds in the MBR of the specified disk, and that's it. ...
In the BIOS world, absolutely all forms of multi-booting are handled above the firmware layer. The firmware layer doesn't really know what a bootloader is, or what an operating system is. Hell, it doesn't know what a partition is. All it can do is run the boot loader from a disk's MBR. ...
...
UEFI provides much more infrastructure at the firmware level for handling system boot. It's nowhere near as simple as BIOS. Unlike BIOS, UEFI certainly does understand, to varying degrees, the concepts of 'disk partitions' and 'bootloaders' and 'operating systems'. ... The UEFI spec defines an executable format and requires all UEFI firmwares be capable of executing code in this format. ... The GUID Partition Table format is very much tied in with the UEFI specification ... The UEFI spec requires that compliant firmwares be capable of reading variants of the FAT format ... you can think of the UEFI boot manager as being a boot menu ... add an entry to the UEFI boot manager configuration with a name and the location of the bootloader (in EFI executable format) that is intended for loading that operating system. ... Linux distributions use the efibootmgr tool to deal with the UEFI boot manager.
Apparently there are two kinds of kernel image ?
- Unified Kernel Image (UKI): parameters are compiled-in. EFI-only. ArchWiki
- "Classic" kernel image: some parameters are given on command-line, and the command-line can be edited in the boot menu.
Common bootloaders: GRUB, systemd-boot, EFISTUB, rEFInd, Slim Bootloader.
Arch Wiki's "Boot loader - Feature comparison"
See current state:
To see bootable partitions other than the current one: "sudo os-prober"
To see kernels bootable from a partition: "sudo linux-boot-prober /dev/sdaN"
In Windows, to see/edit bootloader list, run "bcdedit" as administrator. article1, article2
In Linux, UEFI (if you can boot):
efibootmgr -v
sudo bootctl status | less
sudo fwupdtool security --force # security state of system
sudo fwupdtool esp-list
sudo find /boot -name 'grub*' -print # look for GRUB
sudo bootctl is-installed # test for systemd-boot installed
Fedora:
Use grubby, it's the default ?
GoLinuxCloud's "How to update GRUB2 using grub2-editenv and grubby in RHEL 8 Linux"
Fedora User Docs' "Working with the GRUB 2 Boot Loader"
cat /etc/default/grub
grubby --info=ALL
grubby --default-kernel
Fedora UEFI:
cat /boot/efi/EFI/fedora/grub.cfg
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
Xiao Guoan's "Use Linux efibootmgr Command to Manage UEFI Boot Menu"
Kernelstub to edit kernel command line options.
Bootsplash: screen where LUKS password to decrypt disk (or just boot partition ?) is entered:
"man plymouth"
Also Splashy, RHGB, XSplash, more.
Egidio Docile's "How to disable Plymouth on Linux"
Proposals / new, I think:
systemd's "The Boot Loader Specification"
Pid Eins's "Brave New Trusted Boot World"
GRUB
Use GRUB:
Look for GRUB: "sudo find /boot -name 'grub*' -print"
GNU GRUB Manual 2.04
Dedoimedo's "GRUB 2 bootloader - Full tutorial"
A downstream source repo
GRUB designates storage drives as hd0, hd1, hd2, etc.
Partitions are (MBR partitioning) msdos1, msdos2, msdos3, etc,
or (GPT partitioning) gpt1, gpt2, gpt3, etc.
Linux designates SATA drives (HDD or SATA SSD) as /dev/sda, /dev/sdb, /dev/sdc, etc,
and NVMe SSDs as /dev/nvme0n1, /dev/nvme1n1, /dev/nvme2n1, etc.
Partitions are (SATA) /dev/sda1, /dev/sda2, /dev/sda3, etc,
or (NVMe) /dev/nvme0n1p1, /dev/nvme0n1p2, /dev/nvme0n1p3, etc.
NVMe controllers are /dev/nvme0, /dev/nvme1, /dev/nvme2, etc; the "n1" is the namespace on that controller.
grub-btrfs includes ability to boot from Btrfs snapshots.
Using GRUB menu:
If your GRUB-menu is hidden, hold Shift when booting, or run "sudo grub2-editenv - unset menu_auto_hide", or edit /etc/default/grub to comment out "GRUB_TIMEOUT_STYLE=hidden" line.
At the GRUB menu, press Esc or "c" for the GRUB command line.
You get a "grub> " prompt.
"help" to get help.
"help testspeed" to get help about that command.
Has ZFS commands.
But output of "help" zooms past and 80% of it can't be seen.
(To add paging: "set pager=1")
Type "exit" to get out and go to BIOS.
Type "normal" to get out and boot current kernel.
(I tried editing /etc/default/grub to add pager=1 and running update-grub, but that doesn't work.)
pgrz's "Launching system from GRUB2 console"
GNU GRUB Manual's "command-line and menu entry commands"
GRUB themes: Run "grub-customizer", click "Appearance settings" tab, click a choice in "Theme" pull-down, click "Save". Available themes are stored in /boot/grub/themes. Get more from: package manager search "grub2-theme*"; article1; GitHub search; KDE Store; GNOME-Look.
Repair GRUB:
Bootable repair images for GRUB:
Rescatux & Super Grub2 Disk
Boot-Repair-Disk
Ubuntu's "Boot-Repair"
You may have to disable Secure Boot to run some of these.
GRUB Rescue:
After BIOS password, press Esc to get into GRUB Rescue.
Chris Titus's "Grub Rescue | Repairing your Bootloader"
Boot-Repair:
Kiran Kumar's "'Boot Repair' for Ubuntu, Linux Mint, and elementary OS can fix Bootloader issues"
Chris Hoffman's "How to Repair GRUB2 When Ubuntu Won't Boot"
Boot Repair
Vivek's "How to Edit & Repair GRUB Boot Menu Using Commands"
Matt Callahan's "error: no such partition. Entering rescue mode ..."
System 76's "Repair the Bootloader" (Pop!_OS)
Various:
Jahid Onik's "How To Repair the GRUB Bootloader"
Jahid Onik's "How to Boot into Rescue Mode or Emergency Mode in Ubuntu"
Also see: Help section in GParted app.
Change GRUB Configuration:
First, edit "/etc/default/grub".
Then:
sudo update-grub2 # for Debian or Ubuntu*
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg # for Fedora
sudo grub-mkconfig -o /boot/grub/grub.cfg # for Arch family ?
sudo update-bootloader # for openSUSE ?
sudo grubby # for Red Hat ?
Or: (legacy BIOS only ?) run GUI app "grub-customizer", click on Advanced Settings, and do GRUB_CMDLINE_LINUX_DEFAULT changes.
Korbin Brown's "How to set kernel boot parameters on Linux"
Red Hat's "Configuring kernel command-line parameters"
Add memtest to GRUB:
[Legacy BIOS:]
Should see memtest in GRUB menu.
[UEFI:]
Bootable USB image: Memtest86, or
Add to GRUB menu: Matias's "Installing memtest86 on UEFI Grub2 Ubuntu"
(My /boot/efi is nvme0n1p1, so I used "set root='hd0,gpt1'".) Then see the "Change GRUB Configuration" section.
Memtest86+ is a different (and open-source) thing: Memtest86+
Karim Buzdar's "How to Run Memtest in Ubuntu 20.04"
From someone on reddit:
The Linux kernel has a built-in memtest which you can activate with the memtest=17 kernel
parameter (requires CONFIG_MEMTEST=y in the kernel config; check .config or /proc/config.gz). It tests the RAM
with a random pattern, a single-bit pattern, and finally zero. Any bad RAM it finds
is automatically reserved and won't be used any more.
Force fsck in GRUB:
Permanently:
Add kernel parameters "fsck.mode=force fsck.repair=yes".
See "Change GRUB Configuration" section
One-time check:
Press Esc key while booting to get into grub menu.
In grub menu, highlight first item "Linux Mint 19 Cinnamon" and press "e" key to edit the script for that menu item. Find line something like "linux ... ro quiet ..." and add a "fsck.mode=force" after the "ro quiet". Hit F10 to boot. After OS starts, a csd_housekeeping process will check disk, taking a couple of minutes. Results where ? Change to the boot script is not persistent; it will be gone next time you restart.
systemd-boot
Use systemd-boot:
"[Compared to systemd-boot,] GRUB has more features and it also works with legacy boot."
# MAY have to mount ESP partition to /efi or /boot/efi first:
sudo mkdir /efi
blkid | grep fat
sudo mount /dev/nvme0n1p1 /efi
sudo bootctl is-installed # test for systemd-boot installed
sudo bootctl status
sudo bootctl list
sudo find /boot -name '*.conf' -print
sudo less /boot/efi/loader/entries/NAME-current.conf
If your boot-menu is hidden, hold Shift when booting.
Kowalski7cc's "Systemd-boot install on Fedora"
dalto's "[EndeavourOS] Convert to systemd-boot"
Peter Confidential's "Arch Linux - How to migrate from grub to systemd-boot"
ArchWiki's "systemd-boot"
Repair systemd-boot:
Force fsck in systemd-boot:
[Really about making any change to the kernel command line.]
Edit command-line in config file:
# MAY have to mount ESP partition to /efi or /boot/efi first:
sudo mkdir /efi
blkid | grep fat
sudo mount /dev/nvme0n1p1 /efi
sudo find /boot -name '*.conf' -print
sudo less /boot/efi/loader/entries/NAME-current.conf
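A sketch of the edit itself (the entry filename and the existing options are illustrative):
# in /boot/efi/loader/entries/NAME-current.conf, append to the existing "options" line:
options  root=UUID=... rw quiet fsck.mode=force fsck.repair=yes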
From someone on reddit:
If you just want to customise basic things like adding and removing entries, setting a timeout etc,
systemd-boot is really simple and easy - just edit /boot/loader/loader.conf and add entries in
/boot/loader/entries/. These files are so simple that you really don't need a tool to edit them.
If you want the stuff that systemd-boot can't do, like themes and whatever, then you can very easily disable systemd-boot with "bootctl remove". And then just install Grub as normal. Install the grub package and then use "grub-install" (with some options) to install it as boot loader. https://wiki.archlinux.org/title/GRUB#Installation covers that pretty well.
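A hedged example of what those files can look like (filenames, title, and UUID are illustrative placeholders, roughly following the Arch convention of keeping kernels on the ESP):
# /boot/loader/loader.conf
default  linux.conf
timeout  4
# /boot/loader/entries/linux.conf
title    Linux
linux    /vmlinuz-linux
initrd   /initramfs-linux.img
options  root=UUID=... rw quiet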
Other Software
EFISTUB, rEFInd.
Arch Wiki's "Boot loader - Feature comparison"
Miscellaneous
Wikipedia's "Booting"