Disk management



Layers

  1. GUI apps to manage multiple layers: GNOME Disks, GParted, VeraCrypt, Stratis (XFS-only), etc.

  2. Linux standard organization of files and directories in a system: /, /bin, /etc, /etc/passwd, /usr, and so on.
    Chris Hoffman's "The Linux Directory Structure, Explained"


  3. Linux OS API to a filesystem: inodes, which represent files (some of which are directories); operations such as read, write, create/delete file, make/remove directory.
    Wikipedia's "File system"

    Format on disk (of ext* filesystem):
    - Boot block ?
    - Superblock (maybe run "sudo dumpe2fs -h /dev/sda1 | less" to see info).
    - Inode (index node) table. (Each inode points to data blocks)
    - Dentries (translate between names and inode numbers, and maintain relationships between directories and files).
    - Data storage blocks (actual contents of files and directories).
    Diagram from Simone Demblon and Sebastian Spitzner's "Linux Internals - Filesystems"
    M. Tim Jones' "Anatomy of the Linux file system"
    SoByte's "Linux File System"
    Kernel.org's "ext4 Data Structures and Algorithms - High Level Design - Special inodes"
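
    To poke at these structures from the shell (a minimal sketch; the file and device names are examples):

    ls -i /etc/passwd     # show the inode number behind a directory entry
    stat /etc/passwd      # inode details: link count, blocks, times
    df -i /               # inode usage of the filesystem holding /
    sudo debugfs -R "stat /somefile" /dev/sdXN   # inspect an inode directly on an ext* filesystem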


  4. Plaintext or individually encrypted files: e.g. normal files and directories; app-encrypted files such as password manager databases or encrypted SQL databases.

  5. Filesystem mounted instances: mount-point name, filesystem type, and device name.
    e.g. "/" is the mount location of an ext4 filesystem stored on /dev/sda5.
    Run "df -khT" to see the mounted filesystems.




[I'm told layers 6 through 10 really can be mixed in any order; you can stack any block device on top of any other. Maybe consider the following to be a typical order.]

  1. Upper (stacked) filesystem formats: eCryptfs; EncFS; gocryptfs; Windows' Encrypting File System (EFS); AVFS.

    Wikipedia's "ECryptfs"
    SK's "How To Encrypt Directories With eCryptfs In Linux"
    I'm told the eCryptfs code is considered old and unmaintained, so Ubuntu has dropped that option.
    Wikipedia's "Encrypting File System" (EFS)


  2. Base filesystem formats: format of data stored inside a partition. E.g. ext4, fat32, NTFS, Btrfs, ZFS.
    Jim Salter's "Understanding Linux filesystems: ext4 and beyond"
    ArchWiki's "File systems"

    ext4 can have a file/directory encryption module added to it: fscrypt (article1, article2).

    Scan filesystem and map bad blocks into the "don't use" (bad-blocks) inode (ext* filesystems only): "e2fsck -c".
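
    For example (a sketch; /dev/sdb1 is hypothetical, and the filesystem must be unmounted first):

    sudo umount /dev/sdb1
    sudo e2fsck -c /dev/sdb1     # read-only badblocks scan; bad sectors get added to the bad-blocks inode
    sudo e2fsck -cc /dev/sdb1    # slower, non-destructive read-write test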

    Xiao Guoan's "How to Fix 'can't read superblock' Error on Linux (ext4 & Btrfs)"


  3. Manager: e.g. Linux's LVM (Logical Volume Manager), or software RAID, or ZFS, or Btrfs.

    Jesse Smith's "Combining the storage space of multiple disks"
    Karthick's "Linux Logical Volume Manager (LVM) Guide"
    David Both's "A Linux user's guide to Logical Volume Management"

    "Device mapper" is a framework that things such as LVM and dm-verity and dm-crypt talk to.
    "sudo dmsetup info"
    Wikipedia's "Device mapper"

    LVM is oriented toward providing flexibility, while RAID is oriented toward providing reliability.

    I think LVM can be used in two opposite ways:
    • [On a large system:] Present a single "virtual partition" to the layer above, but the data is stored across multiple physical partitions and devices.

    • [On a small/old single-disk MBR system limited to 4 primary partitions:] Present multiple "virtual partitions" to the layer above, which can use them for swap, /, and /home, but the data is stored in a single physical "extended partition" on disk.


    Ubuntu Wiki's "LVM"
    Wikipedia's "Logical volume management"
    Wikipedia's "Logical Volume Manager (Linux)"
    Sarath Pillai's "The Advanced Guide to LVM"
    Kenneth Aaron's "Install Linux with LVM"
    terminalblues' "LVM Lab Setup With VirtualBox"
    With LVM, see partition type "lvm" in lsblk.
    LVM concepts, from lowest: PV (Physical Volume), then VG (Volume Group), then LV (Logical Volume).
    Corresponding LVM commands: "sudo pvs --all", "sudo vgs --all", "sudo lvs --all".
    "sudo lvm fullreport"
    Example corresponding LVM names: /dev/sda6, vgubuntu-mate, root.
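
    A minimal sketch of building that stack by hand (device /dev/sdb1 and the names "vg0"/"lv0" are hypothetical):

    sudo pvcreate /dev/sdb1                    # mark the partition as a Physical Volume
    sudo vgcreate vg0 /dev/sdb1                # create a Volume Group containing it
    sudo lvcreate --name lv0 --size 10G vg0    # carve a Logical Volume out of the VG
    sudo mkfs.ext4 /dev/vg0/lv0                # put a filesystem in the LV
    sudo mount /dev/vg0/lv0 /mnt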

    Heard on a podcast: LVM can do snapshots, but they kill performance.

    LVM can use a small fast drive as a cache for a large slow drive; see "man lvmcache".

    "Linux does not allow manipulation of the root file system if it's in use (even with LVM). To resize an LVM Root partition, you must boot into a live disk." (May refer to partition boundaries, and certain filesystem types. For example you might be able to shrink a Btrfs root filesystem while it's in use.)

    Software RAID: "mdadm" command. LVM also has RAID capabilities.
    Marius Ducea's "Mdadm Cheat Sheet"
    Thomas-Krenn's "Mdadm recovery and resync"
    Andy's "Debian-installer, mdadm configuration and the Bad Blocks Controversy"
    Chris Siebenmann's "Getting the names of your Linux software RAID devices to stick"
    "sudo mdadm --detail-platform"
    "sudo ls /proc/mdstat"
    Warning: if a drive drops out of the array with errors, and you add it back in and it immediately says "OK", do not trust this. Force a full scrub/read on the RAID.
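
    A sketch of creating a RAID1 array and forcing that full scrub (the devices and array name are examples):

    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat
    echo check | sudo tee /sys/block/md0/md/sync_action   # force a full read/scrub of the array
    cat /sys/block/md0/md/mismatch_cnt                    # inspect the result afterward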


  4. Device-mapper and full-volume/block-level encryption: e.g. dm-crypt (usually used with the LUKS on-disk format, via cryptsetup); VeraCrypt "full-disk" (really, full-partition) encryption; BitLocker; Loop-AES.

    Wikipedia's "dm-crypt"
    Wikipedia's "Linux Unified Key Setup" (LUKS)
    ArchWiki's "dm-crypt / Encrypting an entire system"
    Wikipedia's "VeraCrypt"
    Loop-AES README

    BitLocker

    Wikipedia's "BitLocker"

    Dislocker (might be in distro's repo)
    "man libbde-utils" and "bdemount"
    Hasleo's BitLocker Anywhere For Linux
    Versions 2.3.0+ of cryptsetup support BitLocker [but not the Encrypt-On-Write conversion mode].

    Get recovery key:
    From Microsoft account (Devices / View details / Bitlocker data protection / Manage recovery keys).
    Or (on Windows, run as Administrator) "manage-bde -protectors -get c:" (recovery key is shown as "password").
    Also could get keyfile (.bek file) ? Don't know how.
    Microsoft's "manage-bde"
    See encryption status: run as admin "manage-bde -status".
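
    A sketch of opening a BitLocker partition with cryptsetup 2.3.0+ (/dev/sdb1 and the name "winc" are examples):

    sudo cryptsetup bitlkOpen /dev/sdb1 winc   # give password or recovery key
    sudo mount /dev/mapper/winc /mnt
    sudo umount /mnt
    sudo cryptsetup close winc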

    With dm-crypt / LUKS, see partition type "crypt" in lsblk.
    With VeraCrypt, see partition type "dm" in lsblk.

    Device mapping also can be used to implement other things, such as:
    block integrity-checking: dm-integrity

    Network Block Device (NBD):
    nbdkit

    Some software (LUKS) has a recognizable header on the encrypted volume; others (VeraCrypt, Loop-AES) do not. Not having a header may make it harder to attack, and may give plausible deniability. [But with LUKS see Shufflecake.]


  5. Physical partitions: e.g. /dev/sda5, /dev/sdb1, /dev/nvme0n1p3.
    And a partition table (Master Boot Record (MBR, sometimes called DOS partition table) or GUID Partition Table (GPT)) to list the partitions.

    "hwinfo --short --block"
    "lsblk"
    Volume and partition IDs are different:
    "lsblk --output NAME,PATH,MOUNTPOINTS,UUID,PARTLABEL,PARTUUID,TYPE"

    From someone on StackExchange:
    "Volume implies formatting and partition does not. A partition is just any continuous set of storage sectors listed in some table (e.g. MBR or GPT). A volume is a set of sectors belonging to the same filesystem, i.e. an implemented filesystem.".

    A key thing I would put at this level: for container files, command "losetup" makes a regular file appear as a block device (misleadingly named as a "loop" device). Try "losetup --list".
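
    A minimal sketch of the round trip (the file name is an example):

    dd if=/dev/zero of=container.img bs=1M count=100
    sudo losetup --find --show container.img   # prints the allocated device, e.g. /dev/loop0
    losetup --list
    sudo mkfs.ext4 /dev/loop0                  # now the file can be treated like any block device
    sudo losetup --detach /dev/loop0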

    To see device block size (but it may lie to you): "sudo blockdev --getbsz /dev/sdaN"
    "cat /sys/block/sda/queue/logical_block_size"
    "cat /sys/block/sda/queue/physical_block_size"
    "udevadm info -a -n /dev/nvme0n1p2"

    To see performance stats every N seconds:
    "iostat -xz 10" (from sysstat package)
    "watch -n 1 iostat -xy --human 1 1"

    Sergio Prado's "RPMB, a secret place inside the eMMC"


  6. Physical devices: e.g. /dev/sda, /dev/nvme0n1.


    "sudo tlp-stat --disk"





  1. Disk hardware striping/mirroring (if any). E.g. hardware RAID.
    Wikipedia's "RAID"
    Wikipedia's "RAID-Z" (with ZFS)

    Note: each type of controller probably has its own custom format for data on disk. So if your hardware dies, you have to replace it with identical hardware; you can't just attach the disks to any random system.

  2. Intermediate/bridge physical interfaces to media: e.g. USB, SCSI, RAID controller.
    Try doing "cat /proc/scsi/scsi" and "lsusb" and "udevadm info -a -n /dev/nvme0n1p2".

  3. Physical interfaces to media: e.g. IDE, SCSI, SATA.
    Wikipedia's "Hard disk drive interface"

  4. Disk controller and firmware.

    Some drives support eDrive or Opal encryption (AKA "self-encrypting drive"; SED).
    Generally the system BIOS has to support using it.
    ArchWiki's "Self-encrypting drives"

    Most/all SSDs do block-remapping at the firmware level, to do wear-leveling.

  5. Raw media: e.g. spinning hard disk, SSD, flash drive (MMC or UFS), SD card, CD-ROM, DVD-ROM, floppy disk.
    Appears as e.g. /dev/sda, /dev/sdb, /dev/nvme0n1 (SSD), /dev/vda (in a VM), /dev/mmcblk0 (MMC device).
    "udevadm info /dev/nvme0n1"
    "lsblk --json -O /dev/nvme0n1p1"
    "udisksctl info -b /dev/nvme0n1"
    "udisksctl info -b /dev/nvme0n1p1"
    Use GParted View/DeviceInformation to see info.
    Use "badblocks" (not on SSD) to test for bad blocks on a raw empty partition.

Thomas Krenn's "Diagram of Linux Storage Stack"
Thomas Krenn's "Diagram of Linux I/O Stack"



Example 1, my Linux Mint 19.2 system:

$ df -hT
Filesystem             Type      Size  Used Avail Use% Mounted on
/dev/sda5              ext4       33G   25G  7.0G  78% /
/dev/sda6              ext4      259G  182G   65G  74% /home
/dev/sda1              ext4      945M  175M  705M  20% /boot
/home/user1/.Private   ecryptfs  259G  182G   65G  74% /home/user1
  1. Standard Linux organization, except my personal files under /home.
  2. Standard Linux filesystem API.
  3. My password manager database file "KeePassDatabase.kdbx" is app-encrypted.
  4. It is under "/home/user1", which is the mount location of an eCryptfs filesystem stored on "/home/user1/.Private".
  5. "/home/user1/.Private" is using upper filesystem format eCryptfs.
  6. The base filesystem format of "/home" is ext4.
  7. Manager: none; not using LVM or RAID.
  8. Device-mapper and full-volume/block-level encryption: none ?
  9. Physical partitions: /home is on /dev/sda6.
    Partition table on /dev/sda is a Master Boot Record (MBR) table.
  10. Disk hardware striping/mirroring: none.
  11. Intermediate/bridge physical interfaces to media: SCSI.
  12. Physical interface to media: SATAII / Serial-ATA/300.
  13. Disk controller: whatever is on the circuit board attached to the disk (probably mainly some controller ASIC); no hardware encryption.
  14. Raw media: Western Digital model "ATA WDC WD3200BEVT-7" spinning hard disk, 298 GiB (320 GB), 5400 RPM 2.5" diameter, appears as /dev/sda.

Example 2, a VeraCrypt container mounted in my Linux Mint 19.2 system:

$ df -ahT
Filesystem             Type      Size  Used Avail Use% Mounted on
/dev/sda6              ext4      259G  182G   65G  74% /home
/home/user1/.Private   ecryptfs  259G  182G   65G  74% /home/user1
/dev/mapper/veracrypt1 ext4      2.0G  1.1G  750M  60% /media/veracrypt1
  1. Standard Linux organization, except my personal files under /home.
  2. Standard Linux filesystem API.
  3. Plaintext file "MyBankInfo.txt" is in a 2.0GB VeraCrypt container on /home/user1.
  4. It is under "/media/veracrypt1", which is the mount location of an ext4 filesystem stored on "/dev/mapper/veracrypt1".
  5. "/dev/mapper/veracrypt1" is using upper filesystem format ???.
    Both VeraCrypt and eCryptfs are in here somewhere.
  6. The base filesystem format of "/dev/mapper/veracrypt1" is ext4 ?
  7. Manager: none; not using LVM or RAID.
  8. Device-mapper and full-volume/block-level encryption: none ?
  9. Physical partitions: /home is on /dev/sda6.
    Partition table on /dev/sda is a Master Boot Record (MBR) table.
  10. Disk hardware striping/mirroring: none.
  11. Intermediate/bridge physical interfaces to media: SCSI.
  12. Physical interface to media: SATAII / Serial-ATA/300.
  13. Disk controller: whatever is on the circuit board attached to the disk (probably mainly some controller ASIC); no hardware encryption.
  14. Raw media: Western Digital model "ATA WDC WD3200BEVT-7" spinning hard disk, 298 GiB (320 GB), 5400 RPM 2.5" diameter, appears as /dev/sda.



From someone on reddit:
My view on it is that there are no layers. There are just different combinations, abstractions, attachments, slices and mirrors of block devices. Upon which you can either build other block devices, or store raw data which could include filesystems.

...

The root of it is that the Linux block device is the base unit and since the other entities present block devices as their product, it gets confusing since the system is making block devices from other block devices and parts of block devices.

...

The first two items in #5 [VeraCrypt containers; eCryptfs] are special types of filesystems, but the 3rd thing [Windows' Encrypting File System (EFS)] is referring to something that becomes a block device. Once it is a block device, then it can be used wherever a block device is used.

#6 is talking about filesystems and "partitions". But it's only a partition if it is referred to in a partition table (GPT, MBR, Sun, SGI, BSD). And even then, the OS only sees that data through the lens of a block device. See "man fdisk".

Trying to represent this as layers breaks pretty fast. For example with LVM, the LV is in a VG. And a VG encompasses one or more PVs. An LV can be spread across multiple PVs.

As I say, in the end actual data is on storage that shows up in Linux as a block device. http://www.haifux.org/lectures/86-sil/kernel-modules-drivers/node10.html

> [me trying to defend layers:]
> For example, can a VeraCrypt container be below (contain) a LVM
> volume ? I don't think so, but maybe I'm wrong.

In Linux, VeraCrypt can encrypt a file. That file can contain general data, a filesystem, or a partition table that divides up the file into partitions.

Also as a file, you can attach it to a loop device and then you can use that as an LVM PV (Physical Volume) -- the first bullet here: https://www.veracrypt.fr/en/Home.html



Partitions: "cat /proc/partitions"

"sudo blkid" is the best way to see what type a filesystem is if you're using non-standard filesystems. In output of blkid, exFAT displays as 'SEC_TYPE="msdos" TYPE="vfat"'; NTFS displays as 'TYPE="ntfs" PTTYPE="dos"'. Most other commands show them as "fuseblk" or no-type.

"mount | column --table" or "findmnt -A" may show an amazing number of mounted filesystems, including snaps, tmpfs's, cgroups, mappers.

Vivek Gite's "Linux Hard Disk Encryption With LUKS [ cryptsetup encrypt command ]"
Beencrypted's "How To Encrypt Disk In Linux Securely"
ArchWiki's "Disk encryption"
"man cryptsetup"

There is another mechanism that lets a non-sudo user mount LUKS volumes:
"man cryptmount", "man cryptsetup-mount", "man cmtab", "cat /etc/cryptmount/cmtab"

ZFS is a somewhat-new-on-Linux [in 2020] system that integrates several layers (logical volume manager, RAID system, and filesystem) into a unit, and includes features such as checksums and snapshots and copy-on-write. Mostly oriented toward server/enterprise. It works best when you let ZFS manage entire disks, although a pool can be built from partitions.

Magesh Maruthamuthu's "13 Methods to [Identify] the File System Type on Linux"



If you boot from USB, how to mount LVM/LUKS hard disk

random neuron misfires' "HOWTO mount an external, encrypted LUKS volume under Linux"
Vivek Gite's "Linux mount an LVM volume / partition"


apt list | grep lvm2/ | grep installed
# if not:
sudo apt install lvm2

lsmod | grep dm_crypt
# if not:
sudo modprobe dm-crypt

# encrypted LUKS volume contains an encrypted LVM

# do LUKS
lsblk
sudo cryptsetup luksOpen /dev/sda6 VGNAME  # arbitrary mapper name "VGNAME"
# give passphrase
stat /dev/mapper/VGNAME
sudo mkdir -p /mnt/VGNAME

sudo mount /dev/mapper/VGNAME /mnt/VGNAME/
# should get "mount: unknown filesystem type 'LVM2_member'"

# do LVM
sudo vgscan               # find info about LVM devices
# should see a VG with a name like "vgubuntu-mate"
sudo vgchange -a y vgubuntu-mate   # activate that volume group
sudo lvdisplay
sudo lvs
# see LV device paths, something like "/dev/dm-2"
ls -l /dev/vgubuntu-mate/   # suppose it has LVs home, root, swap
sudo mkdir -vp /mnt/VGNAME/{root,home} # create mount points
sudo mount /dev/dm-2 /mnt/VGNAME/root

df -T | grep VGNAME
ls -l /mnt/VGNAME/root

sudo umount /dev/dm-2
sudo vgchange -a n vgubuntu-mate   # de-activate the volume group



Vivek Gite's "How To List Disk Partitions"







Filesystem Types



Wikipedia's "File system"
Wikipedia's "Comparison of file systems"

As far as I know, the only common Linux local filesystems that do check-summing to fight "bit rot" (failed sectors on disk) are ZFS and Btrfs. Check-summing does not protect you from data loss, but it prevents such data loss from going undetected. If you want to repair the errors without losing data, you need to be using parity or some forms of RAID.

Some things that are not filesystems: partition table, boot, swap, LVM group.




# List filesystem types available to load:
ls /lib/modules/$(uname -r)/kernel/fs

# List filesystem types currently loaded:
cat /proc/filesystems

# Can't list all available FUSE filesystem types;
# any app could make a type available at any time.

# List FUSE filesystem types currently loaded:
mount | grep 'fuse\.' | cut -d ' ' -f 5 | sort -u

# Descriptions of filesystem types:
man fs





Michael Larabel's "XFS / EXT4 / Btrfs / F2FS / NILFS2 Performance On Linux 5.8"

My opinion: for RAID or many-big-disks, use ZFS. For one- or two-disk systems, and on backup drives, use Btrfs.

Huge generalization, but apparently for performance on SSD: ext4 is best, Btrfs in middle, ZFS worst.



From someone on reddit 8/2020:
It's worth mentioning that while btrfs and zfs both have features that make snapshots easier to take (and a bunch of other awesome features), ext4 is actually more resilient. When ext4 breaks, it's almost always fixable without having to reformat. When zfs and btrfs break, the data is usually recoverable, but it can sometimes require a reformat to get the drive operational again.

Source: I do data recovery professionally.

...

[I asked why: better journaling, tools, what ?]

I'm not 100% sure why ... When it comes to repair, I use the same tools as everyone else. And ext4's pretty much always work (which isn't the case for any filesystem on any OS, from what I can tell). I think ext4 being developed and patched by so many more people, for so many years as the default for pretty much all of Linux, and as a result of its ubiquitousness, has resulted in a rock-solid piece of technology that's almost impossible to break.

It's worth noting that NTFS on Windows can be broken beyond repair, and we regularly see that. As can Apple's filesystems, APFS and HFS+ (HFS+ was actually surprisingly fragile).

...

I'm talking about worst-case scenarios anyway. As someone who does data recovery professionally, nobody calls me when things are going well, lol. Btrfs is a fantastic, and almost always stable, filesystem. ZFS even more so (just because it's a more mature code-base).

I also install, configure, and maintain Linux systems professionally (both for standard desktop users and servers). And the majority of filesystem errors on all common Linux filesystems are repairable, even when caused by power outages or hardware failure (which are the worst-case scenarios for a filesystem). My comments were mostly meant to highlight the bulletproof nature of ext4, not to call out the next generation filesystems.

...

[Guessing from a comment by someone else: Advanced filesystems such as Btrfs and ZFS may be totally fine and repairable if you use them as simple filesystems. Perhaps it's when you start using them "full stack" and RAID and such and then have a hardware failure that you can get into rare irreparable situations.]



Hard link: two or more directory entries (filenames) contain the same inode number. Made with "ln". All entries have equal status, and one can be deleted without affecting the others. All entries must be on the same filesystem as the file. Can't hard-link to a directory.

Symbolic / soft link: a special file whose contents are the name of another file. Made with "ln -s". If the real file is deleted, the link becomes dangling. Can link to a file on another filesystem. Can symbolic-link to a directory, or to another symbolic link.

Bind mount: a mount of one existing directory to appear also at another path. Made with "mount -o bind /existing /new". Can link to a directory on another filesystem. Transient; it is not stored in the filesystem, only in the current mount table.

# Show bind mounts:
awk '$4 != "/" {printf("%-20s -> %-20s\n",$4,$5)}' /proc/self/mountinfo 
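
A minimal sketch of all three (the paths are examples):

ln /data/file.txt /data/hardlink.txt      # same inode, must be on same filesystem
ln -s /data/file.txt /tmp/softlink.txt    # special file containing the target's name
sudo mkdir -p /mnt/data-view
sudo mount -o bind /data /mnt/data-view   # same directory visible at a second path
ls -i /data/file.txt /data/hardlink.txt   # both show the same inode number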


Tookmund's "The Unexpected Importance of the Trailing Slash"





Encryption of data at rest



Things you could encrypt

  • A single file (maybe use 7zip, GnuPG, ccrypt).

  • An archive of files (maybe use 7zip, rar).

  • A directory tree (maybe use eCryptfs, gocryptfs, zuluCrypt, Plasma's Vault).

  • A container file that you then mount as a data disk (LUKS, VeraCrypt, zuluCrypt).

  • A disk partition (maybe use LUKS, VeraCrypt, zuluCrypt).

  • An entire data disk (maybe use LUKS, VeraCrypt, zuluCrypt).

    Danger: if you attach an encrypted disk to Windows, and Windows doesn't recognize the encryption format, it will assume it's a new unformatted disk and ask if you want to format it.

  • Most of a boot/system disk (LVM/LUKS on Linux, or VeraCrypt on Windows).

  • An entire boot/system disk (hardware encryption).




How is the plaintext data presented to you ?

  • A single file. (GnuPG, ccrypt)

  • A directory tree from a mountable filesystem, perhaps encompassing everything you see from "/" on down, or everything from your home directory on down, or other places. (VeraCrypt, zuluCrypt, LUKS, eCryptfs, gocryptfs, AVFS)

  • Files inside an archive manager application, and you can extract them to the normal directories. (7zip, Engrampa, Archive Manager)




My strategy

For critical security software, I want: open-source, standard part of stock OS, widely used, lots of devs, simpler.

So for system/boot disk encryption, I use whatever is built into the OS. Usually LVM/LUKS.

For data disks and containers, I had been using VeraCrypt. But I got a little spooked about the TrueCrypt devs being anonymous, VeraCrypt's license status, and I wonder how many devs are working on VC. It's not "simple" in that it has a lot of features I don't need: cross-platform, hidden volumes, encrypted boot disk (on Windows), in-place encryption, many cipher algorithms.

I used to think cross-platform was important to me, but I changed my mind. In an emergency, I can boot from a Linux live-session on a USB stick, and copy needed files to a non-encrypted filesystem.

Then I found that LUKS can do container files as well as partitions, so I switched to LUKS for everything. It seems VeraCrypt is mostly a GUI on top of code similar to LUKS1: all the same features are there in LUKS. And in fact VeraCrypt seems to be sticking with older settings (LUKS1) to preserve compatibility with old software. But using LUKS directly, I have LUKS2. And people are telling me that LUKS2 uses more secure algorithms.

On Ubuntu, software automatically detects a LUKS-encrypted disk and asks for passphrase and opens and mounts it, no scripts needed. For container files, the behavior varies by file manager.



Alternatives

Archive encryption: "zip -e", zipcloak, gpgtar.
Single-file encryption: "vim -x", aescrypt, bcrypt.
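
For example, symmetric single-file encryption with GnuPG (a sketch; the file name is an example):

gpg --symmetric --cipher-algo AES256 secrets.txt   # asks for a passphrase; writes secrets.txt.gpg
gpg --decrypt secrets.txt.gpg > secrets.txt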

"7z" files: "sudo apt install p7zip-full", and then maybe Archive Manager will handle 7z files. If not, "7za x FILE.7z" or "7za x FILE.7z.001" to extract files.

Cryptomator: started as cloud-only, but now supports local folder encryption ? article





VeraCrypt



VeraCrypt
VeraCrypt on SourceForge
Tails' "Using VeraCrypt encrypted volumes"
Security-in-a-Box's "VeraCrypt for Windows - Secure File Storage"
reddit thread about VeraCrypt and Windows updates



Installed VeraCrypt by downloading a script and then running it via "sudo bash"; at first VeraCrypt didn't appear in the GUI menu of applications, but it showed up later. Made a couple of containers, and they work fine.

To update VeraCrypt:
Download veracrypt-*-setup.tar.bz2 or veracrypt-*.deb file from Downloads.
If *.bz2, double-click it and extract the files from it.
Unmount all VC volumes and quit out of VeraCrypt.
Double-click on veracrypt-*-setup-gui-x64 or *.deb file.
See text-GUI installer, click buttons to install.

There is a PPA: https://launchpad.net/%7Eunit193/+archive/ubuntu/encryption But then you're trusting whoever owns that PPA. [Apparently VeraCrypt is not in the standard repos and stores because the TrueCrypt license is not clear/standard, the TrueCrypt devs are anonymous, and it's unclear whether the VeraCrypt license is valid at all.]



Choices and policies

If you want an encrypted container or partition to be accessible on Windows, choose exFAT or NTFS (not ext*) for the filesystem inside it. There are freeware utilities such as Linux Reader that can read ext* filesystems on Windows, and now WSL2 can do ext4, but maybe it's better to just use a filesystem that Windows can understand natively. Relevant discussion. Article. On Linux, you may lose symlinks when copying from ext4 to exFAT or NTFS ? exFAT is the best choice: the Linux kernel will be adding deeper support for it in mid-2020, and Mac OSX also supports it. On a Linux 5.4 kernel, "sudo apt install exfat-fuse exfat-utils". But exFAT doesn't allow some characters in filenames, which complicates things when copying from ext4 to exFAT.
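
A sketch of formatting a partition as exFAT on Linux (assuming the exFAT tools are installed; /dev/sdb1 is an example; exfat-utils uses -n for the label, newer exfatprogs uses -L):

sudo mkfs.exfat -n LABEL /dev/sdb1
sudo fsck.exfat /dev/sdb1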

For a big external disk, it's far easier/quicker to make a full-disk/full-partition VC-encrypted volume, rather than create a non-encrypted partition with a filesystem in it and then a VeraCrypt container file in that filesystem. Just leave the drive "raw" (no partitions), use VeraCrypt to make an encrypted volume (Linux: on the drive /dev/sdb, not a partition /dev/sdb1), using VC's "quick format" option. You will have to have the VeraCrypt application already installed on any computer where you want to access that drive, which seems okay. One danger: when you attach such a drive (full-volume VC-encrypted) to a Windows machine, Windows will say something like "drive looks unformatted, want me to format it for you ?". You must be VERY careful to say "no" every time you attach that drive. [Some people say you can disable this behavior via "delete the drive letter in Disk Management".]

If you want extra security, when you create an encrypted container or partition, you could choose a stacked cipher instead of the default AES, and some hash function other than the default HMAC-SHA-512. And if you create multiple containers/volumes, you could use different settings for each. You also could use keyfiles, hidden containers, PIM settings. But at some point you are more likely to fool or confuse yourself rather than some adversary. It might be best to just stick to the defaults, or use just one group of settings and apply it to all of your volumes.

Good practice: right after you create an encrypted container, before you put anything in it, turn off compression, disable COW, turn off atime updating: "chattr -c +C +A FILENAME". These settings are ignored and harmless in filesystems that don't support them.

Good practice: after you create an encrypted container or partition, click the "Volume Tools ..." button and select "Backup Volume Header ..." and save the backup to somewhere else. Might save you if the volume gets corrupted somehow. [Although actually VeraCrypt already maintains a backup copy of the header inside the volume. So you'd have to lose a bunch of blocks to lose both the primary and backup headers inside the volume, and need to use your external backup copy.]



Quirks and issues

A couple of quirks with VeraCrypt containers, at least in Linux: Firefox doesn't properly remember "last folder saved to" if it's in a VC container, and Nemo renaming a folder in a VC container doesn't work if all you're changing is capitalization ?

A quirk with VeraCrypt containers related to cloud backup: When you modify a file in the container, the modified-time and checksum of the whole container change. So if you add a 1-KB file to a 10 GB container, the backup software will say "okay, have to write this whole 10-GB file up to the cloud". (Same is true of any other aggregate format, such as ZIP or RAR.)

I used MEGAsync for a while, but had a couple of bad experiences where somehow it appeared that the file on my laptop (what I considered the master) was older than the copy in MEGAsync (what I considered a backup), and MEGAsync synced the old file down to my laptop, and I lost data. Seemed to happen with VeraCrypt containers in use; I would forget to dismount them, and MEGAsync would see them as old. VeraCrypt on Linux has some quirks with updating the modified time on the container file. https://sourceforge.net/p/veracrypt/tickets/277/

Auto-mount functionality is only for encrypted volumes/partitions, not on containers.
Same with quick-format; not available when creating a container.
To mount a container using CLI:

veracrypt --slot=1 /home/user1/.../MYCONTAINERFILENAME /media/veracrypt1
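
To dismount from the CLI (a sketch):

veracrypt -d /home/user1/.../MYCONTAINERFILENAME   # dismount that one volume
veracrypt -d                                       # dismount all mounted VeraCrypt volumes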

Apparently there is a longstanding performance issue with VeraCrypt, although it may show up only on heavy multi-threaded benchmarks or under extreme loads, mostly with SSD ? reddit thread



Checking and fixing a VeraCrypt volume

It seems there are really just two things that can go wrong:
  • VeraCrypt volume header gets damaged somehow.

    Even with the correct password, you will get a message "can't mount because wrong password or not a VeraCrypt volume".

    In this case, maybe copy whole volume to a known-good disk, check original for disk errors, then restore VeraCrypt volume header from a header backup.

  • Contents of VeraCrypt volume after the VC header gets damaged somehow.

    The volume will be listed in VeraCrypt as mounted properly, but the operating system will say "unrecognized filesystem" (if the filesystem header is damaged) or something else if blocks after the FS header are damaged.

    In this case, maybe copy whole volume to a known-good disk, check original for disk errors, then run filesystem-repair utilities.


VeraCrypt's "Troubleshooting"

From VeraCrypt's "Frequently Asked Questions":
File system within a VeraCrypt volume may become corrupted in the same way as any normal unencrypted file system. When that happens, you can use filesystem repair tools supplied with your operating system to fix it. In Windows, it is the 'chkdsk' tool. VeraCrypt provides an easy way to use this tool on a VeraCrypt volume: Right-click the mounted volume in the main VeraCrypt window (in the drive list) and from the context menu select 'Repair Filesystem'.

I think the procedure (for a VC container) is:
There are TWO filesystems: the one inside the disk partition (call it FS-D) and the one inside the container file (call it FS-C).

Now, a sector goes bad that happens to be used by the container file.
Do this:
  1. First, make sure the disk is not throwing hardware errors. Maybe use a SMART utility. article
  2. Run fsck/chkdsk or whatever to repair the disk filesystem (FS-D). The OS might do this automatically. The filesystem has to be unmounted while it's being checked and repaired. (Note: "chkdsk /r" checks much more than "chkdsk /f".)
  3. Open VC container without mounting filesystem that's inside it (FS-C): Click Mount button, click Options button to expand the dialog, near bottom check the box "Do not mount". Type password, device will appear in list but the "Mount Directory" column will be empty.
  4. If opening the VC container fails, both volume headers inside the container are bad. Use a backup copy of the volume header, that you saved elsewhere. VC has a "repair" function to do this ?
  5. Right-click on the device in the list and select either the Check Filesystem or Repair Filesystem menu item. A small terminal window will open and FSCK will run. If instead you get an error dialog "xterm not found", go to CLI and run "sudo apt install xterm", then try again.
  6. Mount the container in VeraCrypt, and check dmesg to see that there are no error messages. Nemo does not report dirty filesystems (bad).
  7. Then you're good, no need to copy or move the container.

John Wesorick's "Running fsck on a Truecrypt Volume"
CGSecurity's "Recover a TrueCrypt Volume"
Silvershock's "Opening/Decrypting VeraCrypt drives without mounting them (for fsck)"



How secure is a VeraCrypt volume ?

My "veracryptcrack" project
My "veracryptcrack2" project
Oleg Afonin's "Breaking VeraCrypt containers"
NorthernSec / VeraCracker
Thynix / veracrypt-bruteforce
If you need to generate a wordlist: crunch

From someone on reddit:
There's essentially two parts to a VeraCrypt volume, the encrypted data and the encrypted header.

When you move your mouse around during volume creation, you are helping add entropy to generate the keys to the encrypted data.

For multiple reasons, there needs to be a header that tells the volume's size and other important data. It is also encrypted to help hide the fact it's specifically a VeraCrypt volume. The key to that is derived from hashing your password in a protocol called PBKDF2. So as usual with cryptographic systems that have a password, it's the password that is the weakest link. Brute-force attacks will never be used against the keys, but the password instead. So the password must be as strong as possible because that gives you access to the encrypted data.




Resizing a VeraCrypt volume

VeraCryptExpander utility. Windows-only.

First, outside of VeraCrypt (e.g. in Windows Disk Management), resize the partition. Then run VeraCryptExpander to resize the filesystem inside the VC volume ?



Bug-reporting: VeraCrypt / Tickets
Martin Brinkmann's "How to change the PIM of a VeraCrypt volume"
Andrew D. Anderson's "Auto-mounting a VeraCrypt volume under Ubuntu / Debian Linux"



Make a Btrfs filesystem inside a VeraCrypt volume

I'm using Ubuntu GNOME 20.04.

Install software:

sudo apt install btrfs-progs
man mkfs.btrfs

Important: In the following steps, change device names and labels as appropriate for your system. Best to have no extra removable or optional devices attached while doing operations, to avoid confusion.

Try Btrfs first on a real device (no VC) to make sure it works:

# Attach a USB drive to system.

# Check device name:
sudo dmesg		# probably /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

sudo wipefs --force --all /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

# Make Btrfs filesystem:
sudo mkfs.btrfs --force --label LABEL /dev/sdb
# If it says "ERROR: /dev/sdb is mounted", do "sudo umount /dev/sdb".

sudo btrfs check /dev/sdb

# See that new filesystem got mounted, maybe as /media/user1/LABEL:
df | grep sdb			# doesn't show up here ?
blkid | grep sdb		# doesn't show up here !
lsblk --fs /dev/sdb

sudo umount /dev/sdb
# Detach USB drive from system.
# Attach USB drive to system.

sudo chmod a+w /media/user1/LABEL

# Go to file explorer, see that new filesystem got mounted.
# Copy some files to it.
# In file explorer, unmount, remove drive, attach again, see it appear again.

Mount an existing VeraCrypt-encrypted volume to check device names:

# Attach USB drive to system.

# Run VeraCrypt GUI and mount volume.

# Check names:
lsblk --fs /dev/sdb
df -h | grep verac
# On my system, /dev/sdb to /dev/mapper/veracryptN mounted onto /media/veracryptN

# In VeraCrypt GUI, dismount volume.

# Detach USB drive from system.

VeraCrypt-encrypt a volume then add Btrfs:

# Attach USB drive to system.
# In file-explorer, if drive appears, ignore it.

# Run VeraCrypt GUI to create volume.
# Choose filesystem type "Linux Ext4" and "Quick format".
# Choose "I will mount the volume only on Linux".

# In VeraCrypt GUI, mount the volume.

# In file-explorer, in "Other Locations", drive should appear, click to unmount it.

# Check names:
df -h | grep verac		# volume does not appear

# Make Btrfs filesystem:
sudo mkfs.btrfs --force --label LABEL /dev/mapper/veracryptN

sudo btrfs check /dev/mapper/veracryptN

lsblk --fs /dev/mapper/veracryptN

# In VeraCrypt GUI, dismount the volume.

# Detach USB drive from system.

# Attach USB drive to system.
# In file-explorer, if drive appears (shouldn't), ignore it.

# In VeraCrypt GUI, mount the volume.

sudo chmod a+w /media/veracryptN

# In file-explorer, in "Other Locations", drive should appear.
# Copy files to it.

# Dismount in VC GUI, detach, attach, mount in VC GUI.
# Check files.

# Now mount/unmount can be done through VeraCrypt
# as usual; no need to do any special Btrfs commands.

One complication: it's best to mount a non-system Btrfs filesystem with the "noatime" flag specified, to avoid triggering COW on metadata when you read a file. In VeraCrypt GUI, specify that in Settings / Preferences / Mount Options / mount options. In VeraCrypt CLI, add "--fs-options=noatime". I would do this for all non-system volumes, regardless of filesystem type. Probably not a good idea to do it for a system volume, although you could do it to everything under your home directory via "chattr -R +A ~".

To see the flags after a volume is mounted:

mount | grep veracrypt5
# for Btrfs you probably want similar to: rw,noatime,space_cache,subvolid=5,subvol=/

I ended up doing that for all of my VeraCrypt volumes, regardless of filesystem type.

Make an encrypted ZFS filesystem (not using VeraCrypt)

I was going to try making a ZFS filesystem inside a VeraCrypt volume, but ZFS supports encryption natively, so no need to use VeraCrypt.

I'm using Ubuntu GNOME 20.04.

https://itsfoss.com/zfs-ubuntu/
https://wiki.ubuntu.com/Kernel/Reference/ZFS

I've read on /r/ZFS that ZFS is not intended for use with removable/USB drives; it's intended for enterprise, large, static configurations. It should work with USB; it's just that USB is inherently less reliable.

From someone on /r/ZFS:
"When you mount the ZFS pool to the system, it mounts to a directory in your filesystem. It won't show as a separate volume."

Apparently installing zfs-fuse would remove zfsutils-linux. I'm told zfs-fuse would give lower performance.

Install software:

sudo apt install zfsutils-linux zfs-dkms
# Probably get a license dialog, have to press Return for OK.
# May have to reboot at this point.

man zfs
man zpool
zfs list

Important: In the following steps, change device names and labels as appropriate for your system. Best to have no extra removable or optional devices attached while doing operations, to avoid confusion.

Try ZFS unencrypted first on a real device (no VC) to make sure it works:

# Attach a USB drive to system.

# Check device name:
sudo dmesg		# probably /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

sudo wipefs --force --all /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

zfs list
sudo zpool create MYPOOL /dev/sdb
sudo zpool status MYPOOL
zfs list
df -h | grep POOL
mount | grep zfs

# Make ZFS filesystem:
sudo zfs create MYPOOL/fs1
# If it says "ERROR: /dev/sdb is mounted", do "sudo umount /dev/sdb".
zfs list
df -h | grep POOL
mount | grep zfs
ls -ld /MYPOOL/fs1
lsblk --fs /dev/sdb
# Now filesystem is mounted and usable.
sudo chmod a+w /MYPOOL/fs1

cp /bin/ls /MYPOOL/fs1/aaa		# copy a file to it

sudo zpool scrub MYPOOL		# test data integrity
sudo zpool status -v MYPOOL	# if "scrub in progress", do again

sudo zpool export MYPOOL
sudo zpool status MYPOOL
# Detach USB drive from system.

# Attach USB drive to system.
sudo dmesg
sudo zpool import MYPOOL
zfs list
sudo zpool status MYPOOL
ls -l /MYPOOL/fs1

sudo zpool export MYPOOL
# Detach USB drive from system.

ZFS encrypted on a real device:

# https://www.medo64.com/2020/05/installing-encrypted-uefi-zfs-root-on-ubuntu-20-04/
# https://www.medo64.com/2020/04/installing-uefi-zfs-root-on-ubuntu-20-04/
# https://www.medo64.com/2020/06/testing-native-zfs-encryption-speed/
# https://blog.heckel.io/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/
# https://linsomniac.gitlab.io/post/2020-04-09-ubuntu-2004-encrypted-zfs/

# Attach a USB drive to system.

# Check device name:
sudo dmesg		# probably /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

sudo wipefs --force --all /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

zfs list
sudo zpool create -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase -f MYPOOL /dev/sdb
# Passphrase minimum of 8 chars.

# Make ZFS filesystem:
sudo zfs create MYPOOL/fs1
# If it says "ERROR: /dev/sdb is mounted", do "sudo umount /dev/sdb".
ls -ld /MYPOOL/fs1
lsblk --fs /dev/sdb
# Now filesystem is mounted and usable.
sudo chmod a+w /MYPOOL/fs1

cp /bin/ls /MYPOOL/fs1/aaa		# copy a file to it

# Test data integrity:
sudo zpool scrub MYPOOL
sudo zpool status -v MYPOOL		# if "scrub in progress", repeat

sudo zpool export MYPOOL
# Detach USB drive from system.

# Attach USB drive to system.
sudo zpool import -l MYPOOL
ls -l /MYPOOL/fs1

sudo zpool export MYPOOL
# Detach USB drive from system.

ZFS using an encrypted VeraCrypt volume (FAILED):

# Attach USB drive to system.
# In file-explorer, if drive appears, ignore it.
sudo dmesg
sudo wipefs --force --all /dev/sdb

# Run VeraCrypt GUI to create volume.
# Choose filesystem type "Linux Ext4" and "Quick format".
# Choose "I will mount the volume only on Linux".

# In VeraCrypt GUI, mount the volume.

# Check names:
df -h | grep verac

# In file explorer, Other Locations, find device and unmount it.

sudo zpool create -f MYPOOL /dev/mapper/veracryptN
zfs list
sudo zpool status MYPOOL

# Make ZFS filesystem:
sudo zfs create MYPOOL/fs1
ls -ld /MYPOOL/fs1
lsblk --fs /dev/mapper/veracryptN
# Now filesystem is mounted and usable.
df -h | grep MYPOOL
sudo chmod a+w /MYPOOL/fs1

cp /bin/ls /MYPOOL/fs1/aaa		# copy a file to it

# Test data integrity:
sudo zpool scrub MYPOOL
sudo zpool status -v MYPOOL		# if "scrub in progress", repeat

sudo umount /MYPOOL/fs1
sudo zpool export MYPOOL
# In VeraCrypt GUI, dismount the volume.
# Detach USB drive from system.

# Attach USB drive to system.
# In VeraCrypt GUI, mount the volume by clicking "Mount",
# type password, click "Options", check "Filesystem - do not mount".
#sudo zpool create -f MYPOOL /dev/mapper/veracryptN
sudo zpool create -f MYPOOL
sudo zfs create MYPOOL/fs1
sudo mkdir /MYPOOL/fs1
# FAIL: can't figure out how to get filesystem defined in pool without creating it anew
lsblk --fs /dev/mapper/veracryptN
sudo zfs mount MYPOOL/fs1
sudo mount -t zfs /dev/mapper/veracryptN /MYPOOL/fs1

ls -l /MYPOOL/fs1

sudo zpool export MYPOOL
# In VeraCrypt GUI, dismount the volume.
# Detach USB drive from system.




VeraCrypt on Linux uses FUSE to implement the filesystem "driver". Apparently the veracrypt application itself is used as GUI app, CLI app, and FUSE adapter/handler/daemon. "man fuse" https://github.com/libfuse/libfuse



Sarbasish Basu's "How to mount encrypted VeraCrypt or other volumes on an Android device"
EDS (Encrypted Data Store)





LUKS encryption



LVM is a volume manager; LUKS (Linux Unified Key Setup) is an encryption module.

LVM/LUKS as used on Ubuntu* distros to do "full-disk encryption" really isn't "whole disk" encryption: the partition table and boot partition are not encrypted. (Apparently there is a tricky way to also encrypt the boot-loader second-stage file-system: article)

From /u/HonestIncompetence on reddit:
+/-
LVM is not encryption. It's "Logical Volume Management", which is basically (very simplified) a smarter way to do partitioning, giving you much more flexibility later, e.g. to modify (grow/shrink) volumes, add new drives to the "pool" of storage, and things like that. LVM is not related to encryption, you can use LVM with or without encryption, and you can use encryption with or without LVM.

On a very technical nit-picky level, LUKS itself isn't encryption either, but a standard on how to manage encryption/keys/encrypted devices. The actual encryption itself is done by dm-crypt. Using dm-crypt without LUKS is a bit more of a hassle and not really common, so in practice "LUKS" is pretty much synonymous with "encryption" in the Linux world.

The most common setup, which is what most installation wizards offer, is as follows: you have two partitions on the drive, a small /boot partition and a large one for everything else. /boot is separate because it must not be encrypted, to be able to boot something that then decrypts the rest. The large partition is then encrypted using LUKS. Inside the LUKS container, LVM is used to create whatever volumes ("partitions") are needed/desired. The nice thing about using LVM on LUKS is that you have full flexibility to do whatever you like inside the encrypted part, and no matter what you do in there you can unlock it with a single password (unlike if you use several partitions, each with its own LUKS and no LVM).
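
On such a system, "lsblk" shows the whole stack; a sketch of typical output (names and sizes vary by system):

NAME                   TYPE   MOUNTPOINT
sda                    disk
├─sda1                 part   /boot
└─sda2                 part
  └─luks-1234abcd      crypt
    ├─vgname-root      lvm    /
    ├─vgname-swap      lvm    [SWAP]
    └─vgname-home      lvm    /home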



Tails' "Creating and using LUKS encrypted volumes"
Tyler Burton's "How to migrate from TrueCrypt to LUKS file containers"
Kees Cook's "GRUB and LUKS"
Unix Sheikh's "Real full disk encryption using GRUB on Artix Linux for BIOS and UEFI"

Milosz Galazka's "How to backup or restore LUKS header"
Vivek Gite's "How to backup and restore LUKS header"

Alvin Alexander's "Linux Mint: How to change the disk encryption password"
Vivek Gite's "How to change LUKS disk encryption passphrase"

Michael Larabel's "The 2019 Laptop Performance Cost To Linux Full-Disk Encryption"
Ignat Korchagin's "Speeding up Linux disk encryption"

Pawit Pornkitprasan's "Full Disk Encryption on Arch Linux backed by TPM 2.0"
Lennart Poettering's "Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248"
ArchWiki's "Trusted Platform Module"
linux-luks-tpm-boot
"lsmod | grep tpm"

Vivek Gite's "How to enable LUKS disk encryption with keyfile on Linux"
Maurits van der Schee's "LUKS with USB unlock"
Maurits van der Schee's "LUKS with SSH unlock"
openSUSE Wiki's "SDB:Encrypted root file system"
Brian Smith's "Using Linux System Roles to implement Clevis and Tang for automated LUKS volume unlocking"

Egidio Docile's "How to use LUKS with a detached header"

Maurits van der Schee's "LUKS recovery from initramfs shell"

Nuke the drive:
milosz's "How to erase LUKS header"

Set so logging in with a "nuke password" erases LUKS slots: article, thread

Network-based "nuke": juliocesarfort / nukemyluks

LibreCrypt (LUKS on Windows, but old and unsupported ?)

Matthew Garrett's "PSA: upgrade your LUKS key derivation function"
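
A sketch of checking and upgrading the key derivation function on an existing LUKS2 volume (the device name is an example):

sudo cryptsetup luksDump /dev/sda6 | grep PBKDF
sudo cryptsetup luksConvertKey --pbkdf argon2id /dev/sda6   # asks for an existing passphrase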




# See all algorithms available in your kernel:
cryptsetup benchmark

# Dump parameters:
sudo cryptsetup luksDump /dev/sda6 >saved.luksDump.sda6.txt

# See what cipher algorithm is used on data:
sudo cryptsetup luksDump /dev/nvme0n1p3 | grep cipher

# See what key-derivation function is used (argon2id best):
sudo cryptsetup luksDump /dev/nvme0n1p3 | grep PBKDF

# Make backup of header:
sudo cryptsetup luksHeaderBackup /dev/sda6 --header-backup-file saved.luks_backup_sda6

# See if passphrase works:
printf "MYPASSPHRASE" | sudo cryptsetup luksOpen --test-passphrase /dev/sda6 && echo "Works"



Encryption makes things more difficult if something goes wrong in booting before the disk is decrypted and mounted. Boot from an OS on USB and then:

# Find out device for encrypted partition:
lsblk

# Map encrypted partition to /dev/mapper/* and /dev/dm-* devices:
sudo cryptsetup open /dev/sda6 OLDROOT

# Mount partition:
sudo mkdir /mnt/OLDROOT
sudo chmod 777 /mnt/OLDROOT
sudo mount /dev/dm-9 /mnt/OLDROOT

# Now you can access the filesystem:
cd /mnt/OLDROOT
ls

# Get out:
cd /
sudo umount /mnt/OLDROOT
sudo cryptsetup close OLDROOT

# Shut down OS on USB, remove USB, boot from internal system.
System 76's "Login from Live Disk (Chroot)"

lsblk
sudo cryptsetup open /dev/nvme0n1p3 OLDROOT
sudo mkdir /mnt/OLDROOT
sudo chmod 777 /mnt/OLDROOT
sudo mount /dev/mapper/ubuntu-root /mnt/OLDROOT
sudo chroot /mnt/OLDROOT


Creating a LUKS encrypted full-disk volume


# started with Disks app showing no partitions

sudo cryptsetup --type luks2 --iter-time 4100 --verify-passphrase luksFormat /dev/sdb
# I added the --iter-time setting; just wanted something not default.
# will ask twice for passphrase

# Make a backup of the LUKS header:
sudo cryptsetup luksHeaderBackup /dev/sdb --header-backup-file LUKS2.HeaderBackup.MYVOL1

# MYVOL1 is an arbitrary, temporary name.
# Device /dev/mapper/MYVOL1 will appear.
sudo cryptsetup luksOpen /dev/sdb MYVOL1
# will ask for passphrase

# Format the volume.
# Use --mixed if size less than 109 MB.
# VOLNAME is a permanent name for the filesystem.
sudo mkfs.btrfs --label VOLNAME --mixed /dev/mapper/MYVOL1

# If you forget the label (I did), later do:
sudo btrfs filesystem label /dev/mapper/MYVOL1 VOLNAME

sync

sudo cryptsetup luksClose MYVOL1

eject /dev/sdb		# probably get "unable to open"

# unplug the USB drive, and plug it in again

# File manager should detect the drive, see that it is LUKS,
# ask for passphrase, and mount it.

# https://unix.stackexchange.com/questions/319592/set-default-mount-options-for-usb
# Change mount options:
# Run Disks app.
# Select the device in the left pane.
# Select the filesystem (lower "Volume") in the main pane.
# Click on the "gears" button below.
# Select "Edit mount options". 
# Slide "User Session Defaults" to left to turn it off.
# Un-click "Mount at system startup".
# Add a name in the "" field, it shows up in fstab later.
# Edit mount options (maybe add "noatime").
# Click Save.
# Quit out of Disks app.

cat /etc/fstab		# to see mods made by Disks app
# note the UUID of the new disk

# In file manager, dismount filesystem.
# Unplug drive and plug it in again.
# Filesystem may get mounted automatically.

mount				# to see the new mount flags
sudo chmod 777 /mnt/THEUUID

Jordan Williams' "Encrypt an External Disk on Linux"



Creating a LUKS-encrypted container file

From Oleg Afonin's "Breaking LUKS Encryption":
"LUKS can be used to create and run encrypted containers in a manner similar to other crypto containers such as VeraCrypt."

Following article1 (seems best to me):

dd if=/dev/zero of=vol1.luks conv=fsync bs=1 count=0 seek=50M

sudo cryptsetup --type luks2 --iter-time 4100 --verify-passphrase luksFormat vol1.luks
# I added the --iter-time setting; just wanted something not default.
# Will ask twice for passphrase to set on volume.

# Make a backup of the LUKS header:
sudo cryptsetup luksHeaderBackup vol1.luks --header-backup-file LUKS2.HeaderBackup.MYVOL1

# MYVOL1 is an arbitrary, temporary name.
# Device /dev/mapper/MYVOL1 will appear.
sudo cryptsetup luksOpen vol1.luks MYVOL1
# will ask for passphrase

# Format the volume.
# Use --mixed if size less than 109 MB.
# VOLNAME is a permanent name for the filesystem.
sudo mkfs.btrfs --label VOLNAME --mixed /dev/mapper/MYVOL1

# If you forget the label (I did), later do:
sudo btrfs filesystem label /dev/mapper/MYVOL1 VOLNAME

# vol1 is an arbitrary, temporary mount-point name.
sudo mkdir /mnt/vol1
# I like to use noatime; maybe you don't.
sudo mount -o defaults,noatime /dev/mapper/MYVOL1 /mnt/vol1
sudo chown -R $USER /mnt/vol1

sudo umount /mnt/vol1
sudo cryptsetup luksClose MYVOL1

Following article2:

SIZE=500
FILE=xxx.luks
fallocate -l ${SIZE}M $FILE
dd if=/dev/urandom of=$FILE conv=fsync bs=1M count=$SIZE

# run Disks utility
# select "Attach Disk Image"
# un-check "Set up read-only loop device"
# select file xxx.luks
# click Attach
# Now the file is attached as if it were a hard drive.

# Select the new "drive", click the Gears icon,
# choose type Encrypted (LUKS + ext4), set passphrase etc.
# Click Format.

# Eject the drive in Disks or in file manager.

Use Disks application to Attach the container volume when you want to use it.

Or: associate file-extension ".luks" with "Disk Image Mounter" application, then you can double-click on any "*.luks" container file to mount it. BUT: it will mount read-only ! You have to REMOVE the association to "Disk Image Mounter" and create an association to:

'/usr/bin/gnome-disk-image-mounter' --writable

# Note: This is not available in KDE, can't find any equivalent.
# Maybe clevis-luks, clevis-luks-bind ?
# Maybe create a new Dolphin "service menu" ?
# I created a new "Service Menu" to do it: lukscontainerfile.

Or to mount and then un-mount:

sudo cryptsetup luksOpen xxx.luks xxx
sudo mkdir -p /mnt/xxx
sudo mount -o defaults,noatime /dev/mapper/xxx /mnt/xxx

sudo umount /mnt/xxx
sudo cryptsetup close xxx

Or to mount and then un-mount:

udisksctl loop-setup -f xxx.luks
udisksctl unlock -b /dev/loop0
udisksctl mount -b /dev/dm-0

udisksctl unmount -b /dev/dm-0
udisksctl lock -b /dev/loop0
udisksctl loop-delete -b /dev/loop0

Change mount options to add noatime: Run Disks application.

Meer-Web's "Create an encrypted container in linux"
Luksman CLI utility




How secure is a LUKS* volume ?

My understanding is that there is no stored hash of the password (passphrase) in the LUKS header. So there is nothing to extract and then try to match with hashcat or similar. What is stored is an encrypted value V of the key K used to encrypt the actual data. So when the user opens the volume, the passphrase P and a salt S are used to transform V to produce K, and then K is used to decrypt the data. If P is wrong, you don't find out until you get the decrypted data and find out that it's not a valid filesystem.

From https://blog.elcomsoft.com/2020/08/breaking-luks-encryption/ :
"Unlike TrueCrypt/VeraCrypt, LUKS does store the information about the selected encryption settings in the encryption metadata, making it possible to detect the encryption settings prior to launching the attack."

Mike Fettis' "Cracking linux full disc encryption, luks with hashcat"
Forensic Focus's "Bruteforcing Linux Full Disk Encryption (LUKS) With Hashcat"
Diverto's "Cracking LUKS/dm-crypt passphrases"
milosz's "How to perform dictionary attack on LUKS passphrase"
Darell Tan's "Bruteforcing LUKS Volumes Explained"
glv2 / bruteforce-luks (uses cryptsetup API to be faster)
Terahash chart of random brute-force alphanum password-cracking times
Cryptsetup Wiki
Milan Broz's "LUKS2 On-Disk Format Specification 1.0.0"

Make a LUKS1 volume, and dictionary-attack it with hashcat:

sudo apt install hashcat	# or hashcat-nvidia
hashcat --version			# has to be 3.5.0 or better

# LUKS1 container file will be vol1.luks
dd if=/dev/zero of=vol1.luks conv=fsync bs=1 count=0 seek=50M

# Cracking will NOT work if you specify --iter-time
sudo cryptsetup --type luks1 --verify-passphrase luksFormat vol1.luks

# Cracking will NOT work unless you make a filesystem inside the container.
sudo cryptsetup luksOpen vol1.luks MYVOL1
sudo mkfs.ext4 /dev/mapper/MYVOL1
sudo cryptsetup luksClose MYVOL1

# LUKS1 container file is vol1.luks
# Dictionary is dict.txt; have container's password on one of the lines in it.
hashcat --force -m 14600 -a 0 -w 3 vol1.luks dict.txt -o luks_password.txt
# Keep typing "s" for "status" until done.
# See "Status ......: Cracked".
sudo cat luks_password.txt

Apparently hashcat 5.1.0 (and 6.2.4) does not support attacking LUKS2 volumes.

Apparently hashcat 6.2.4 on Manjaro requires use of a discrete GPU. Adding "-d 1" or "-D 1" to try to get it to use CPU didn't work. Installing "opencl-amd" (I have AMD CPU) got it going. But then it didn't crack the password, eventually screwed up the GUI, had to reboot.



Sarbasish Basu's "How to mount encrypted VeraCrypt or other volumes on an Android device"
EDS (Encrypted Data Store)





Solid-State Drive (SSD)



Note: an SSD is not just an HDD with chips instead of platters. Usually an SSD will have a cache, and firmware that does block-level (maybe 128 KB) operations, implements a mapping between sector numbers from the OS and block numbers in the chips, does wear-leveling, and has over-provisioning. [Some fancier HDDs may have the same features.]

So the SSD may lie to you about performance, "overwriting" a sector probably won't actually remove the data from the chips, and "deleted" data can not be recovered by running a recovery utility.



SSDs and HDDs seem to have quite different failure modes. Usually an HDD will start throwing bad sectors or tracks, and most of the disk can still be read before total failure occurs. An SSD may fail suddenly and completely, with none of the data retrievable. In some rare cases of extreme accumulated use, an SSD may fail by putting itself into read-only mode.

So an HDD should be cheaper and higher-capacity, and might fail more gradually; an SSD should be faster and impact-resistant, but could fail suddenly and completely.



Trim

+/-
Why is trim needed, and what is it ?

From Wikipedia:
"A trim command (known as TRIM in the ATA command set, and UNMAP in the SCSI command set) allows an operating system to inform a solid-state drive (SSD) which blocks of data are no longer considered to be 'in use' and therefore can be erased internally."

When a "delete" operation is done at the filesystem level, there is no corresponding "delete" at the device level; the device API has "read" and "write" operations, but not "delete".

A trim command notifies the SSD that some block is no longer being used.

The alternative to periodic trim is "live discard", enabled by "discard" in the mount options. Asynchronous live discard ("discard=async", in Btrfs) is becoming (kernel 6.2) the default for SSDs (article). "nodiscard" in the mount options would disable live discard, and then you should be using TRIM.
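A sketch of enabling live discard on an ext4 filesystem (device, mount point, and UUID are placeholders):

# One-time, at mount:
sudo mount -o discard /dev/sdXN /mnt
# Or permanently, via an /etc/fstab line:
# UUID=xxxx  /data  ext4  defaults,noatime,discard  0  2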

Alan Formy-Duval's "Extend the life of your SSD drive with fstrim"
Justin Ellingwood's "How To Configure Periodic TRIM for SSD Storage on Linux Servers"
Chris Siebenmann's "SSDs and their support for explicitly discarding blocks"
Chris Siebenmann's "Using TRIM on SSDs with ZFS"

From reddit: "Does external SSD drive support TRIM ?"
"It depends on the usb-to-sata/nvme adapter. You need to make sure that the controller in the external SSD (or SSD enclosure) you want to buy supports trim passthrough. Do internet search for info about the model you want to buy."

Testing TRIM:
ICTBanking article


TRIM and LUKS

+/-
Do "lsblk --discard". If DISC-GRAN and DISC-MAX values for LUKS volume are non-zero, TRIM should work and no change is needed.

See which filesystems have been trimmed: "journalctl | grep trimmed".

To fix DISC-GRAN and DISC-MAX values:
From Jay Ta'ala's "Enable periodic TRIM - including on a LUKS partition":
In /etc/default/grub, add ":allow-discards" to the cryptdevice argument in GRUB_CMDLINE_LINUX_DEFAULT, and add the kernel parameter "rd.luks.options=discard". Then rebuild the GRUB menu (in openSUSE, it regenerates /boot/grub2/grub.cfg).
"Kernel Parameters" section
[Another way is to create /etc/crypttab.initramfs ? Containing something like 'btrfsCRYPT PARTLABEL="LUKS" none timeout=0,discard' ? Then "mkinitcpio -P".]
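A sketch of the finished GRUB setting (Arch-style "cryptdevice" parameter; the UUID is a placeholder, and the grub.cfg path varies by distro):

# In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet cryptdevice=UUID=xxxx:cryptroot:allow-discards rd.luks.options=discard"
# Rebuild the menu:
sudo grub-mkconfig -o /boot/grub/grub.cfg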


"sudo dmsetup table /dev/mapper/luks-XXXXXXXXXXXX --showkeys" should show "allow_discards".

"mount | grep luks" does NOT need to show "discard"; that's for "live discard", not weekly TRIM.

"Live discard" is becoming (kernel 6.2) the default for SSDs (article). "async=nodiscard" in the mount options would disable "live discard", and then you should be using TRIM.

If using LUKS on SSD (probably only affects non-root volumes):
Add "discard" to end of line (4th field) in /etc/crypttab
Probably that field should contain "luks,discard".
See "For LUKS devices unlocked via /etc/crypttab" in Arch Wiki.

TRIM and Swap

+/-
Most distros run a TRIM via a systemd service about once/week.

To see what filesystems will be trimmed by fstrim service, "sudo fstrim -an".

From Fedora docs:
"The Linux swap code will issue TRIM commands to TRIM-enabled devices ..."
But I don't see the kernel code doing so, such as in swapfile.c

And from ArchWiki:
"If using an SSD with TRIM support, consider using discard in the swap line in fstab. If activating swap manually with swapon, using the -d/--discard parameter achieves the same."
Note: swap is not mounted as a filesystem, so mount's "discard" flag does not apply, and fstrim only operates on mounted filesystems.
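A sketch of enabling swap discard (UUID and device are placeholders):

# Swap line in /etc/fstab:
UUID=xxxx  none  swap  defaults,discard  0  0
# Or when activating manually:
sudo swapon --discard /dev/sdXN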




ArchWiki's "Solid state drive"
speed guide's "SSD Linux Tweaks"
stevea's "Setting up and using SSD drives in Fedora Linux"
Chris Siebenmann's "Understanding plain Linux NVMe device names"

Epic Game Tech's "Is full SSD slower? Is it a MYTH or a FACT?" (but not testing with many small writes ?)
Mauro Huc's "Why solid-state drive (SSD) performance slows down as it becomes full"




sudo apt install nvme-cli
man nvme

sudo nvme list
sudo nvme fw-log /dev/nvme0      # can store multiple versions
sudo nvme fw-log -o json /dev/nvme0
sudo nvme smart-log /dev/nvme0
sudo nvme error-log /dev/nvme0 | less

# See if an "NVMe fabric" is running ?
sudo nvme discover -t rdma -a IPADDRESS

# Get block size, but it may lie to you:
sudo blockdev --getbsz /dev/sdaN
sudo blockdev --getbsz /dev/nvme0n1
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size

# See what sector sizes are supported:
sudo nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'
# You could reformat to the desired sector size:
#sudo nvme format /dev/nvme0n1 -l DESIREDLBAFID
# This destroys access to the existing filesystems on the device !
# Maybe run it in distro A, just before installing distro B.
# Or run it while booted from USB stick.
# Maybe do performance tests before and after.

sudo nvme id-ctrl /dev/nvme0 -H | less
sudo nvme show-regs /dev/nvme0 -H | less

# Trim command is known as TRIM in the ATA command set,
# and as UNMAP in the SCSI command set.

# See if SATA drive supports TRIM in ATA command set:
sudo hdparm -I /dev/sda | grep TRIM
# See if NVME drive supports Data Set Management (DSM), which should include Deallocate:
sudo nvme id-ctrl /dev/nvme0 -H | grep 'Data Set Management'
# USB external HDD: hdparm will work if USB interface identifies
# drive as UAS (USB-attached SCSI), but not if interface
# identifies drive as only "Mass storage device".
lsusb      # to find device's numbers
sudo lsusb -D /dev/bus/usb/BUSNUMBER/DEVICENUMBER | grep bInterface


# See if drive supports UNMAP in SCSI command set:
man sg_vpd		# in package sg3-utils (or sg3_utils)
# See if the drive supports the UNMAP command:
sudo sg_vpd -a /dev/nvme0n1p2 | grep 'Unmap'
# If that returns nothing, UNMAP/TRIM is not supported ?

# Non-zero DISC-GRAN and DISC-MAX == TRIM support:
lsblk --discard

# Read paragraph in /etc/lvm/lvm.conf about
# parameter "issue_discards".  But
# https://wiki.archlinux.org/title/Solid_state_drive
# says no change needed.
# And want default value thin_pool_discards = "passdown".

# Trim supported at LUKS level ?
sudo dmsetup table
# See "allow_discards".

# Is OS doing TRIM via "discard" option in /etc/fstab,
# cron job, or systemctl fstrim.timer ?
mount | grep discard
sudo systemctl status fstrim.service fstrim.timer --full --lines 1000
sudo systemctl status btrfs-trim.service btrfs-trim.timer --full --lines 1000

# If using LUKS on SSD (maybe applies only to non-root device;
# root device will be handled in GRUB kernel command line):
# Add "discard" to end of line (4th field) in /etc/crypttab
# Probably that field should contain "luks,discard".
# See "For LUKS devices unlocked via /etc/crypttab" in
# https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discard/TRIM_support_for_solid_state_drives_(SSD)
# Also
# http://blog.neutrino.es/2013/howto-properly-activate-trim-for-your-ssd-on-linux-fstrim-lvm-and-dmcrypt/

# Manually do a TRIM ?
sudo fstrim -a -v
# should take 30+ seconds and not return an error

# Periodic scrub ?
ls /etc/systemd/system/btrfs*
ls /usr/lib/systemd/system/btrfs*
# Copy the unit files into /etc/systemd/system, then edit the
# copied timer file to modify frequency (don't edit the
# /usr/lib originals; package updates overwrite them):
sudo cp /usr/lib/systemd/system/btrfs-scrub@.service /etc/systemd/system/
sudo cp /usr/lib/systemd/system/timers.target.wants/btrfs-scrub@.timer /etc/systemd/system/
EDIT /etc/systemd/system/btrfs-scrub@.timer
sudo systemctl daemon-reload
systemd-analyze verify btrfs-scrub@.timer
# Have to put the mount-point path in the name when enabling.
# "-" is encoding for "/".
# Also could use "btrfs-scrub@$(systemd-escape -p MOUNTPOINT).timer"
sudo systemctl enable btrfs-scrub@-.timer
systemctl list-timers --all
# Run it right now to test it: start the service, not the timer
# (starting the timer only schedules the next firing):
sudo systemctl start btrfs-scrub@-.service
sudo systemctl status btrfs-scrub@-.service --full --lines 1000
# If you start only the timer, nothing will indicate that the
# service ran or a scrub happened; you'd have to wait until
# the timer fires, a week from now.
systemctl list-timers --all
journalctl -u btrfs-scrub@-.service

# Is the firmware updatable ?
sudo fwupdmgr get-devices
# Look for updates:
sudo fwupdmgr refresh

# Test sequential read performance:
sudo hdparm -t /dev/nvme0n1p2

Also GUI app GSmartControl, but it didn't show SMART data on my SSD.



Reducing writes to disk

+/-
Reducing writes really is not necessary for longevity; a modern SSD is rated for 100 TBW (terabytes written) or a multiple of that, and 100 TBW is about 10 years of writing 30 GB per day.


On filesystem mounts, use "noatime" option, and maybe "lazytime" option.
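A sketch of an /etc/fstab line with those options (UUID is a placeholder):

UUID=xxxx  /  ext4  defaults,noatime,lazytime  0  1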


You could have no swap, or use zram, instead of a swap partition or swap-file. You could change "swappiness" to 10 or 0. See "Swap" section of my Using Linux page.


[Most of following assumes you have plenty of RAM.]

Firefox and Thunderbird have settings to put their cache in RAM instead of on disk; doing so reduces wear on the SSD. I can't find similar settings in Chromium-based browsers.
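In Firefox's about:config, the relevant preferences (verify the names in your version) are:

browser.cache.disk.enable = false
browser.cache.memory.enable = true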

/tmp and /var/crash and ~/.cache (also /var/log, but I don't recommend that) can be mounted in RAM (tmpfs) instead of disk. Doing so would reduce use of SSD. I have seen cautions that you should not put /var/tmp on tmpfs. [Note: if you have swap, there are times when data from tmpfs can be swapped out.] [Note: "echo $TMPDIR" may show another place used for temp files.]
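A sketch of tmpfs lines in /etc/fstab (the sizes, uid/gid, and username are guesses):

tmpfs  /tmp               tmpfs  defaults,noatime,mode=1777,size=2G  0  0
tmpfs  /home/USER/.cache  tmpfs  defaults,noatime,uid=1000,gid=1000,mode=0700,size=1G  0  0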

Do Timeshift backups to external disk, not to system disk (safer, too).

Don't have a swap partition or swap-file on SSD.

System journal can be kept in RAM, not disk, but then you lose past history. Edit /etc/systemd/journald.conf to have "Storage=volatile" in [Journal] section.
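A minimal sketch of that change:

# In /etc/systemd/journald.conf:
[Journal]
Storage=volatile
# Then:
sudo systemctl restart systemd-journald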



From thread on StackOverflow:
+/-
There is no standard way for a SSD to report its page size or erase block size. Few if any manufacturers report them in the datasheets. (Because they may change during the lifetime of a SKU, for example because of changing suppliers.)

...

Best leave about 5% to 10% of unallocated space outside any partitions. Having overprovisioned space is of great help to SSDs in maintaining their performance in time.

[This] will leave a number of blocks on the "never touched by the OS" list. This will simplify the life of the microcontroller which needs to juggle block allocations and erasures; remember that the poor little microcontroller has no notion of file systems, and from its point of view all the blocks touched by the operating system are in use, unless of course the OS is kind enough to trim unused blocks from time to time. This is especially important for external SSDs, which quite often may not even expose a trim interface.

... so the microcontroller has more space to move around stuff coming from within the partitions, without touching the interior of the partitions in the intermediate stages.



Some SSD drives have two types of storage in them, a small faster cache (MLC), and then the slower main storage (TLC). Samsung EVO has this; Samsung Pro has only the faster storage (all MLC). Even faster is SLC.



Seen many places: using 4 KB block size everywhere is best, and align partitions to 1 MB or 2 MB boundaries.



"Some SSD drives have factory 'over-provisioning', which reserves a few % space of total capacity."

I asked about partitioning and over-provisioning, on reddit:
+/-
> I plan to have one partition for / and /home.
> I think I will always have 20% free and be
> doing weekly TRIMs. Should I just use all of the space
> visible to me as the single big partition ?

... just go ahead and use the entirety of space ... Modern SSDs have more space than advertised specifically for balancing, they do it on their own.

...

Always use LVM. Create the logical volumes as you need/want. But given that it is very easy to extend logical volume, create them with the size you need. Say, have a 30 GB logical volume for root filesystem, 1 GB volume for swap and 150 GB for home. You do not have to fill up all your SSD. If in a few months time you see that you need more space in /home, extend it.
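A sketch of that later extension (volume group and LV names are hypothetical; -r also grows the filesystem inside the LV):

sudo lvextend -L +20G -r /dev/vg0/home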



Might as well do fsck at every boot.
For / as ext4 filesystem on LUKS:
"sudo tune2fs -c 1 /dev/mapper/vgkubuntu-root" (Kubuntu)
"sudo tune2fs -c 1 /dev/mapper/data-root" (Pop!_OS)
"sudo tune2fs -l /dev/mapper/NAME | grep 'Last checked'" (to see when last checked)




For drives that support encryption on the SSD (in SSD firmware or hardware), a special utility app from the manufacturer will be needed to enable/disable the encryption.



Total erase (before selling or disposal):
+/-
There is a sanitize command (erases all blocks), and a secure-erase / format command (changes encryption key; two forms of it; you want "User Data Erase"). Use one of them. But you can't be sure what the command actually is doing. There have been cases where manufacturers lied about what their drives were doing. I'm told all SSDs are encrypted these days, so secure erase mainly wipes the key and sets a new one, making all data in chips useless, but still there.

"Sanitize is a newer SCSI command, intended as an extension to the existing ATA Secure Erase command. If the drive supports Sanitize, use that."

What is the relationship among the commands hdparm, sdparm, smartctl, nvme, and blkdiscard, especially when used with an SSD ? There seems to be some overlap. Some apply to all devices that support command set X, while others apply to all devices with interface type Y ? Just try to use each one, see if it says "unsupported device type" ?

tinyapps.org's "NVMe Sanitize"
tinyapps.org's "NVMe Secure Erase"
tinyapps.org's "ATA Sanitize and hdparm"
Mendhak's "Securely wipe an SSD with its built in commands"
LSU Grok's "Erasing SATA Drives by using the Linux hdparm Utility"
"hdparm --sanitize-status /dev/sdx"
"hdparm -I /dev/sdx"
"sudo nvme id-ctrl /dev/nvmex -H"
"man nvme-format"
"man blkdiscard" and maybe then trim.


You could try overwriting the whole device, but that's not guaranteed to get all the data in the chips. The problem is that the device is doing over-provisioning (reporting less capacity than it actually has) but it's using all of the actual space more or less evenly (wear-leveling). So even if you make the OS write the "full capacity" of the drive, it's missing some percentage of the blocks. And those missed blocks are spread all over the device.

Still, you could try. Boot from an OS on a different device (maybe USB stick), maybe change SSD partition table to have one full-disk partition, turn off any compression if applicable, and write random data to the raw device of the SSD until it is full. Do that 2 or 3 times. You're probably overwriting 99% of the data. But you're shortening the life of the device, too.
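A sketch of the overwrite (DESTROYS DATA; device name is an example):

sudo dd if=/dev/urandom of=/dev/sdX bs=1M status=progress conv=fsync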

Chris Siebenmann's "When securely erasing disks, who are you trying to stop?"
Chris Siebenmann's "How secure is merely discarding (TRIMing) all of a SSD's blocks?"



Some external SSDs use more power (up to at least 10 watts) than many USB 3 ports can provide (officially only up to 4.5 watts); this can cause them to behave erratically.



Dave Farquhar's "Fix your dead SSD with the power cycle method"





Miscellaneous



Periodically check the health of your drives

+/-
Run SMART utility (Disks application, or smartctl).

If HDD:
"sudo apt install smartmontools" and then "sudo smartctl -a /dev/sda | less"
"sudo apt install libatasmart-bin" and then "sudo skdump /dev/sda | less"


If drive is an NVMe SSD:
"sudo apt install nvme-cli" and then "sudo nvme smart-log /dev/nvme0"
"sudo apt install smartmontools" and then "sudo smartctl -a /dev/nvme0n1p2 | less"

How to interpret important values:
"Percentage Used": percent used of estimated lifetime of the device.
(100 / "Percentage Used") * "Data Units Written (in TB)" == rated lifetime in TBW.
When "Available Spare" falls to "Available Spare Threshold", drive goes read-only ?
"Unsafe Shutdowns": lost power to device before data in cache was written to media.
"Media and Data Integrity Errors": number of unrecovered data integrity errors.
"Warning Composite Temperature Time": minutes run above warning temp.
"Critical Composite Temperature Time": minutes run above critical temp.
https://www.intel.com/content/www/us/en/support/articles/000056596/memory-and-storage/client-ssds.html
http://www.cropel.com/library/smart-attribute-list.aspx (HDD)
https://en.wikipedia.org/wiki/S.M.A.R.T.#Known_ATA_S.M.A.R.T._attributes

To get more info about SSD SMART errors:
"sudo nvme error-log /dev/nvme0n1p2"

How to interpret errors, especially cmdid values ? No "opcode" field ?
https://www.smartmontools.org/ticket/1300
https://github.com/linux-nvme/libnvme/tree/master/src/nvme
nvme_io_opcodes
https://metebalci.com/blog/a-quick-tour-of-nvm-express-nvme
https://cooboos.github.io/post/nvme-amdin-cmd/
https://www.smartmontools.org/static/doxygen/nvmecmds_8cpp_source.html start at line 294

If the drive is external (USB) and does not show up in SMART utility, maybe see SAT with UAS under Linux (especially the "Temporary Settings" section).

Thomas-Krenn's "SMART tests with smartctl"
Wikipedia's "S.M.A.R.T."
Oleg Afonin's "Predicting SSD Failures: Specific S.M.A.R.T. Values"
ZAR's "S.M.A.R.T. basics"
Chris Siebenmann's "SMART Threshold numbers turn out to not be useful for us in practice"
Chris Siebenmann's "Disk drives can have weird SMART values for their power on hours"
Chris Siebenmann's "NVMe disk drives and SMART attributes (and data)"
Chris Siebenmann's "The names of disk drive SMART attributes are kind of made up (sadly)"

I get an email every day from the smartd daemon about "number of Error Log entries increased" on my SSD. Support for my laptop said "Those errors are not related to the SSD drive status and SMART status is OK, it's just telling you that there are two additional entries to the log.", and pointed me to an openmediavault thread.

How to enable smartd:

ls /etc/systemd/system/smartd.*
less /etc/smartd.conf

sudo pamac install smartmontools
ls /usr/lib/systemd/system/smartd.*
sudo cp /usr/lib/systemd/system/smartd.service /etc/systemd/system/
less /etc/smartd.conf
sudo systemctl daemon-reload
sudo systemctl enable smartd
sudo systemctl start smartd

sudo systemctl status smartd --full --lines 1000
journalctl -u smartd
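A guess at a minimal /etc/smartd.conf line (monitor all attributes, mail warnings; the address is a placeholder, and a working mailer is assumed):

/dev/nvme0 -a -m you@example.com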

Disk test in BIOS or GRUB/EFI ?
See if your drive's vendor has a vendor-specific utility.



Drive Interfaces

+/-

Form-factors

+/-
  • 2.5 Inch:

    Standard form-factor for hard disks in laptops.

  • M.2:

    Smaller form-factor; a replacement for the mSATA (Mini-SATA) standard. Too small to hold an HDD. Other peripherals such as Wi-Fi cards may also use M.2 connectors.

    From Josh Covington's "NVMe vs. M.2 vs. SATA - What's the Difference?":
    M.2 is just the form factor. M.2 drives can come in SATA versions and NVMe versions, which describes the bus they use to electrically communicate with the other PC components. SATA M.2 SSD drives and 2.5" SATA SSDs actually operate at virtually identical spec. NVMe M.2's on the other hand, definitely do not ...

    From Chris Siebenmann's "Getting NVMe and related terminology straight":
    The dominant consumer form factor and physical connector for NVMe SSDs is M.2, specifically what is called 'M.2 2280' (the 2280 tells you the physical size). If you say 'NVMe SSD' with no qualification, many people will assume you are talking about an M.2 2280 NVMe SSD, or at least an M.2 22xx NVMe SSD.

    From ExplainingComputers' "M.2 SSD Adapters & Enclosures" (video):
    M.2 is a physical connector.
    The interface through it may be USB, SATA, PCIe, more.
    M.2 SSDs have either a SATA or PCIe interface.
    PCIe M.2 SSDs use a data transfer protocol called NVMe.



Electrical interfaces and protocols

+/-
  • AHCI SATA:

    Communicates through SATA controller before getting to CPU.

    Limited to 1 command queue and 32 commands per queue.

    From Chris Siebenmann's "Getting NVMe and related terminology straight":
    Traditional SATA SSDs are, well, SATA SSDs, in the usual 2.5" form factor and with the usual SATA edge connectors (which are the same for 2.5" and 3.5" drives). If you simply say 'SSD' today, most people will probably assume that you mean a SATA SSD, not a NVMe SSD. Certainly I will. If I want to be precise I should use 'SATA SSD', though. SATA comes in various speeds but today everyone will assume 6 Gbits/s SATA (SATA 3.x).


  • NVMe (Non-Volatile Memory Express):

    Can communicate directly to CPU.

    Up to 64K command queues and up to 64K commands per queue.

    From Chris Siebenmann's "Getting NVMe and related terminology straight":
    NVMe, also known as NVM Express, is the general standard for accessing non-volatile storage over PCIe (aka PCI Express). NVMe doesn't specify any particular drive form factor or way of physically connecting drives, but it does mean PCIe ...

    From Josh Covington's "NVMe vs. M.2 vs. SATA - What's the Difference?":
    NVMe ... developed to allow modern SSDs to operate at the read/write speeds their flash memory is capable of. Essentially, it allows flash memory to operate as an SSD directly through the PCIe interface rather than going through SATA and being limited by the slower SATA speeds.

    ...

    NVMe drives provide write speeds as high as 3500 MB/s [PCI Express Gen 3 bandwidth]. That's 7x over SATA 3 SSDs and as much as 35x over spinning HDDs!

    Most consumer NVMe drives have an M.2 form-factor, but NVMe drives also come as PCIe add-in cards, U.2 drives, etc.

    Faster NVMe chips and drives coming in 2021 or later will support PCI Express Gen 4 bandwidth, maybe 5000 to 7000 MB/s.

    Figure out how many PCIe lanes the SSD (controller) is using:
    
    sudo lspci -vv | grep 'Non-Volatile'                   # get device number
    sudo lspci -vv -s NN:NN.N                              # see all info for device
    sudo lspci -vv -s NN:NN.N | grep -E 'LnkCap:|LnkSta:'  # see lane info
    # you could search for datasheet for NVMe controller chip to see total lanes
    

    Wikipedia's "NVM Express"



Christopher Harper's "NVMe vs M.2 vs SATA: Which is the best for your SSD?"
Anthony Spence's "NVMe vs SATA vs M.2 : What's the difference when it comes to SSDs?"



Note: USB flash drives really are not intended for heavy-duty use, and can heat up and/or fail.

Note: SD cards can be very unreliable, and can fail suddenly and catastrophically.



Note: using inappropriate options in /etc/fstab mount line (e.g. using Btrfs compress option on an ext4 filesystem) can make the mount fail and maybe fall back to mounting read-only.



Xiao Guoan's "How to Check Real USB Capacity in Linux Terminal"



Deliberately creating a damaged device, and more:
Michael Ablassmeier's "dd, bs= and why you should use conv=fsync"
dm-flakey



How the various tmpfs and other special filesystem mounts get created (the tmpfs mounts themselves come from systemd, e.g. the tmp.mount unit; systemd-tmpfiles creates and cleans files and directories within them):
"sudo systemctl status systemd-tmpfiles-setup.service --full --lines 1000"
"man systemd-tmpfiles"



Find filenames that contain non-printing characters:
"LC_ALL=C find . -name '*[![:print:]]*' | cat >Non-ASCII.txt"
(The "*" wildcards matter; without them the pattern matches only names that are a single non-printable character.) Test with "touch abc$(printf '\017')def.txt".