Basics
Abstraction Types
- Virtual Machine:
A virtual machine has a complete copy of an operating system in it; the OS thinks it is running on bare metal, but actually it is not. The VM can provide emulated devices and networks, do access control, etc. Examples: VirtualBox, VMware.
The operating system chosen could be a "full" Linux distro such as Ubuntu Server, a very small distro such as Alpine or Chiselled Ubuntu, a cloud-focused Linux such as Bottlerocket, or a severely-stripped-down kernel-plus (unikernel or micro-VM) such as Nanos or OSv.
- Container:
Uses operating system mechanisms (such as chroot, namespaces, and control groups) to keep things in one container separate from those in another container. All containers run on the same OS. Examples: Docker, Snap, Flatpak.
- Emulator:
An emulator has a veneer of a different operating system in it, but really is just mapping system calls of the "inside" OS to system calls of the real OS. Example: Wine (although it's more of a dynamic loader).
- Bundling System:
A way of bundling one or more applications/services/processes together with the dependencies (libraries) they need. Examples: Snap, Flatpak, AppImage, Docker.
[I'm told that "container" is the wrong term for this, since they don't use the Linux container mechanism. I don't know what the right term is. I guess I'll call them "bundles". Popey called Snaps "confined software packages". Jesse Smith refers to Snap/Flatpak/Appimage/Docker as "portable packages".]
Each system has an "inside" OS API (inside each bundle), and then runs on top of a "base OS" (outside the bundles). In Docker, there is a fairly strong connection between the two; having a different OS inside and outside requires some "hacks" ? When you download a Docker image from the hub, you have to choose OS type and CPU architecture.
A bundle shares a single base OS with other bundles, mediated by the container framework/engine. Usually the base OS will be a LTS version of a bare-metal OS (often a stripped-down version of Alpine or Debian ?), but it could be a hypervisor or a VM (especially in the case of running in a cloud service, sharing a server with many other VMs or VPSs).
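To see the OS-type / CPU-architecture choice concretely, a small sketch (docker CLI; "alpine" is just an example image):
docker pull --platform linux/arm64 alpine   # explicitly request an arm64 build of the image
docker image inspect alpine --format '{{.Os}}/{{.Architecture}}'   # shows e.g. linux/arm64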
Mike Calizo's "6 container concepts you need to understand"
Ivan Velichko's "What Is a Standard Container"
Ivan Velichko's "Learning Containers From The Bottom Up"
Docker's "What is a Container?"
SoByte's "Principle of container technology (1)"
SoByte's "Dissecting Linux container implementation principles using Go"
Miona Aleksic's "Containerization vs. Virtualization : understand the differences"
Tytus Kurek's "What is virtualization? A beginner's guide"
Wikipedia's "List of Linux containers"
Eric Kahuha's "LXC vs Docker: Which Container Platform Is Right for You?"
EDUCBA's "Docker vs VMs"
Weaveworks' "A Practical Guide to Choosing between Docker Containers and VMs"
Mike Coleman's "So, when do you use a Container or VM?"
Bryan Cantrill's "Unikernels are unfit for production"
Debian Administrator's Handbook's "Virtualization"
Mike Royal's "Virtualization/Emulation Guide"
From someone on reddit:
Musl and glibc are two implementations of libc, as in the standard C library.
This is basically the standardized interface between the kernel and userland.
The kernel itself actually has no stable interface; the stability is guaranteed
by libc, which is a bunch of C functions wrapped around binary system calls
that make it easy to write programs. So now you can call fork() instead of having
to manually copy numbers into registers to tell the kernel what you
want to do; that is what the libc does for you, amongst other things.
It also provides basic routine libraries like handling of strings and what-not.
...
The thing is that when applications are compiled, they are compiled against a specific libc for the most part and if you want to use them with another libc you have to recompile them. ...
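A quick way to see this linkage on a live system (a hedged sketch; exact paths and output vary by distro):
ldd /bin/ls          # lists libc.so.6 among the shared libraries the binary was compiled against
file /bin/ls         # "dynamically linked", names the ld-linux interpreter
strace -e trace=write echo hi 2>&1 | tail -5   # the raw write() syscall underneath the libc call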
Cloud services
- Infrastructure as a Service (IaaS): For sysadmins or users.
Virtual devices or storage, such as:
- Block storage: appears to your local system as a block device,
which you can format as a filesystem (see the sketch just after this list).
- Object storage: image-hosting or file-hosting or JSON-hosting,
accessed through URLs ?
- Backup: file-storage, with versioning, and a client that
can do compression and incremental backup etc.
- Platform as a Service (PaaS): For developers. Virtual server, such as:
- Virtual Private Server (VPS): virtual server/machine onto which you
install a virtual machine image, which contains OS and tech stack (e.g.
LAMP = Linux+Apache+MySQL+PHP/Perl/Python,
MEAN = MongoDB/Express.js/AngularJS/Node.js, Ruby stack, Django stack, more) and application(s).
The applications could be packaged as containers.
- Specialized server: web site server, database server, VPN server, file server,
identity management (e.g. Active Directory), email server (e.g. MS Exchange),
virtual desktop, CRM, etc.
- Micro-service or serverless: server provides facility to run small loosely-coupled containers.
Developer doesn't have to manage VMs, doesn't care where services run. Services
usually communicate via message-passing and a RESTful API or GraphQL.
Wikipedia's "Microservices"
Wikipedia's "Serverless computing"
Altaro's "Containers vs. Serverless Architecture"
- Software as a Service (SaaS): For end users. Complete locked-down service, such as
GMail, Google Docs, web site shared hosting, etc.
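The block-storage case from above, as a minimal sketch (assumes the provider attached a fresh volume as /dev/sdb; the device name varies by provider):
sudo mkfs.ext4 /dev/sdb        # format the raw block device
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data  # now it's an ordinary filesystem
df -h /mnt/data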
The key concept of bundles / VMs, especially for cloud
A container/bundle or VM doesn't contain any persistent data. Its definition (the "image") contains code and a configuration for executing that code; any given execution is transient.
There will be a transient state as it executes, but then when execution finishes (or crashes, or the session is abandoned by the user) the whole state will be discarded and the execution resources (CPU, RAM, IP address, etc) re-assigned.
The persistent data it operates upon is stored elsewhere, in a database or filesystem, usually across a network. article
Really, this is the same as the normal distinction between application and process. An application is the code, ready to execute, but contains no persistent data. A process is a transient execution, and operates on persistent data in a database or filesystem.
One issue: Who created the VM or container/bundle you're going to use ? Many of them are created not by the original app dev, but by some unknown helpful third party. How do you know you can trust that third party ?
A common saying: is container X cattle or a pet ? It's "cattle" if you have lots of them, you spin each up, use it briefly, then kill it. It's a "pet" if you run it for a long time and it contains/stores data you value. So a micro-service would be "cattle" and your database server would be a "pet".
Generally a VM has persistent state (a filesystem) inside it ? And a container doesn't. When you shut down a VM and later boot it again, the state has been preserved and you're not booting from the originally-built pristine VM image.
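A concrete illustration of keeping the persistent data outside the container (a sketch; the postgres image and names are just examples):
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres
docker rm -f db     # kill the "cattle" container
docker run -d --name db2 -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres
# the data survives in the named volume, outside any container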
Diagrams
Virtual machines
VM #1: App #1, App #2 on Guest OS | VM #2: App #3, App #4 on Guest OS
Hypervisor: VMWare [^ API looks like bare metal: CPU, RAM, devices, etc ^] | App #5 | App #6
Native OS: Linux [^ API is libc ^]
Bare metal [^ CPU, RAM, devices, etc ^]
Emulators
App #1 | App #2
Emulator: Wine [^ API is win32 ^] | App #3 | App #4
Native OS: Linux [^ API is libc ^]
Bare metal [^ CPU, RAM, devices, etc ^]
Docker
Docker container #1 | Docker container #2
Docker Engine [^ API is libc ^] | Database #3 | App #4
Native OS: Linux [^ API is libc ^]
Bare metal [^ CPU, RAM, devices, etc ^]
[Normal situation on macOS]
Docker container #1 | Docker container #2
Docker Engine (includes a Linux VM) [^ API is libc ^] | Database #3 | App #4
Native OS: macOS [^ API is libc ? ^]
Bare metal [^ CPU, RAM, devices, etc ^]
[Normal situation on Windows]
Docker container #1 | Docker container #2
Docker Engine (includes a Linux VM) [^ API is libc ^] | Database #3 | App #4
Native OS: Windows [^ API is win32 ^]
Bare metal [^ CPU, RAM, devices, etc ^]
[Unusual case: Docker Enterprise on Windows]
Docker container #1 | Docker container #2
Docker Engine [^ API is win32 ^] | Database #3 | App #4
Native OS: Windows with Hyper-V (Docker Enterprise) [^ API is win32 ^]
Bare metal [^ CPU, RAM, devices, etc ^]
Flatpaks
Flatpak app #1 (on freedesktop) (on bubblewrap) | Flatpak app #2 (on GNOME) (on bubblewrap) | Flatpak app #3 (on KDE) (on bubblewrap) | App #4
Native OS: Linux [^ API is libc ^]
Bare metal [^ CPU, RAM, devices, etc ^]
Snaps
Snapd | Snap app #1 (on AppArmor) | Snap app #2 (on AppArmor) | App #3 | App #4
Native OS: Linux [^ API is libc ^]
Bare metal [^ CPU, RAM, devices, etc ^]
AppImages
AppImage app #1 | AppImage app #2 | App #3 | App #4
Native OS: Linux [^ API is libc ^]
Bare metal [^ CPU, RAM, devices, etc ^]
The key concept of container/bundle / VM layers
There is a foundation layer with a fixed API.
For a container/bundle, it is an API provided by the container/bundle daemon / framework, which usually is the libc API for some LTS release of a Linux distro, such as Ubuntu 18.04 LTS.
For a VM, it is the API/devices/memory-map of a standard Intel PC.
Erika Caoili's "Linux Basics: Static Libraries vs. Dynamic Libraries"
package "apt show libc6"; library "locate libc.so" at /usr/lib/x86_64-linux-gnu/libc.so and /snap/gimp/273/usr/lib/x86_64-linux-gnu/libc.so etc. There also is /snap/gimp/273/usr/lib/x86_64-linux-gnu/libsnappy.so
Hypervisor
Also called a "Type 1 Hypervisor", "Bare-Metal Hypervisor", or "paravirtualization".
A thin abstraction layer between the hardware and the VMs. This acts as a referee that controls access to hardware from the VMs.
ResellerClub's "Type 1 and Type 2 Hypervisors: What Makes Them Different"
Kelsey Taylor's "10 Best Open Source Hypervisor"
IBM's "Hypervisors"
From Korbin Brown article:
KVM is the type 1 hypervisor built into every Linux kernel since version 2.6.20. QEMU and VirtualBox are type 2 hypervisors that can utilize KVM (or other type 1 hypervisors, in the case of VirtualBox) to allow virtual machines direct access to your system's hardware.
QEMU is short for Quick EMUlator. As its name implies, it can be used as an emulator, but also as a hypervisor.
From Teknikal's_Domain article:
Type 1 hypervisors are, usually, entire operating systems, but the defining fact is that they run directly on top of the physical hardware, which means they have direct access to hardware devices, and can result in better performance just by not having to deal with anything else except their own tasks. As a good example, VMWare ESXi is a Type 1 hypervisor.
Type 2 are closer to a conventional software package running as a standard process on the host hardware's OS. While they're a bit easier to deal with and usually easier to play with, competing with other programs and abstracted hardware access can create a performance impact. QEMU and VirtualBox are two good examples here.
Note that the lines here are kinda blurred, for example, Hyper-V is a Windows service that communicates at a bit lower of a level, meaning it has characteristics of both type 1 and type 2, and KVM for Linux uses the running kernel to provide virtualization, effectively acting like type 1, despite otherwise bearing all the classifications of type 2.
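To check whether your CPU and kernel are ready for this kind of hardware-assisted virtualization (a sketch; tool availability varies by distro):
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is present
lsmod | grep kvm                     # is kvm_intel or kvm_amd loaded ?
ls -l /dev/kvm                       # the device user-space VMMs talk to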
- Xen (originated at the University of Cambridge; its commercial backer XenSource was later bought by Citrix):
From Debian Administrator's Handbook's "Virtualization":
Xen is a "paravirtualization" solution. It introduces a thin abstraction layer, called a "hypervisor", between the hardware and the upper systems; this acts as a referee that controls access to hardware from the virtual machines. However, it only handles a few of the instructions, the rest is directly executed by the hardware on behalf of the systems. The main advantage is that performances are not degraded, and systems run close to native speed; the drawback is that the kernels of the operating systems one wishes to use on a Xen hypervisor need to be adapted to run on Xen.
Let's spend some time on terms. The hypervisor is the lowest layer, which runs directly on the hardware, even below the kernel. This hypervisor can split the rest of the software across several domains, which can be seen as so many virtual machines. One of these domains (the first one that gets started) is known as dom0, and has a special role, since only this domain can control the hypervisor and the execution of other domains. These other domains are known as domU. In other words, and from a user point of view, the dom0 matches the "host" of other virtualization systems, while a domU can be seen as a "guest".
...
Xen requires modifications to all the operating systems one wants to run on it; not all kernels have the same level of maturity in this regard. Many are fully-functional, both as dom0 and domU: Linux 3.0 and later, NetBSD 4.0 and later, and OpenSolaris. Others only work as a domU. ...
Xen Project
Xen Project Software Overview
Looks like it can be installed through Mint's Software Manager, but see warning comment under "xen-system-amd64": it interferes with normal system booting ?
XCP-ng (open-source hypervisor)
- VMware ESXi:
- Microsoft Hyper-V:
Surender Kumar's "Top 6 Hyper-V management tools"
Virtual Machine
Also called a "Type 2 Hypervisor", or "Hosted Hypervisor". Runs on top of a host OS.
A virtual machine has a complete copy of an operating system in it; a container shares a single underlying OS with other containers, mediated by the container framework/engine. VMs are a much more mature technology and have hardware support in the CPU (Intel VT-x / AMD-V), so in general they are more secure. An emulator is a layer that translates OS syscalls to syscalls of a different OS.
I think there are VM providers and then VM managers ? Each provider also includes a manager, but there are managers that can control many kinds of providers ? Not sure.
Bobby Borisov's "Virtualization on PC, Explained for Beginners with Practical Use Cases"
ResellerClub's "Type 1 and Type 2 Hypervisors: What Makes Them Different"
Kelsey Taylor's "10 Best Open Source Hypervisor"
IBM's "Hypervisors"
Providers
- VirtualBox:
Oracle VM VirtualBox
Wikipedia's "VirtualBox"
Available through Mint's Software Manager, but it's an older version.
Oracle VM VirtualBox - User Manual
OSBoxes' "VirtualBox Images"
VirtualBox has these options for virtual network connections:
- Not attached: network card but no connection.
- NAT: VM traffic to LAN uses host's IP address.
- NAT Network: private network on host machine, VM traffic to LAN uses host's IP address.
- Bridged networking: VM uses its own network stack, connects to LAN, gets its own IP address, other machines on LAN can see it.
- Internal networking: private network among VMs, no connection to outside.
- Host-only networking: private network among VMs and host machine, no connection to outside.
- Generic networking: strange modes.
6.2. Introduction to Networking Modes
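Setting these from the CLI, a sketch ("MyVM" and the adapter name are placeholders; the VM must be powered off):
VBoxManage modifyvm "MyVM" --nic1 bridged --bridgeadapter1 enp3s0
VBoxManage modifyvm "MyVM" --nic1 nat
VBoxManage showvminfo "MyVM" | grep -i nic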
VirtualBox 'clone' vs 'snapshot' vs 'copy':
From various places:
Clone:
Duplicates your VM. It can be a linked clone referenced to your original VM via a snapshot, or it can be a full clone.
Creates a "really close" to the original copy. By really close I mean identical on the surface, but under the hood the UUIDs of the HDs and the VM change, the MAC addresses of the network cards change. If the OS you're cloning depends on one of these features (Windows for activation, GRUB for booting, etc.) then you might have problems down the road. The reason they change? So that you can run the original and the clone (simultaneously) in VirtualBox. And if there's one thing that VirtualBox hates it is duplicate UUIDs.
Snapshot:
A kind of restoration point. When you create a snapshot, VirtualBox will start saving new writes to the virtual disk to a differential image. When reading it will take into account the differential image as well as the drive. When you restore a snapshot you're basically telling VirtualBox to ignore the snapshots that don't lead to the specified restoration point.
They are really a point in time.
Export:
A packaged archive that contains your VM's hard drive image and configuration.
Copy the VM's folder tree, in the file manager:
This is a true backup of the guest and the only one qualifying as such. Copy the whole folder and not just the VDIs. Now, since the UUIDs (and everything else) are the same as the original, if you try to register (Add) and run the copy in VirtualBox, you'll get a big, fat failure. You have to unregister (and maybe delete?) the original and then register and run the copy.
Restoring a copy-style backup is as simple as just re-copying the files back to the original place they were. If you still have the bunged-up original, you just copy the files back over the original, don't re-register the guest, just start it up. There should be no errors.
If you don't have the original guest, or care to unregister (Remove) the original guest, then you're free to put the restored guest anywhere you want then re-register (Add) it and start it.
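The same operations from the CLI, for reference (a sketch; "MyVM" is a placeholder):
VBoxManage clonevm "MyVM" --name "MyVM-clone" --register   # full clone: new UUIDs, new MACs
VBoxManage snapshot "MyVM" take "before-upgrade"           # restoration point
VBoxManage snapshot "MyVM" restore "before-upgrade"
VBoxManage export "MyVM" -o MyVM.ova                       # packaged archive (appliance)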
David Both's "Convert your Windows install into a VM on Linux"
- VMware:
Wikipedia's "VMware"
Mint's Software Manager seems to have some utilities for VMware, but not VMware itself.
Mini vLab's "Hello, ESXi: An Intro to Virtualization (Part 1)"
I've heard that VMWare is better than VirtualBox for heavy graphics use.
VMware has three options for virtual network connections:
- Bridged: VM connects to LAN, gets its own IP address.
- NAT: private network on host machine, VM traffic to LAN uses host's IP address.
- Host-only: private network on host machine, VM not allowed to do traffic to LAN.
- KVM and Qemu-kvm:
KVM (Kernel-based Virtual Machine): a kernel module providing most of the infrastructure that can be used by a virtualizer. Actual control for the virtualization is handled by a QEMU-based application. qemu-kvm only provides an executable able to start a virtual machine. libvirt allows managing virtual machines in a uniform way. Then virt-manager is a graphical interface that uses libvirt to create and manage virtual machines.
To restate: KVM is the kernel-based virtual machine, QEMU is the quick-emulator (next level up), then libvirt orchestrates everything, and virt-manager (or GNOME Boxes) is the GUI.
Wikipedia's "Kernel-based Virtual Machine"
Wikipedia's "QEMU"
QEMU
ArchWiki's "KVM"
ArchWiki's "QEMU"
Quickemu
Mauro Gaspari's "Getting Started With KVM Hypervisor, Virtual Machines"
Alistair Ross's "How to setup a KVM server the fast way"
Carla Schroder's "Creating Virtual Machines in KVM: Part 1"
Linuxize's "How to Install Kvm on Ubuntu 20.04"
Alan Pope's "GNOME OS 40 without GNOME Boxes"
Chris Titus's "Macos on Linux"
I've heard that KVM really was designed for headless operation, but can do more.
From Debian Administrator's Handbook's "Virtualization":
KVM, which stands for Kernel-based Virtual Machine, is first and foremost a kernel module providing most of the infrastructure that can be used by a virtualizer, but it is not a virtualizer by itself. Actual control for the virtualization is handled by a QEMU-based application.
Unlike other virtualization systems, KVM was merged into the Linux kernel right from the start. Its developers chose to take advantage of the processor instruction sets dedicated to virtualization (Intel-VT and AMD-V), which keeps KVM lightweight, elegant and not resource-hungry. The counterpart, of course, is that KVM doesn't work on any computer but only on those with appropriate processors. For x86-based computers, you can verify that you have such a processor by looking for "vmx" or "svm" in the CPU flags listed in /proc/cpuinfo.
With Red Hat actively supporting its development, KVM has more or less become the reference for Linux virtualization.
From "Linux Bible" by Christopher Negus:
KVM is the basic kernel technology that allows virtual machines to interact with the Linux Kernel.
QEMU Processor Emulator: One qemu process runs for each active virtual machine on the system. QEMU provides features that make it appear to each virtual machine as though it is running on physical hardware.
Libvirt Service Daemon (libvirtd): A single libvirtd service runs on each hypervisor. The libvirtd daemon listens for requests to start, stop, pause, and otherwise manage virtual machines on a hypervisor.
The Virtual Machine Manager (virt-manager) is a GUI tool for managing virtual machines. Besides letting you request to start and stop virtual machines, virt-manager lets you install, configure, and manage VMs in different ways.
The virt-viewer command launches a virtual machine console window on your desktop.
From Felix Wilhelm article:
... In contrast to the other big open-source hypervisor Xen, KVM is deeply integrated with the Linux kernel and builds on its scheduling, memory management, and hardware integrations to provide efficient virtualization.
KVM is implemented as one or more kernel modules (kvm.ko plus kvm-intel.ko or kvm-amd.ko on x86) that expose a low-level IOCTL-based API to user-space processes over the /dev/kvm device. Using this API, a user-space process (often called VMM for Virtual Machine Manager) can create new VMs, assign vCPUs and memory, and intercept memory or IO accesses to provide access to emulated or virtualization-aware hardware devices. QEMU has been the standard user space choice for KVM-based virtualization for a long time, but in the last few years alternatives such as LKVM, crosvm, and Firecracker have started to become popular.
While KVM's reliance on a separate user-space component might seem complicated at first, it has a very nice benefit: Each VM running on a KVM host has a 1:1 mapping to a Linux process, making it manageable using standard Linux tools.
This means for example, that a guest's memory can be inspected by dumping the allocated memory of its user-space process or that resource limits for CPU time and memory can be applied easily. Additionally, KVM can offload most work related to device emulation to the user-space component. Outside of a couple of performance-sensitive devices related to interrupt handling, all of the complex low-level code for providing virtual disk, network or GPU access can be implemented in user-space.
From someone on reddit: "QEMU is L2 hypervisor and KVM is L1, which makes it a lot faster. QEMU works in the user-space, and KVM is a kernel module."
From someone on reddit: "KVM-based solutions seem to need quite a lot of fiddling for the initial setup. KVM-based VMs also lack ease-of-use features like folder-sharing or USB-passthrough."
From post by Ryan Jacobs: "libvirt+kvm is wayyyyy better than VirtualBox. My Android VM is incredibly snappy now. The mouse integration is better too."
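A minimal way to try the KVM/libvirt stack (a sketch; assumes virt-install and libvirt are installed, and the ISO file name is just an example):
sudo virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=10 --cdrom ubuntu-22.04-live-server-amd64.iso \
  --os-variant ubuntu22.04
virsh list --all       # see defined VMs and their state
virsh start testvm
virsh shutdown testvm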
Quickemu to run Win11 in a VM (3/2022):
# Install quickemu via package manager.
quickget windows 11    # 5.8 GB into $HOME/windows-11
# I edited $HOME/windows-11.conf to TPM=on and SecureBoot=off
quickemu --vm windows-11.conf
# As soon as window appears, press Enter. May have to do it again.
# See blue "Setting up" screen.
# See white "Windows Setup" dialog.
# Two mouse cursors, and "real" one is hitting edge of display
# (maybe happened only when VM was running firmware).
# Tab to fields, use arrow keys to change pull-down values.
# "Language to Install" can't be changed from "English - UK".
# Tab to Next button and press Enter.
# VM will access about 20 Microsoft URLs as it installs.
# When it tries to make you create a MS account, select
# "sign-in options" and then "offline account".
# Alt-Tab in Linux doesn't work in beginning, worked later.
# To escape VM, move mouse to bottom and select an app in tray.
# Couldn't find any graceful way to shut down Win11 !
# Opened a terminal and did "shutdown /s".
# Now 17 GB under $HOME/windows-11 !
# Ran it again, came up into Win11 right away, changes are persistent.
- Bochs:
A PC emulator that emulates Intel CPU, common I/O devices, and a custom BIOS.
Bochs
WSL2 on Windows is a VM: you run a Linux kernel inside it.
Apparently WSL2 runs the same Linux kernel no matter what "distro" type you pick.
Joey Sneddon's "How to Install WSL 2 on Windows 10"
mikeroyal / WSL-Guide
Chromebook:
Crostini (AKA "embedded Linux (beta)") on Chromebook is a VM: you run a Linux kernel inside it.
From /u/rolfpal on reddit 9/2019:
Crostini is the same as the embedded Linux (beta). It runs an instance of Linux in a container,
the container is "sandboxed", the beta comes with tools allowing you to run pretty much anything
in the Debian distro. It does support GPU acceleration, but you have to set it up. Crostini is an
official project of Google and is a work in progress.
Crouton is an un-official script that allows you to run an instance of Linux in "chroot", meaning it uses the Linux kernel of Chrome as the foundation for the distro of your choice. Crouton is more of a hack, and is suspect from a security point of view, but sometimes you can do more with it, particularly if hardware hasn't been activated in Crostini yet.
From /u/LuciusAsinus on reddit 9/2019:
Crostini can be run by a supported Chromebook as-is. Crouton requires you to put your computer
into "developer mode", which is less secure, and requires a dangerous prompt whenever you
reboot (dangerous in the sense that it says, essentially, "Something has gone horribly wrong,
hit space NOW to make it all better", but if you DO hit space you wipe your computer,
including your Linux partition). I lost my Linux 3 times when my kids used my computer;
very pleased that Crostini doesn't have that problem, even if it's a bit less powerful than Crouton).
Crostini: Don Watkins' "Run Linux apps on your Chromebook"
Managers
- GNOME Boxes:
GNOME Wiki's "Boxes"
LinuxAndUbuntu's "Walkthrough On How To Use GNOME Boxes"
Daniel Aleksandersen's "GNOME Boxes review: no-frills and no-thrills desktop virtualization"
- Vagrant:
VMs are provisioned on top of VirtualBox, VMware, AWS, or any other provider.
HashiCorp's Vagrant
SW Test Academy's "Vagrant - Virtual Machine Manager"
Alexander V. Leonov's "Deploying VirtualBox virtual machines with Vagrant"
Sarath Pillai's "What is Vagrant and How does it work"
Andy Mott's "Installing and running Vagrant using qemu-kvm"
Vagrant is CLI-only, seems a bit cumbersome with editing of config files. Its main advantage seems to be libraries of VM images, but you can get those for VirtualBox and VMWare at least from OSBoxes.
- Cockpit / cockpit-machines:
Cockpit
Peter Boy's "Reconfiguring virtual machines with Cockpit"
Seth Kenlon's "How to manage virtual machines in Cockpit"
- virt-manager:
virt-manager
Cassowary (use virt-manager to run Windows in VM on Linux)
- Multipass"
Linux Shell Tips' "Multipass - Run Ubuntu VMs on Demand for Any Linux System"
- Quickemu"
Quickemu
- OpenStack ?
OpenStack
For deploying VMs to cloud only, not locally ? There's a server part which lets a corporation implement their own cloud (and sell cloud services), then client/management part that lets users use it ?
Bryant Son's "6 open source virtualization technologies to know in 2020"
da667's "Resources for Building Virtual Machine Labs Live Training"
SK's "How To Check If A Linux System Is Physical Or Virtual Machine"
SK's "OSBoxes - Free Unix/Linux Virtual machines for VMWare and VirtualBox"
How to tell if you're running in a VM, and which one:
"hostnamectl | grep -i virtualization"
"systemd-detect-virt"
"sudo virt-what"
"sudo dmidecode -s system-product-name"
Emulator
A virtual machine has a complete copy of an operating system in it; a container shares a single underlying OS with other containers, mediated by the container framework/engine. VMs are a much more mature technology and have hardware support in the CPU (Intel VT-x / AMD-V), so in general they are more secure. An emulator is a layer that translates OS syscalls to syscalls of a different OS.
- Windows Emulators:
- Wine:
"Essentially, Wine is a 'dynamic loader' for Windows executables."
from Reboot and Shine's "How Wine works 101"
The CPU runs the instructions of the Windows executable.
Install wine-stable (535 MB) through Mint's Software Manager.
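Basic usage, once installed (a sketch; the .exe names are hypothetical):
winecfg                                   # creates and configures the default prefix ~/.wine
wine setup.exe                            # run a Windows installer or program
WINEPREFIX=$HOME/.wine-apps wine app.exe  # keep an app in its own separate prefix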
WineHQ
Wine Application Database (AppDB)
Xiao Guoan's "2 Ways to Install Wine on Linux Mint 19.1"
VITUX's "How to install and use Wine for Running Windows Programs on Ubuntu"
SavvyNik's "How to Run Windows Programs on Linux using Wine" (video)
ArchWiki's "Wine"
Reboot and Shine's "How Wine works 101"
It seems that removing Wine cleanly can be difficult; there can be a whole tree of dependencies. Record how you installed it, and try reversing that.
From /u/iLynux on reddit 1/2019:
"I have not had very much success with Wine, and I find that every program with a forms-based GUI looks ugly as hell and doesn't integrate into my user experience. So I avoid it. I would rather run an entire VM for Windows things."
I need to have Wine installed to run a particular build step of Electron. I want to disable it at all other times. I tried moving ~/.wine to ~/.wine-saved, but then electron-builder just re-installed it ! Is there a way to turn off Wine and keep it off until I turn it back on ?
- Proton / Steam Runtime:
From Valve, the game company. Proton ships with the Steam client; Windows games run on top of Proton.
"Proton is a compatibility layer for Microsoft Windows games to run on Linux"
From someone on reddit:
"Proton is based on wine. Wine is more for running windows applications and Proton is more for running games not natively supported on Linux."
- JSLinux:
Runs in a browser.
- bochs:
- bhyve:
- Wine:
- Game/app manager on top of emulator:
- Steam:
Proprietary store for games, by Valve. With "the Steam Runtime to make single Linux binaries run pretty much on any distro."
Ekianjo's "OpenTTD Went to Steam to Solve a Hard Problem"
- PlayOnLinux:
Uses Wine as a foundation, but simplifies finding and installing Windows installation files.
PlayOnLinux
VITUX's "How to Install and Use Windows Applications on Linux using PlayOnLinux"
SavvyNik's "How to Run Windows Programs on Linux using Wine" (video)
- CrossOver:
Mehedi Hasan's "CrossOver Linux: An Ultimate Guide To Run Windows Programs and Games on Linux"
"It's just Wine with wrappers, wizards and pre-tested 'known working' prefixes."
- Lutris:
Korbin Brown's "Install Lutris On Ubuntu 22.04"
Wikipedia's "Lutris"
ceos3c's "Gaming on Linux: Install Lutris on Linux Mint 19"
- Bottles:
Sk's "How To Run Windows Software On Linux With Bottles"
Aadesh's "Bottles - Easily install Windows apps on Linux"
- Android Emulators:
The choices seem to be:
- Boot as a native OS on your PC hardware, instead of Linux:
- Run inside VirtualBox:
- GenyMotion (also requires internet connection; free for trial only, but see free personal edition)
- Andro VM
- Android-x86 (can run offline)
- Run inside Wine:
- Jar of Beans (development stopped)
- Run inside the official Android Studio (see the CLI sketch after this list):
- Android SDK
From people on reddit:
Use JetBrains Toolbox to install Android Studio. You will need to start a new project to complete the configuration, just click next until the IDE has finished loading. Open the AVD Manager (Android Virtual Device) via Tools / AVD Manager. You can find more detailed documentation: managing-avds. ... If you want Google Play, you can install GApps if it's not included in the image.
- Run inside a container:
- Waydroid (requires Wayland)
(Switched to Linux video)
- Run as a Linux app:
- Run inside the Chrome browser:
- ARChon (failed, see below)
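For the official Android Studio route above, the SDK's command-line tools can do the same job without the IDE (a sketch; the API level and system-image path are just examples):
sdkmanager "platform-tools" "emulator" "system-images;android-33;google_apis;x86_64"
avdmanager create avd -n testavd -k "system-images;android-33;google_apis;x86_64"
emulator -avd testavd
adb devices -l   # the running emulator shows up as a device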
Yuri's "7 Best Android Emulators to Run Android Apps On Linux"
Mehedi Hasan's "Top 10 Android Emulators for Linux To Enjoy Android Apps in Linux"
Android Developers' "Run apps on the Android Emulator"
Android Developers' "Android Debug Bridge (adb)"
From people on reddit 3/2020: "GenyMotion. But if you're a dev, you want to choose Android Studio."
Tried Archon:
Went to ARChon. Downloaded Archon ZIP file, extracted it, ran Chromium 73 browser on Linux Mint 19.1, went to chrome://extensions, enabled "Developer Mode", clicked on "Load unpacked" button, selected the folder created by extracting from the ZIP file, extension "ARChon Custom Runtime 2.1.0..." appeared in list of Extensions. But there is a red "Errors" button.
Downloaded Sample App, extracted it, did "Load unpacked" on it, appeared in list of Chrome Apps. But the "Errors" button is red ("Unrecognized manifest key 'arc_metadata'"), and there's no "Launch" button.
At CLI, did "sudo npm install chromeos-apk -g". Then "chromeos-apk YOURAPKFILENAME". (Actually, I did "/usr/local/lib/nodejs/node-v10.15.3-linux-x64/bin/chromeos-apk YOURAPKFILENAME".) Worked.
Did "Load unpacked" on newly-unpacked APK. Got "Failed to load extension / There is no "message" element for key extName. / Could not load manifest."
Gave up on Archon.
Tried Anbox:
Went to Anbox.
sudo add-apt-repository ppa:morphis/anbox-support
sudo apt update
sudo apt install linux-headers-generic anbox-modules-dkms
sudo modprobe ashmem_linux
sudo modprobe binder_linux
ls -1 /dev/{ashmem,binder}
sudo snap install --devmode --beta anbox
snap info anbox
sudo apt install adb
anbox.appmgr   # or: click on Start, run Anbox Application Manager
# see emulated Android screen with 10 default apps
# go into Settings / Security and enable Unknown Sources
# turn off VPN
# go to Settings / About emulated device
# click on Build number seven times
# go to previous menu and see new Developer options item
# click on Developer options
adb devices -l   # see emulated device listed
adb install path/to/my-app.apk   # get Success
# in emulated Android screen, see icon for newly installed app
# run the app
# I don't see any Wi-Fi or cellular networks in Settings
# I need to set a VPN or proxy, can't do it
# update to latest version:
snap refresh --devmode --edge anbox
# if there's a bug:
sudo snap set anbox debug.enable=true
# run Anbox and the app
sudo /snap/bin/anbox.collect-bug-info
anbox system-info
# to set a proxy:
adb shell settings put global http_proxy IPADDR:PORTNUM
# to remove the proxy:
adb shell settings delete global http_proxy
adb shell settings delete global global_http_proxy_host
adb shell settings delete global global_http_proxy_port
# so to use with intercepting proxy:
# launch owasp-zap or Burp
adb shell settings put global http_proxy localhost:8080   # ZAP
# or
adb shell settings put global http_proxy 127.0.0.1:8080   # Burp
anbox.appmgr
# couldn't get that to work, internet access gives proxy error
# if something gets stuck, usually have to reboot Linux, but could try:
# kill all anbox related tasks running, then:
killall anbox
sudo systemctl restart anbox-container-manager.service
adb kill-server
# to uninstall Anbox but leave adb:
sudo snap remove anbox
sudo modprobe -r ashmem_linux
sudo modprobe -r binder_linux
sudo apt remove anbox-modules-dkms
sudo apt-add-repository --remove ppa:morphis/anbox-support
sudo apt update
# to see if only Anbox was using Snap, and Snap can be removed:
sudo snap list
# if only "core" is installed, you can remove Snap entirely:
sudo apt purge --auto-remove snapd
# an alarming amount of stuff gets removed, but the system is okay
Container/Bundle Systems Overview
A virtual machine has a complete copy of an operating system in it; a container/bundle shares a single underlying OS with other containers/bundles, mediated by the container/bundle framework/engine. VMs are a much more mature technology and have hardware support in the CPU (Intel VT-x / AMD-V), so in general they are more secure. An emulator is a layer that translates OS syscalls to syscalls of a different OS.
Levels of the container architecture
- Specification file: Dockerfile, snapcraft.yaml, flatpak manifest, distrobuilder YAML file (LXC), lxc.conf, ...
- Parts: usually packages from OS repo, files, directory structure.
- Builder: command: docker build or docker-compose build, snapcraft,
flatpak-builder, distrobuilder (LXC), lxc-create, ...
- Container image: result of the build.
- Manager: command or application: docker, snap, flatpak, containerd,
Portainer, LXD, ... Sometimes one manager
can run on top of a lower-level manager (e.g. docker on top of dockerd which is on top of containerd).
- Orchestrator: mostly having to do with clustering or deployment: Kubernetes, Docker Swarm, Nomad, AWS ECS, ...
- Container: running image.
- Container engine: LXC, runc, ... Sometimes one engine can run on top of a lower-level engine.
- OS facilities: namespaces, cgroups, ...
- Host OS: usually Linux.
- VM: ...
- Hypervisor: Hyper-V, ESXi, ...
- Machine: bare metal, or VPS on some cloud.
Ivan Velichko's "Learning Containers From The Bottom Up"
Ivan Velichko's "How Container Networking Works: Practical Explanation"
Tomas Tulka's "Linux-based Containers under the Hood"
Annwan's "The What, Why and How of Containers"
Alistair Ross's "What is Docker (and Linux containers)?"
Wikipedia's "Linux containers"
Opensource.com's "What are Linux containers?"
Merlijn Sebrechts' "Why Linux desktop apps need containers"
Ingo Molnar's "What ails the Linux desktop" (2012)
Romaric Philogene's "How To Use Buildpacks To Run Containers"
OS building blocks
Containers on Linux generally use chroot, filesystem mounting, namespaces, cgroups, seccomp, and maybe SELinux to provision and confine the app and strip services from its environment.
Nived V's "4 Linux technologies fundamental to containers"
Eric Chiang's "Containers from Scratch"
Adam Gordon Bell's "Containers are chroot with a Marketing Budget"
man namespaces
man cgroups
man seccomp
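A tiny taste of these building blocks (a hedged sketch; needs util-linux and systemd, and "some-command" is a placeholder):
# run a shell in new PID, mount, and network namespaces:
sudo unshare --pid --fork --mount-proc --net bash
ps aux    # only the processes in the new PID namespace
ip addr   # only a loopback device, and it's down
exit
# run a command in a transient cgroup with a memory cap:
systemd-run --user --scope -p MemoryMax=100M some-command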
Some comparisons (focusing mainly on differences)
- Docker: intended for server; multi-app; sandboxed; cross-platform; needs installed foundation.
- Snap: for desktop GUI and server and CLI and IoT; single-app; sandboxed; Linux-only; needs installed foundation.
- Flatpak: desktop-GUI-only; single-app; sandboxed; Linux-only; needs installed foundation.
- AppImage: for desktop GUI and server and CLI and IoT; single-app; not sandboxed; Linux-only; no installed foundation.
- Native binaries: for desktop GUI and server and CLI and IoT; multi-app; not sandboxed; Linux-only; no special installed foundation.
Some issues that containers/bundles could/do solve
- Dependencies:
From /u/lutusp on reddit 4/2020:
[They] solve the "dependency hell" issue by packaging all required dependencies with the application itself in a separate environment. This solves an increasingly serious problem (inability to install and run some applications) at the cost of another one -- an application's download and storage size and startup time go up.
By contrast, an application installed from the normal repositories must find all its dependencies (right version and properties) in the installed libraries, which unfortunately is a declining prospect in modern times.
Note: This works in reverse, too: I once did a "sudo apt remove" of some packages, which unexpectedly removed my whole desktop with them ! I quickly re-installed the desktop packages. But the potential damage was a bit limited by the fact that several apps important to me (including password manager) are running as snaps, and a couple more (including Firefox) are running as flatpaks.
- Separate the app from the distro:
App updates independent from system updates, if user wishes. E.g. you could use a LTS system/distro while doing rolling updates of snap apps.
Merlijn Sebrechts' "Why Linux desktop apps need containers"
Shift burden of packaging work from many distro packagers / repo maintainers to one app packager/dev. Especially valuable for large and frequently-updated apps such as browsers, and large app suites such as Office suites.
More direct connection between users and app developers. No longer a distro builder/maintainer between them.
- Single source (store or hub) for software:
(Although that can be bypassed if you wish.)
Using multiple repos and PPAs is insecure, has lots of duplication, and is confusing to some new users.
Many new users are familiar with app/extension Stores in Android, Apple, Chrome, Firefox, Burp Suite, VS Code, GNOME desktop, Thunderbird, more.
- Per-app permission model:
Many new users are familiar with an Android or iPhone model where they can set permissions per-app.
Michal Gorny's "The modern packager's security nightmare"
From comment on lobste.rs 1/2022:
[OCI] Containers are a mix of three things:
- A reproducible build system with managed dependencies and caching of intermediate steps.
- A distribution and deployment format for self-contained units.
- An isolation mechanism.
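Those three roles map roughly onto everyday commands (a sketch; "myapp" is a placeholder):
docker build -t myapp:1.0 .          # reproducible build from a Dockerfile
docker save myapp:1.0 -o myapp.tar   # self-contained unit you can distribute
docker run --rm --read-only --network none myapp:1.0   # isolation knobs at run time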
My cautions about app containers/bundles (Snap, Flatpak):
App containers/bundles (Snap, Flatpak) are a good idea, but the current implementations are lacking:
- Many containers/bundles have bugs with directory access or launching helper apps.
- I wish there was a requirement that only the original dev of an app could make a container/bundle
of it. How to know if I should trust some unknown "helpful" person who made a
container/bundle for a popular app ?
- Flatpak has a surprising and bad permission structure involving "portals",
and apparently snap is adopting it too.
From someone on reddit 9/2022:
"Important to note that Chrome and chromium-based browsers are much less secure when running in a Flatpak or Snap, as the Chrome sandbox is disabled."
Docker
Basics
Intended for server; multi-app; sandboxed; cross-platform; needs installed foundation.
Docker seems to be mostly for server applications that the user connects to through a web browser, or that other apps connect to through some web API such as a RESTful API. But it IS possible to run a normal GUI app in a Docker container, by connecting from app to the X display system in the base system: article1, article2
One difference between Docker and Snap/Flatpak/Appimage: you can run a whole collection of apps/services in one Docker container, with layers and IPC etc. The others generally are single-application (it could launch child processes, but I think they'd be outside the containment).
- "Docker Hub" is a repo of images, but anyone can push to it, so no security guarantees, and many images have no descriptions at all. Better to use Official Images. Also there's LinuxServer.io.
- An "image" is a static file in the hub or installed into your system.
- A "container" is a running instance of an image.
- A "container ID" is a unique ID that identifies a running container.
- A "swarm" is a cluster of running Docker Engines (probably spread across multiple hosts that can be managed together.
Note: Docker Hub and docker.com seem allergic to Privacy Badger or Facebook Container or Firefox, not sure. I have to use a Chrom* browser to access them.
From Teknikal's_Domain article:
Docker containers are really meant to be tight little self-contained boxes meant to do one thing and one thing only. If your app needs a web server and a database, make two containers, one for each, and link them together into their own little isolated network. In this sense a Docker container is really just running any regular command inside an isolated space where it can't interact with anything other than what it's given and what's been explicitly allowed.
Docker uses a tiny little bit of a runtime, containerd, that makes a slight bit of an abstraction layer. Each container is formed from an image, which is a filesystem and some extra configuration data. That filesystem is a series of layers, each representing modifications (deltas) to the previous. Each container also has an entrypoint, an executable program in the container namespace to use as process 1. This can be a shell like /bin/bash, but it can also be an app wrapper that does nothing else except start the service. The two main ways a container can interact with the outside world are through volumes and ports.
A volume is a named location in the container filesystem that the host's filesystem can be mounted at, either as a raw path, or a named volume managed by Docker. For example, to give a container access to the Docker runtime, you can map the host's /var/run/docker.sock to the container's /var/run/docker.sock.
A port is a network port that the container image has stated it expects traffic on. An image for a web server might, say, call out ports 80/tcp and 443/tcp as ones that it's going to use. These can be mapped to any available host port (through some Linux networking magic), but generally are mapped into the ephemeral port range of 32768-60999 (at least for Linux).
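Putting ports, volumes, and the private network together (a sketch; names and images are just examples):
docker network create appnet
docker volume create dbdata
docker run -d --name db --network appnet -v dbdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=secret postgres
docker run -d --name web --network appnet -p 8080:80 nginx
# "web" can reach "db" by name inside appnet; only host port 8080 is exposed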
Docker
Eric Kahuha's "LXC vs Docker: Which Container Platform Is Right for You?"
Wikipedia's "Docker"
Mayank Pandey's "Beginners Guide to Container Security"
Ivan Velichko's "Cracking the Docker CLI"
Images and getting started
Alahira Jeffrey Calvin's "From Skeptic to Believer: My Docker Story"
Radek Ostrowski's "Getting Started with Docker: Simplifying DevOps"
Docker Hub's "hello-world" image
To see DockerHub page for an image, go to "https://hub.docker.com/r/IMAGENAME". But many of the pages don't give basic info such as description, version number.
SK's "Getting Started With Docker"
Mauro Gaspari's "Getting Started With Docker Containers: Beginners Guide"
Bobby Borisov's "Docker Logs"
Piotr's "The Containerized Software Development Guide"
Ricard Bejarano's "How to write great container images"
Practical DevSecOps' "Understand Docker from a security perspective"
SK's "How To Analyze And Explore The Contents Of Docker Images"
Jessica G's "Digging into Docker layers"
SK's "Portainer - An Easiest Way To Manage Docker"
SK's "ctop - A Commandline Monitoring Tool For Linux Containers"
Magesh Maruthamuthu's "Portainer - A Simple Docker Management GUI"
Services you might want to run in containers:
awesome-selfhosted
Christopher Tozzi's "Docker: Not Just for Linux Anymore. Or Is It?"
Microsoft's "What is Docker?"
Nick Janetakis' "The 3 Biggest Wins When Using Alpine as a Base Docker Image"
/r/docker
You can have Docker containers hosted on some cloud service for free if usage (and RAM size) is low: Google Cloud Run, micro compute ?
Details
Ubuntu 20.04 has a snap for Docker, but no deb/apt package.
Most articles recommend installing straight from the Docker site, not distro repo's, which usually are a bit outdated.
If you want to install a deb, you'll have to use a PPA:
Linuxize's "How to Install Docker on Ubuntu 20.04"
Bobbin Zachariah's "How to Install Docker on Ubuntu 20.04"
Docker's "Install Docker Engine on CentOS"
On Mint, maybe "sudo apt install docker", "man docker", "man docker-run", "docker help", "sudo docker info", "sudo docker images", "sudo docker ps".
Apparently there are multiple versions of Docker: docker (old), docker-engine (old), docker.io, docker-ee (Enterprise Edition), docker-ce (Community Edition).
For Mint 19, use name "Bionic" anywhere you see "$(lsb_release -cs)" and follow instructions in Docker Docs' "Get Docker CE for Ubuntu"
Also Tenbulls' "Installing Docker on Linux Mint"
Installed Docker-CE on Mint 19.1:
# get rid of some stuff from previous attempts
sudo apt remove docker docker-engine docker.io containerd runc
sudo rm -rf /var/lib/docker
sudo rm /etc/docker/key.json
# reboot for good measure
sudo apt update
# Install packages to allow apt to use a repository over HTTPS:
sudo apt install apt-transport-https ca-certificates curl software-properties-common
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Verify that you now have the key
sudo apt-key fingerprint 0EBFCD88
# add stable repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
# install docker CE
sudo apt update
sudo apt install docker-ce
# at the end got "Job for docker.service failed because the control process exited with error code."
sudo systemctl status docker.service --full --lines 1000
# many msgs, ending with "Failed to start Docker Application Container Engine."
# rebooted to see if changed, and it did, looks good
# Verify that Docker CE is installed correctly by running the hello-world image
sudo docker container run hello-world
# Another check
sudo docker run -it ubuntu bash
# allow your normal user account to access docker socket
sudo groupadd docker
sudo usermod -aG docker $USER
# log out and back in
# test that it works
docker run hello-world
# failed with "Got permission denied while trying to connect to the Docker daemon socket ..."
# but after overnight/reboot, it works
# if you see "WARNING: Error loading config file ..."
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R
# list docker images
sudo docker image ls
# Yikes ! Docker is taking up about 6 GB on / !
# 5.6 GB for the OpenVAS image alone.
# make docker run upon boot (didn't do this)
sudo systemctl enable docker
# tried creating /etc/docker/daemon.json containing:
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
but it didn't fix my problems with OpenVAS
Later an update for Docker came through the normal Mint Update Manager, but the update failed a bit, some errors in the final scripts.
Docker creates a lot of rules in the iptables FORWARD chain. Seems to create some kind of "bridge" device ? Creates a "docker0" network interface you can see via "ip -c addr".
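To poke at that networking yourself (a sketch; needs a running Docker daemon):
ip -c addr show docker0        # the bridge device Docker created
docker network ls              # bridge / host / none, plus any user-defined networks
sudo iptables -L DOCKER -n     # Docker's own chain
sudo iptables -L FORWARD -n | head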
Installed snap of Docker on Ubuntu GNOME 20.04:
# if deb/apt is installed, remove it:
apt list docker.io
sudo apt purge docker.io
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker # launch new shell, changing group of current user to "docker"
snap install docker
snap info docker
docker --version
docker.help
docker --help | less
docker info | less
docker images | less # show currently installed images
docker ps # show currently running containers
docker run hello-world
docker run -p 2368:2368 ghost # on port 2368; see https://hub.docker.com/_/ghost
docker search weather
# To see DockerHub page for an image, go to "https://hub.docker.com/r/IMAGENAME".
docker pull rivethead42/weather-app # install image into system
docker images | less # show currently installed images
docker image inspect IMAGENAME | less
docker image history --no-trunc IMAGENAME >image_history # see steps used to build the image
docker run -it IMAGENAME sh # IF there is a shell in it, runs image and gives shell, so you can see contents
docker run -p 3000:3000 rivethead42/weather-app
# says "listening on port 3000"
# should be accessible via http://localhost:3000/
docker container list
docker stop CONTAINERID
docker images
docker rmi IMAGEID
# if it says there are stopped containers:
docker rm CONTAINERID
docker system prune -a --volumes
docker images
snap remove docker --purge
id # if you're still running as group "docker", ctrl+D to get out
sudo delgroup --system docker
[Recommended:] Installed deb of Docker on Ubuntu GNOME 20.04, generally following Bobbin Zachariah's "How to Install Docker on Ubuntu 20.04":
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common gnupg-agent
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
apt-cache policy docker-ce
# see that all the versions are on docker.com
sudo apt install docker-ce
sudo systemctl status docker --full --lines 1000
sudo usermod -aG docker $USER
newgrp docker # launch new shell, changing group of current user to "docker"
id # see that you're in group "docker" now
docker run hello-world
docker run -p 2368:2368 ghost # on port 2368; see https://hub.docker.com/_/ghost
docker ps -a
# in browser, go to http://localhost:2368/
docker pull spiffytech/weather-dashboard
docker images | less # show currently installed images
docker image inspect spiffytech/weather-dashboard | less
docker inspect spiffytech/weather-dashboard | grep -i -A 1 'ExposedPorts'
docker inspect -f '{{ .Config.ExposedPorts }}' spiffytech/weather-dashboard
# see steps used to build the image:
docker image history --no-trunc spiffytech/weather-dashboard >image_history
docker run -p 8080:8080 spiffytech/weather-dashboard
# says http://127.0.0.1:8080 or http://172.17.0.2:8080
# server ran, but browsers see a placeholder page from the app, probably not a Docker issue
# server says ctrl+C to kill, but that doesn't work
docker container list
docker stop CONTAINERID
docker images
docker image rm --force spiffytech/weather-dashboard
# IF there is a shell in it, runs image and
# gives shell, so you can see contents:
docker run -it IMAGENAME sh
docker info | grep "Docker Root Dir"
sudo ls /var/lib/docker
docker info | grep "Storage Driver"
sudo ls /var/lib/docker/overlay2
# remove Docker
docker container stop $(docker container ls -aq)
docker images
docker system prune -a --volumes
docker images
sudo apt purge docker-ce
sudo apt autoremove
cat /etc/apt/sources.list
sudo add-apt-repository -r "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
cat /etc/apt/sources.list
sudo apt update
Installed native Docker on Solus 4.3 Budgie:
sudo eopkg install docker
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker USERNAME
# log out and back in
sudo systemctl status docker --full --lines 1000
# Go to https://hub.docker.com/search?q=&type=image
# and search for image you want.
# Selected image: emdoyle/backgammon
# No description, but if you click on Tags you can
# get some way to view contents, a little.
docker pull emdoyle/backgammon
docker images
# To look at contents:
docker run -it emdoyle/backgammon sh
docker run -p 8080:8080 emdoyle/backgammon
# Error: missing module 'numpy'
# Was able to run "pip install numpy" inside the
# image, but that didn't fix the problem.
# Tried searching for other games, using terms
# 'game' and then 'chess', but descriptions
# are minimal, nothing obvious found. Asked
# on reddit for any pointers, got a chorus of
# "Docker is for servers" responses.
Create an image
Morgan's "How to Use Docker with Python in Just 10 Minutes"
Ashish's "Docker 101: A basic web-server displaying hello world"
Radek Ostrowski's "Getting Started with Docker: Simplifying DevOps"
Evaluations
From people on reddit 5/2020:
Docker gives similar benefits to a VM, but is very lightweight. There is negligible difference between running a normally-installed app and that same app in a container, in terms of memory, cpu performance, or disk speed. There can be significant disk space overhead, however, depending on the images you use.
A container image usually consists of a distro [really just libraries ?] and application(s), but not a kernel or init system. The host kernel is shared with the containers.
There are many benefits:
- If set up properly (e.g. non-root user), a compromised app cannot affect your host system.
No services run in a container, only the app you specify. No ports are accessible, except
those you specify. You can make the entire filesystem read-only. You can isolate a container's
network from the host and other containers.
- Most or all of the app setup and configuration is packaged into a container, often by
an expert. This can be a huge time-savings.
- You can start over at any time. You can blow away a container in seconds. If you want,
you can spin it back up in the original state.
- You can run many containers at a time. There is little overhead except for whatever
resources the apps take.
...
Be careful on advertising the security angle. Containers definitely provide separation but they don't actually have much in the way of security provisioning. I work in this segment of the industry and we don't even really trust actual VMs (eg, Xen or KVM) as security features and containers even less so.
...
I think the security angle of containers is an overhype situation. Any security they do provide is minimal and could be done better with SELinux and standard DAC.
...
From a security perspective, containers DO provide an atomic deployment model, which prevents bespoke server configs, and enables easy patching. I think that's a boon.
...
There's plenty of docker images that run as root, or ask you to mount the docker socket inside the container itself.
The docker daemon is a security nightmare. And in addition, containers allow you to easily bypass auditing: "docker run -v /:/host fedora:latest bash -c 'rm -rf /host'"
There's plenty of ways of shooting yourself in the face. Podman follows a much better security model and allows rootless containers (but still lacks some features, such as nested podman in unprivileged containers).
...
I would say one of the biggest negatives is security updates. Your system has a whole bunch of packages and you do updates and it updates all the stuff on your system. Notably, it doesn't update the stuff your docker images contain, so all your containers miss out on all system updates. You instead must rely on the image maintainer to update the image and then you must pull the updated image. Often the image never gets updated. This means that while docker is pretty effective at keeping stuff in the containers, the containers tend to be easier to attack and the stuff inside them is more vulnerable.
...
One advantage not mentioned is you can just run Docker containers without actually installing anything other than Docker. You can run a dozen containers and then just remove them all and have your vanilla OS. It makes it very easy to get your machine up and running again after a format or swapping out your pi's SD card.
...
Also avoiding "dependency hell" or conflicting packages on the same system. Want to run 4 different versions of Ruby on the same box with relative ease? Docker.
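[For example, using the official Ruby images (tags illustrative):
docker run --rm ruby:3.2 ruby -v
docker run --rm ruby:2.7 ruby -v
Both versions run side by side without touching the host's Ruby.]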
...
Docker versus Snaps: they are similar only in that both are called "containers" and both have runtimes written in golang. Completely different purposes and design. Docker puts a custom execution environment around a process or maybe a couple and knits together all the different bits and pieces through virtualized network communication (this is to make things "scalable", you can knit together several servers as a cluster and spawn more processes of one kind or other as they're needed). Snaps put a custom execution environment around a complete application with the idea that it is decoupled from the host system in a few interesting ways. Snaps are all-in-one, and meant to be a way for system users to consume software. Docker containers are convenient minimal execution platforms for small pieces of a larger service system.
...
I like that I can basically just build an image and run it wherever I need to whether it be from container registry or building from a Dockerfile. It works really well with making infrastructure more immutable. Rather than patching stuff, I just create a new updated image, regression test and swap the new one in which results in less downtime overall. I don't really want to install a ton of dependencies on hosts if I can avoid it. I'd rather just isolate them to a container.
The benefits grow too when adding container orchestration like Kubernetes. It makes it easier to use containers for things like HA, services that need load balancers, maintaining healthy services, etc.
From someone on reddit:
TBH I don't like to use someone's image [from Docker Hub] unless they have a GitHub repo link
and I can inspect the Dockerfile and any scripts added to the image. Even then I like to build
my own image based off theirs. This is also a good excuse to learn how to build a docker image.
I've used this method and moved all my workloads minus system mail messages to docker containers.
Makes rebuilding and redeploying a server super simple and fast.
Paraphrased from someone on reddit:
"The Docker user/group can command the Docker daemon through a socket. If that user/group issues a command such as 'docker run -v /:/mnt:rw debian', the root user inside the container will have full root access to the host filesystem. So you must treat that Docker user/group as if it has sudo or root privileges."
Docker has a known problem where it adds an iptables rule, breaking firewalling ? issue 4737 and article1 and article2 and ufw-docker. Best way to fix is Docker's "Docker and iptables" ? Also: "fail2ban by default operates on the filter chain, which means Docker containers are not being filtered the way you might expect."
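A hedged example of the approach from Docker's "Docker and iptables" doc: insert rules into the DOCKER-USER chain, which Docker consults before its own rules (the interface name and source network here are placeholders for your setup):
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP   # only allow LAN traffic to reach published ports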
Flatpak
Basics
Desktop-only; single-app; sandboxed; Linux-only; needs installed foundation.
(Originally xdg-app)
Wikipedia's "Flatpak"
Flatpak.org
Flatpak on GitHub
Joey Sneddon's "Linux Mint Throws Its Weight Behind Flatpak"
Cassidy James Blaede's "elementary AppCenter + Flatpak"
From TheEvilSkeleton's "An introduction to Fedora Flatpaks":
"Flatpak is a distribution-agnostic universal package manager, leveraging bubblewrap to separate applications from the system, and OSTree to manage applications. There are multiple Flatpak repositories (remotes in Flatpak terminology), such as Flathub (the de-facto standard), GNOME Nightly, KDE and finally Fedora Flatpaks, Fedora Project's Flatpak remote."
From post by Christian F. K. Schaller:
"The main challenge [for application-deployers] was that the platform was moving very fast and it was a big overhead for application developers to keep on top of the changes. In addition to that, since the Linux desktop is so fragmented, the application developers would have to deal with the fact that there was 20 different variants of this platform, all moving at a different pace. ... So we concluded we needed a system which allowed us to decouple of application from the host OS ...""
Images and getting started
Jack Wallen's "How to Install and Use Flatpak on Linux"
Linux for Devices' "How to apply any theme to a Flatpak application"
Gornius / Flatpak - use system theme.sh (Custom themes either in home dir or installed through flatpak)
Jesse Smith's "Flatpak vs Snap sandboxing"
Details
man flatpak
# If not installed:
sudo apt install flatpak
# Manage permissions:
flatpak install flatseal
# Manage apps and remotes:
flatpak install io.github.flattool.Warehouse
# List known remote repositories:
flatpak remotes
# Distro may have a distro-specific or DE-specific repo specified.
# If empty, do:
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Find flatpaks:
flatpak search STRING
# Example install:
flatpak install flathub Meteo
flatpak list
flatpak run com.gitlab.bitseater.meteo
flatpak uninstall Meteo
# List info about installed flatpaks:
flatpak list
flatpak list --app
flatpak list --all
flatpak list --app --columns=size,name,app
flatpak list --all --columns=size,name,app,ver
# Save list of all installed flatpaks:
flatpak list --app --columns=application | sed ':a;N;$!ba;s/\n/ /g' >apps.txt
# Install the same flatpaks on another system:
flatpak install flathub $(cat apps.txt) --noninteractive
flatpak install APPNAMEs
flatpak list # make note of APPID
du -sh /var/lib/flatpak/app/* # sizes of all installed apps
ls -l ~/.var/app # user's config should be under here
flatpak info APPID
flatpak permission-show APPID
# To change permissions, run Flatseal.
# Could use "flatpak permission-set" on CLI, but it's complicated:
# https://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak-permission-set
flatpak info -m APPID # shows SOME of the settings done by flatseal
flatpak info --show-permissions APPID # shows rest
# BUT: any directory permissions are overridden by anything
# the user does in a file open/save dialog; no way to stop
# the user from reading/writing where they choose in a dialog.
# System-level:
sudo flatpak override APPID --filesystem=/xyz # adds /xyz to filesystem list
# To delete it, delete file in /var/lib/flatpak/overrides
# User-level:
flatpak override --user APPID --filesystem=/xyz # adds /xyz to filesystem list
# To delete it, delete file in ~/.local/share/flatpak/overrides or run Flatseal.
ls /var/lib/flatpak
# If app has command-line options:
flatpak run APPFULLYQUALIFIEDNAME --help
# Update all images to latest:
flatpak update
# Revert an app to a previous version:
flatpak remote-info --log flathub APPID
sudo flatpak update --commit=COMMITCODE APPID
# To poke around inside an image, out of curiosity:
flatpak run --devel --command=sh APPFULLYQUALIFIEDNAME
# Cleanups:
sudo bash -c "! pgrep -x flatpak && rm -r /var/tmp/flatpak-cache-*"
flatpak uninstall --unused
# Also see application "Flatsweep".
# Removing Flatpak entirely
flatpak list --app # see apps only
# remove the apps
flatpak list # see everything
# only support runtimes should be left
# now run native package manager to remove flatpak
# then cleanup:
sudo rm -r /var/tmp/flatpak-cache-*
To get beta versions of apps, you need to add the Flathub Beta repo:
sudo flatpak remote-add flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo
sudo flatpak install flathub-beta APPNAME
Disappointed with Flatpak security model:
I installed the Flatpak image of Firefox browser, and thought it had a bug. I have Flatpak permissions set (through Flatseal) to not allow access to ~/Documents, yet I can save files into there.
Turns out that this works as designed; it's a surprising new security model for containerized GUI apps. There is no way to prevent the user from writing a file to anywhere they choose. User action in a GUI dialog overrides any Flatpak/Flatseal permission settings.
See Disappointed with Portals security model
TheEvilSkeleton's "What Not to Recommend to Flatpak Users"
Daniel Aleksandersen's "How to back up Flatpak user-data and apps"
Evaluations
I like Flatpaks, but they have glitches in the "portals" area: Sometimes apps don't remember "last folder used". Sometimes a file-handling dialog or a "what application should be used to open files of type X ?" dialog opens in the background (because it's a separate app). The file-handling portal dialogs don't enforce directory restrictions set in Flatseal, or even warn about them. Sometimes if you close applications and then try to unmount a disk, it says "busy", because a portal process still has a file open.
Nicholas Fraser's "Flatpak Is Not the Future" (about a lot more than just Flatpak)
TheEvilSkeleton's "Response to 'Flatpak Is Not the Future'"
Flatpak - a security nightmare
Flatpak - a security nightmare - 2 years later
TheEvilSkeleton's "Response to flatkill.org"
Joey Sneddon's "Flatseal Makes it Easy to Manage Flatpak Permissions"
Hugo Barrera's "The issue with flatpak's permissions model"
Claim on reddit: "Flatpak has huge security loophole called portal where an app can get access to private data."
Hanako's "Flatpak is not a sandbox"
Thom Holwerda's "If your immutable Linux desktop uses Flatpak, I'm going to have a bad time"
Martijn Braam's "Developers are lazy, thus Flatpak"
Jorge Castro's "The distribution model is changing"
From reddit:
> "are packages checked for safety before being added to the official flathub repositories?"
"The manifest needed to build the package is checked. The Software itself not."
From someone on reddit:
The reason that Flatpak doesn't require root permissions [for installs] is
because it doesn't change any files that require root permissions.
In other words, it installs applications on a per-user basis in your home folder. You'll notice that the software
you install for one user doesn't appear for the others.
From someone on reddit:
Re: Flatpak when compared with AppImage?
Portability:
Flatpak is a sandbox containing a complete runtime and application. This means it is portable to a large range of systems without any extra testing or integration work required.
AppImage has no concept of "runtimes" and it relies on the host for the majority of libraries, meaning it is only as portable as the packager explicitly tests it to be, and it is very easy to make mistakes or introduce incompatibilities.
Security:
Flatpak as mentioned is a sandbox and is being developed in coordination with other projects to improve security.
AppImage has no sandbox and you have to rely on using external tools to add such things. In some cases not being sandboxed would be considered an advantage.
Distribution:
Flatpaks are distributed via repositories which anybody can host, so users can get efficient updates in a timely manner.
AppImage again relies on extra tooling for this so you often don't get updates and they are not distributed in an efficient format.
The reason for AppImage's success is that the developer is very prolific in doing the work of packaging everything he finds on the internet. And I guess it's a cool demo to just click on a binary. It isn't a particularly advanced or forward-looking technology.
I guess I'll also put a disclaimer that I contribute to Flatpak; Not because I am paid to or anything, it is just a solid technology that improves the ecosystem in my eyes.
From someone on reddit:
Re: Flatpak when compared with AppImage?
The one huge advantage of Appimages is that it allows you to keep multiple versions of the same software around, in a very trivial and easy to understand manner. The App is just a file after all.
Flatpak does not do this, it has some support for multiple versions, but the control over what versions you are allowed to keep is up to the developer not the user. So the developer can offer a "beta" branch, but if the beta borks the user has no way to go back to the previous beta version.
One area where both fail is modular tools. They both assume that your software comes as one big monolithic executable. If your software comes as a bunch of little tools that can be used together, you are SOL with both of them.
The sandboxing of Flatpak is so far mostly smoke and mirrors, as it lacks granularity and faking. Wouldn't trust it to stop any real attempt at breaking stuff at this point. Might change in the future.
The way Flatpak deals with dependency is honestly complete garbage, as it only allows three dependencies: GNOME, KDE and Freedesktop. That's it. If you have a simple Qt app, you have to suck in all of KDE. No granularity or extensibility.
Overall I prefer AppImage, since it's much simpler and easier to understand from a user's point of view. Flatpak adds a lot of complexity, while still failing to solve any of the interesting problems.
From people on reddit 10/2019:
Re: Flatpak when compared with Snap?
One big difference is that Flatpak is desktop only, snap is designed to also work for servers. As a developer/power user I use many CLI applications including proprietary ones, so from this point of view snaps are more flexible.
...
Flatpak is designed a lot cleaner imo. Snaps are squashfs files which integrate TERRIBLY in modern filesystems. On the other side, flatpaks use the OCI format which is a nicer approach for developers of applications and distributions.
...
Also Flatpaks should be generally much more storage-efficient as individual flatpaks can share runtimes and files for all installed flatpaks are automatically deduplicated.
From someone on reddit: "flatpak can't run container engines ... and so things like lxd, firejail, and/or docker can't run in a flatpak"
From someone on reddit 5/2020:
- AppImage tends to blow up the size and is easy to get wrong, because it seems to work but then
doesn't on other distros if you forget to bundle some library.
- Snap only works well on few distros (full functionality requires AppArmor,
which is incompatible with SELinux).
- Flatpak is pretty much universally supported.
From someone on reddit 6/2020:
Terminal apps just work on snap, that's why Canonical heavily pushes
snap on the server while flatpak is completely useless here.
So here I am using flatpak on the workstations and snap on the servers.
...
The first issue is that flatpaks only work if a desktop is loaded. I think this is an issue that can be solved tho. The second issue is that flatpaks depend on so-called portals, again something that does not currently work without a desktop loaded. Then we have the issue that snaps can be called like normal apps in the terminal, while you have to do something like "flatpak run org.something.whatever", or you add a certain folder to your path and only have to call "org.something.whatever".
From someone on reddit:
> pros and cons of using the Firefox flatpak compared to a Firefox apt/rpm ?
For something performance-critical like a web browser, I would really stick with installing through [package manager]. Firefox's Flatpak is maintained by Mozilla themselves which is a nice touch. The sandboxing in Flatpak probably won't do too much for you in Firefox's case since you're gonna be poking a lot of holes for camera, sound, filesystem, etc. Take it with a grain of salt since I haven't actually read the manifest myself.
My experience with performance in Flatpaks comes from running Dolphin Emulator and dropping 30 frames a second compared to its DNF equivalent on my machine. I do use Flatpak for apps like RuneScape and Minecraft though and can't complain.
My usage of Flatpak is pretty strictly bound to installing closed-source third-party software. Discord, Slack, Spotify, etc. I do have Element (Riot) and Fractal installed through Flatpak for various reasons.
Snap (Snapcraft, Canonical)
Basics
Intended for desktop and server and IoT; single-app; sandboxed; Linux-only; needs installed foundation.
Snapcraft
Snapcraft.io forum (feature requests etc)
Images and getting started
Abhishek Prakash's "Complete Guide for Using Snap Packages In Ubuntu and Other Linux Distributions"
Jack Wallen's "Get Started with Snap Packages in Linux"
Eric Londo's "Snapcraft: Software for All"
Canonical snapcraft "Getting Started"
Joey Sneddon's "How to Change Snap App Theme on Ubuntu"
Jesse Smith's "Flatpak vs Snap sandboxing"
Services you might want to run in containers:
awesome-selfhosted
Details
snap list # see all snaps installed in system
sudo du -sh /var/lib/snapd /snap # disk space of all snaps installed
snap find PATTERN # find new snaps to install
snap install SNAPNAME
ls -l ~/snap # user's config should be under here
snap info SNAPNAME # show name, description, ID, etc
snap info --verbose SNAPNAME # adds info about permissions
# see https://snapcraft.io/docs/snap-confinement
sudo snap get SNAPNAME # show configuration (key-value pairs)
sudo snap get SNAPNAME KEYNAME # show nested values for a key
sudo snap get system # show configuration (key-value pairs)
sudo snap get system refresh # show nested values for a key
# By default, last 3 versions of each app are cached.
# Change to minimum:
sudo snap set system refresh.retain=2
# How to get/set "device-serial" value for this system ?
# sudo less /var/lib/snapd/state.json
# sudo jq . /var/lib/snapd/state.json | less
sudo jq .data.auth.device.serial /var/lib/snapd/state.json
sudo sed -i "s/OLDVALUE/NEWVALUE/" /var/lib/snapd/state.json
sudo apparmor_status # see that snaps have profiles
# Where are the AppArmor profiles ? I think the basic one is:
more /etc/apparmor.d/usr.lib.snapd.snap-confine.real
# and then individual tweaks are in:
cd /var/lib/snapd/apparmor/profiles
Snap permissions:
snap info --verbose APPNAME # overall info, including confinement
snap connections APPNAME
# the "plug" is the consumer, the "slot" is the provider
snap interfaces APPNAME
snap connect APPNAME:PLUGINTERFACENAME APPNAME:SLOTINTERFACENAME
snap connect APPNAME:PLUGINTERFACENAME :SLOTINTERFACENAME # a system/core slot
Maybe add a connection to slot ":system-files" or ":removable-media" ?
Snapcraft's "Interface management"
Snapcraft's "Supported Interfaces"
But I don't see for example how to restrict KeePassXC to one dir for database and another for attachments. The app just provides a "keepassxc:home" plug, and that is connected to the ":home" slot, so app has full access to everything under my home directory ? There seems to be no way for me to define a custom slot, and the two names have to match anyway. My only choices are to disconnect completely, or connect completely ? [By default, an app can read/write in ~/snap/APPNAME/common and ~/snap/APPNAME/current.] Relevant: jdstrand comment 3/2018. Created feature request.
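A sketch of that all-or-nothing choice, assuming the KeePassXC snap:
snap connections keepassxc | grep home
sudo snap disconnect keepassxc:home   # revoke ALL home-directory access
sudo snap connect keepassxc:home      # restore it; no finer granularity available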
I think permissions for a Snap image can be managed through the Snap Store ?
Only the snap's dev can change this: Snapcraft's "Snap confinement"
Ubuntu's "Create your first snap"
Jamie Scaife's "How To Package and Publish a Snap Application on Ubuntu 18.04"
Merlijn Sebrechts' "Verifying the source of a Snap package"
A snap can be built on top of various versions of the Ubuntu API ("bases"); 16.04 and 18.04 are available. See for example "core18" in output of "snap list".
How to prevent a snap from ever being updated
New facility 11/2022:
Igor Ljubuncic article
Instead of running "snap install foo", do "snap download foo ; snap install foo.snap --dangerous". That sideloads the snap onto your system, so that it won't get updates from the store. (Doesn't work for "core" snap.)
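A concrete sketch (snap name "foo" is a placeholder; snap download writes a versioned file):
snap download foo                          # fetches foo_NN.snap and foo_NN.assert
sudo snap install foo_*.snap --dangerous   # sideloaded, so the store won't auto-refresh it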
Alan Pope's "Disabling snap Autorefresh"
Igor Ljubuncic's "How to manage snap updates"
dr knz's "Ubuntu without Snap"
Pratik's "How to Remove Snap From Ubuntu"
Alan Pope's "Snap Along With Me"
Alan Pope's "Hush Keyboards with Hushboard" (building a snap)
Snap Evaluations
Snaps work okay for me. But I distro-hop, and on any given distro, Snap may or may not be supported, but Flatpak WILL be supported.
Good things / intended features
- Install / remove app without affecting the rest of the system, especially other apps.
- Bring dependencies with snap, so easier to install / remove.
- Bring dependencies with snap, so fewer combinations / environments for devs and Support people to deal with.
- There are some cases where app A wants to see glibc version N in your system, and app B wants to
see glibc version N+1, and you want to use both apps A and B. With snaps, you can do that.
- There are some cases (mostly devs, or multi-user systems) where you want
to be able to install and run both version N of app A
and version N+1 of app A in your system. With snaps, you can do that.
- App updates independent from system updates, if user wishes. E.g. you could use a LTS system/distro while
doing rolling updates of snap apps.
Merlijn Sebrechts' "Why Linux desktop apps need containers"
- App updates without having to reboot OS (some distros force an OS restart
if you update any native packages).
- For devs / distro packagers / repo maintainers: build app once and it runs on many
distros and many releases of a distro.
- Shift burden of packaging work from many distro packagers / repo maintainers to one app packager/dev.
Especially valuable for large and frequently-updated apps such as browsers,
and large app suites such as Office suites.
- More direct connection between users and app developers.
No longer a distro builder/maintainer between them.
- Single source for software (Snap Store), although that can be bypassed if you wish.
More familiar to new users who are used to single app/extension Stores in Android,
Apple, Chrome, Firefox, Burp Suite, VS Code, GNOME desktop, Thunderbird, more.
- When installing a deb, any scripts provided by the app dev run as root and unrestricted.
When installing a snap, only snapd is running as root, any scripts from app dev are
running non-root and contained.
- A user who does not have root privileges can install a snap but not a deb.
Interview of Alan Pope
Liam Proven article
Jorge Castro's "The distribution model is changing"
From /u/lutusp on reddit 4/2020:
Flatpaks, Snaps and Appimages solve the "dependency hell" issue by packaging all required dependencies with the application itself in a separate environment. This solves an increasingly serious problem (inability to install and run some applications) with another one -- an application's download size, storage size, and startup time go up.
By contrast, an application installed from the normal repositories must find all its dependencies (right version and properties) in the installed libraries, which unfortunately is a declining prospect in modern times.
From someone on reddit:
> What is the potential of snaps? What does it do better than apt?
Snaps are a great way to isolate the program you are executing from the rest of the system. So the main idea behind Snaps is security and ease of install (distro-agnostic), as .deb based programs (and many others like it) are able to access the entire disk (with read-only permission), which can create a lot of security breaches in the system overall. With Snaps you are able to control what the software can read/write, what kind of hardware it can access (i.e. webcam or a microphone) and a lot of other options.
From someone on reddit:
"snaps are compressed, and are not uncompressed for installation -- certain snaps actually are smaller than their installed deb-packaged counterparts"
From /u/timrichardson on reddit 1/2020:
Once, people said the GUI applications were way too full of bloat. And before that, people despised compilers;
hand-crafted assembly language is smaller and faster. The history of coding is to trade off memory and disk space
for more efficient use of humans; it's the history of the algorithms we use and the tools we use,
it's the reason for layer upon layer of abstraction that lets humans steer modern computers.
Like the arrow of time, this is a one-way trend, but unlike time, it doesn't just happen,
it happens because it saves the valuable time of the creators: the coders, the packagers.
Snaps and flatpaks are another example of this. The less time wasted repackaging apps for a
million different distributions, the more apps we all get. When you've got 2% market share of a stagnant
technology (desktop computing), you should grasp at all the help you can get, if you want to see it
survive and maybe even thrive.
And by the way, the binary debs you are used to are not targeted or optimised for your hardware; they target a lowest common denominator. The difference can be significant: look how fast Clear Linux is. Maybe you should swap to Gentoo. My point is that you already accept bloat and performance hits in the name of convenience; you are used to it so you don't notice. But traditional packaging is an old technology, so is it so surprising that there are new ideas?
Perhaps Snaps are more oriented toward server and IoT, rather than desktop. Updating on server and IoT could benefit greatly from separating OS update from apps update. Given that Canonical makes little/no money from desktop, this would make sense. And things such as Snap launch time are not such big issues on server and IoT. But probably most server software is run via containers, not native apps, so Snap wouldn't apply.
Negative views
From /u/10cmToGlory on reddit 2/2019:
The snap experience is bad, and is increasingly required for Ubuntu
As the title says. The overall user experience with snaps is very, very poor. I have several apps that won't start when installed as snaps, others that run weird, and none run well or fast. I have yet to see a snap with a start up time that I would call "responsive". Furthermore the isolation is detrimental to the user experience.
A few examples:
- Firefox now can't open a PDF in a window when installed as a snap on Ubuntu 18.04 or 18.10.
The "open file" dialog doesn't work. The downloads path goes to the snap container.
- Stuff that I don't need isolated, like GNOME calculator, is isolated. Why do I care?
Because as a snap it takes forever to start, and the calculator is exactly the app I'd like to have start quickly.
- Other snaps like simplenote take so long to open I often wonder if they crashed.
- Many snaps just won't open, or stop opening for a plethora of reasons.
Notables include bitwarden, vs code (worked then stopped, thanks to the next point),
mailspring, the list goes on.
- The auto-updating is the worst thing ever. Ever. On a linux system I can disable
auto-updates for just about everything EXCEPT snaps. Why do I care? Well, one day,
the day before a deadline, I sat down to do some work, only to find that vs code
wouldn't open. A bug was introduced that caused it to fail to open, somehow.
As the snap auto-updated, I was dead in the water until I was able to remove it and
install it via apt (which solved the problem and many others). That little auto-update
caused me several hundred dollars in lost revenue that day.
[New facility 11/2022:
Igor Ljubuncic article]
- Daemons have to be started and stopped via the snap and not systemd. This is a terrible design choice,
making me have to change my tooling to support it for daemons (which I'm not going to do, by the way).
A great example of that is Ansible - until very recently there was no support for snaps.
- Logging is a nightmare. Of course all the logs are now isolated too, because for some reason
making everyone change where to look for help when something is not working just sounds like
a good idea. As if it's not enough that we have to deal with binary systemd logs,
now we get to drill into individual snaps to look for them.
- Most system tools are not prepared for containerization, and make system administration
much more difficult. A great example is mount. Now we get to see every piece of software
installed on the system when we run mount. Awesome, just what I wanted. This is just one example of many.
- Snaps are slowing down my system overall, especially shutdown. Thanks to its poor design,
there are multiple known issues with snaps and lxd, for example, shutting down running containers.
This is just one of many that makes me have to force shutdown my machine daily.
- Creating a snap as a developer is difficult and documentation poor. You have to use a
Ubuntu 16.04 image to create your snap, which alone makes it unacceptable. I found myself
in dependency hell trying to snap package some software that used several newer libraries
than what Ubuntu 16.04 had on offer. The YAML file documentation is laughably bad,
and the process so obtuse that I simply gave up, as it just wasn't worth the effort.
As for Ubuntu in general, I'm at a crossroads. I won't waste any more time with snaps, I just can't afford to and this machine isn't a toy or a hobby. It seems that removing snaps altogether from a Ubuntu system is becoming more and more difficult by the day, which is very distressing. I fear that I may have to abandon Ubuntu for a distro that makes decisions that are more in line with what a professional software developer who makes their living with these machines requires.
From /u/HonestIncompetence on reddit:
IMHO that's one of several good reasons to use Linux Mint rather than Ubuntu.
No snaps at all, flatpaks supported but none installed out of the box.
From /u/MindlessLeadership on reddit 10/2019:
... issues with Snap as a Fedora user.
- The only "source" for Snaps, the Snap store, is closed-source and controlled by a commercial entity,
Canonical. Sure, the client and protocol are open source, but the API is unstable and the repository url
is set at build-time. Even a Canonical employee admitted at Flock that it was impractical to build another source right now.
- Snap relies on many Ubuntu-isms; it's obvious it was never originally made as a cross-distro package format.
It's annoying to see it advertised as a cross-distro package format, when as a Fedora user, I can tell
you Snap does not work nicely with Fedora (it has improved somewhat in the last year), with SELinux issues etc.
At one point running Snap would make the computer nearly freeze up because the SELinux log would be getting flooded.
It also relies on systemd, although that itself isn't an issue but it raises design questions.
- Similar to above, snapcraft only runs on Ubuntu. So you have to use Ubuntu to build a Snap.
- /snap and ~/snap. If you don't do the former, you can't run 'classical snaps'. This not only violates the FHS,
but doesn't work when / is RO such as under OStree systems such as Silverblue.
- The reliance on snapd and on loopback mounting. I don't really like df showing a line for each
application/runtime installed, even if it's not running and the entire thing of at-boot needing to mount
potentially dozens of loopback files for my applications seems like a massive hack. A recent kernel update
on Fedora broke the way Snap mounts loopback files (although it was fixed). Snaps were also broken
because Fedora moved to cgroups2.
- Since they're squashfs images, you can't modify them if you don't have the snapcraft file.
Flatpak as a comparison, stores files you can edit in /var/lib/flatpak.
- If I wanted to use Ubuntu to run my applications (Snap uses an Ubuntu image), I would use Ubuntu.
- snapd needs to run in the background to run/install/update/delete Snaps. This seems like a backwards
design choice compared to rpm and Flatpak, which elevate permissions where needed via polkit.
From /u/schallflo on reddit 10/2019:
Snap:
- Does not allow third-party repositories (so only Canonical's own store can be used).
[But you could download snaps manually and install with --dangerous. Someone also said you could download and then "sudo snap ack yoursnap.assert; sudo snap install yoursnap.snap".]
- Only has Ubuntu base images, so every developer has to build on Ubuntu.
- Forces automatic updates (even on metered connections). [New facility 11/2022: Igor Ljubuncic article]
- Depends on a proprietary server run by Canonical.
- Relies on AppArmor for app isolation (rather than using cgroups and namespaces like everyone else), which is incompatible with most Linux distributions, yet it keeps advertising itself as a cross-distribution package format.
Merlijn Sebrechts' "Why is there only one Snap Store?"
Liam Proven's "Canonical shows how to use Snaps without the Snap Store"
Wikipedia's "Snap (package manager)"
From /u/ynotChanceNCounter on reddit 1/2020:
It's a bloated sandbox, tied to a proprietary app store, they've gone out of their way to make it as
difficult as possible to disable automatic updates, so now trust in all developers is mandatory.
Canonical's dismissive toward arguments against the update thing, they took the store proprietary
and for their excuse they offered, "nobody was contributing so we closed the source." Excuse me?
And all the while, they're trying to push vendors to use this thing, which means I am stuck with it. And I'm stuck with the distro because they've got the market share, and that means this is the distro with official vendor support for d*mn near everything.
From people on reddit 3/2020:
Snap is pretty much hard-wired not only to Ubuntu, but also to Canonical. Snap can only use one repository at a time, and if it is not Canonical's, users will miss most of the packages. ... Also, some snap packages simply assume that the DE is GNOME 3.
...
... currently Snap (on the server side I think) is not yet open-source.
Snap automatic update issues
Smaller Fish's "Snap Updates Happen Without User Consent"
New facility 11/2022: Igor Ljubuncic article
I think also you get updates on the developer's schedule. So suppose some horrible security hole is found in library X. Each snap (and flatpak and appimage) in your system may have its own copy of library X. You can't update one copy (package) of library X and know that the issue has been handled. [I'm told that flatpak allows sharing of libraries, if the developer sets that up explicitly, maybe in a case such as N flatpak apps from the same vendor.] [But see Drew DeVault's "Dynamic linking" (not about snaps).]
How is RAM consumption affected ? If I have 10 snaps that all have version N of a library, I'm told the kernel will see that and share the same RAM for that library. Suppose all 10 have SLIGHTLY different versions of that library, point-releases ?
Apparently at boot time there is a "mount" operation for each of your installed snap apps; see output of "systemd-analyze blame | grep snap-". But they're not actually slowing down startup: in my system, "sudo systemd-analyze critical-chain" shows about 1 msec due to snap stuff, and it's not those mount operations.
Many people complain that Snaps are slow to launch. Explanation paraphrased from /u/zebediah49: "Has to create a mount-point and mount up a filesystem, load up everything relevant from it -- and since it's a new filesystem, we've effectively nuked our cache -- and then start the application. In contrast to normal, where you just open the application, and use any shared objects that already were cached or loaded."
Daniel Aleksandersen's "Firefox contained in Flatpak vs Snap comparison"
I think Snap (and Flatpak) has no built-in crash-reporting mechanism, similar to ubuntu-bug on Ubuntu or abrt on Fedora. Something that gathered info for you and sent you off to the right place to report.
From people on reddit 4/2020 - 6/2020:
- closed-source server component.
- hard-coded canonical repos.
- limited control over updates.
- ubuntu pushes it in situations users feel it isn't useful (some default apps are snaps, apt can install snaps without the user noticing).
- a few technical issues, like long startup time when launching an app for the first time (I've even seen cases where the app didn't launch at all the first time), theming issues, a too-restrictive sandbox, etc.
- you can't move or rename ~/snap.
- there are some security functions such as limiting which directories the snaps can access; with development tools, having to redo your directory structures to accommodate draconian hard-coded paths is a PITA.
- it is entirely within the control of canonical / Ubuntu with the snapcraft store being the
only place to distribute snap packages.
[But you could download snaps manually from anywhere and install them with --dangerous.]
- it creates a bunch of virtual storage devices, which clutters up device and mount-point listings, and maybe slows booting.
- bloats system with unnecessary duplicates of dependencies both on disk and in RAM.
- snap allows designation of only one repo for all snaps; you can't list multiple.
- some people say snap introduces yet another variable into "why doesn't app X use the system theme ?"
- snaps won't function if the /home directory is remoted in certain common ways.
- snapd requires AppArmor [true], won't work under SELinux [means "SELinux alone, without AppArmor" ?].
- all snap-packaged programs have horrible locale support.
- snap software doesn't work with Input Method. That alone makes snap totally useless for me as I cannot input my native language, Japanese, to the snap-packaged software.
A package manager that is a constantly-running daemon (snapd) just seems wrong and un-Linuxy.
One under-handed thing that Ubuntu 20 does: the deb package for Chromium browser actually installs Chromium as a snap. IMO that's deceptive. If it's available only as a snap, don't provide a deb package at all.
Infrastructure-type additions that some people don't like: directory "snap" created in home directory, more mounts cluttering outputs of df/mount commands.
Apparently snapd requires use of systemd, and some people don't like systemd.
raymii's "Ubuntu bungled the Firefox Snap package transition"
Nitrokey's "NextBox: Why we Decided for and Against Ubuntu Core"
4/2020 I installed Ubuntu 20.04 GNOME, and let it use snaps
Ended up with software store and 4 more snap apps in my user configuration (~/snap), and a dozen more for all users (/snap). They seem to work okay, with one big exception: when a snap app needs to launch or touch another app (Liferea launching any downloader, or VS Code opening a link in Firefox). This either fails (Liferea case), or works oddly (VS Code opens new FF process instead of opening a new tab in existing FF process). But: KeePassXC is a snap app, and has no problem opening a link in existing Firefox process. [Later someone said: VS Code is specifying profile "default", so if you've changed to another profile, FF has to open another process. Let it open FF, then set your desired profile as the default, and next time VS Code will open link in existing FF process.]
Some people complain that Ubuntu's store app prioritizes snaps ahead of debs (re-ordering search results to do so), and even has some debs (Chromium) that start as a deb but then install a snap.
Heard: the Chromium snap is broken for 2-factor authorization keys (U2F). Reported since mid-2018, some fixes in pipeline, but still broken. Relevant: Ask Ubuntu's "How to remove snap completely without losing the Chromium browser?"
I'm told: Pop!_OS has adopted a no-snaps policy, Elementary OS has adopted a flatpaks-instead-of-snaps policy, Mint has a no-snaps-by-default policy.
The dev who packaged Liferea as a snap said fixing it is complicated, just about as I was giving up on the snap version and changing to the deb version. The deb works.
VS Code as snap had a couple of issues: won't open a new tab in existing FF process, and seemed to be interpreting snap version of node incorrectly (said "v15.0.0-nightly20200523a416692e93" is less than minimum needed version 8). I gave up, uninstalled the snap version and changed to the deb version. Worked.
The node-based FF extension I was developing can't contact Tor Browser. Removed node.js snap, and did "sudo apt install nodejs" and "sudo apt install npm". But that didn't fix the problem.
9/2020: Changed Firefox in my system from deb to snap. Flatpak and snap have almost the same versions in them: snap is a fraction more recent. I don't see a developer or nightly version available in either store/hub. Apparently to get the flatpak beta you need to add a flatpak beta repo. Did "sudo apt remove firefox", "snap install firefox", then copied profile from old place to new place; works. But then I started finding a host of bugs in Firefox, mostly having to do with pathnames.
An attraction of containers is that the app dev can build the image and set the permissions, and you report any bugs straight back to the app dev (no middleman). But I'm finding a lot of containers where the app dev has NOT built the image, some third party built it. Which defeats much of the purpose.
11/2020: Changed Firefox in my system from snap to Flatpak. Snap version just had too many bugs.
Changes Canonical/Snapcraft could make to eliminate many objections
- Have a "confirmation" step in any "deb that actually installs a snap" package.
Tell the user what is going to happen, get permission.
- Allow Ubuntu system owner to set policies such as "I don't want snaps in my system"
and "prioritize apt first" in the Ubuntu Software application.
- Open-source the proprietary part of the Snap store software.
See some details in Merlijn Sebrechts' "Why is there only one Snap Store?"
[A user could download snaps manually from anywhere and install them with --dangerous, but that's a bit of an ugly solution.]
From predr on Snapcraft forum 8/2020:
"github.com/snapcore
github.com/canonical-web-and-design/snapcraft.io
Only parts missing are server code, Amazon S3 buckets, snap signing (assertions), and database APIs. You won't find these things open-sourced in any good store, for a reason. Everything else is open source."
Response from Merlijn Sebrechts:
"Canonical's official position is that the store is currently woven into their own internal infrastructure. Open-sourcing it would require a massive effort to untangle this and they don't think it's worth the effort." - Have some kind of policy board overseeing the store, that includes outside people.
AppImage
For desktop and server; single-app; not sandboxed; Linux-only; no installed foundation.
Doesn't have the security/isolation features of other container systems, but does have the "all dependencies bundled with the app" feature.
AppImage
wikipedia's "AppImage"
Abhishek Prakash's "How To Use AppImage in Linux"
Alexandru Andrei's "What Is AppImage in Linux?"
AppImageHub
Joey Sneddon's "Find, Download, and Update AppImages Quickly with this Neat Tool"
Joey Sneddon's "Gear Lever"
Lubos Rendek's "Building a 'Hello World' AppImage on Linux"
Just find the site for an app you want, and see if they have an AppImage available, matching the CPU architecture you have. Download it and set execute permission on the file. Then run it.
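A sketch with placeholder names (the URL and file name will vary per app):
wget https://example.com/SomeApp-x86_64.AppImage
chmod +x SomeApp-x86_64.AppImage
./SomeApp-x86_64.AppImage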
See what AppImage apps are installed: "ls -d /usr/bin/com.*" ?
Heard on Linux Cast podcast: Each AppImage really should install a .desktop file automatically, so GUI user can launch app easily.
I don't know how AppImages handle system/DE theming. Is an image compiled for a specific DE, or uses its own theming, or what ?
The Changelog's "Please stop making the library situation worse with attempts to fix it"
Others
LXC
Wikipedia's "LXC"
linuxcontainers.org
Project home
Rubaiat Hossain's "Everything You Need to Know about Linux Containers (LXC)"
John Ramsden's "A Brief Introduction to LXC Containers"
Eric Kahuha's "LXC vs Docker: Which Container Platform Is Right for You?"
theorangeone's "LXC vs Docker"
Jordan Webb's "LXC and LXD: a different container story"
Antranig Vartanian article
LXD is a container management system which provides a VM-like experience using LXC containers. Each container (group of processes) can have a different (restricted) view of the system's process identifiers, network configuration, devices, mount points. An LXC container is not a VM, in that it just uses the host kernel, not an additional kernel.
From Debian Administrator's Handbook's "Virtualization":
Even though it is used to build "virtual machines", LXC is not, strictly speaking, a virtualization system, but a system to isolate groups of processes from each other even though they all run on the same host. It takes advantage of a set of recent evolutions in the Linux kernel, collectively known as control groups, by which different sets of processes called "groups" have different views of certain aspects of the overall system. Most notable among these aspects are the process identifiers, the network configuration, and the mount points. Such a group of isolated processes will not have any access to the other processes in the system, and its accesses to the filesystem can be restricted to a specific subset. It can also have its own network interface and routing table, and it may be configured to only see a subset of the available devices present on the system.
These features can be combined to isolate a whole process family starting from the init process, and the resulting set looks very much like a virtual machine. The official name for such a setup is a "container" (hence the LXC moniker: LinuX Containers), but a rather important difference with "real" virtual machines such as provided by Xen or KVM is that there is no second kernel; the container uses the very same kernel as the host system. This has both pros and cons: advantages include excellent performance due to the total lack of overhead, and the fact that the kernel has a global vision of all the processes running on the system, so the scheduling can be more efficient than it would be if two independent kernels were to schedule different task sets. Chief among the inconveniences is the impossibility to run a different kernel in a container (whether a different Linux version or a different operating system altogether).
From Teknikal's_Domain article:
LXC uses one additional program, lxd, and native features of the Linux kernel to orchestrate everything (kinda like Docker, but more extreme).
A Linux container, conceptually, is meant more as a general-purpose Linux environment, and is also, conceptually, simpler: a filesystem archive and a configuration metadata file. Yes, that's all there is to it. Every container is a full Linux userland: same systemd, same file tree, same everything. Unlike Docker images which are more meant to be specific to one "thing" at a time, a Linux Container is more like an entire VM that shares its kernel with its host.
...
... for LXC, a container is a cgroup/namespace combination: the cgroup to set up the container's resource limits, and a namespace that defines the container's boundaries and filesystem access and limitations.
All containers are is a specified filesystem mount, and a configuration specifying what to allow. ...
...
Containers literally use the same kernel running as the host, meaning for the most part, they're free to interact with the outside world as long as its within the bounds of their namespace. The only real control to give is extra filesystem mounts that are allowed into said namespace, and what network interfaces and network abilities are permitted within said namespace. One advantage of using kernel features like that is that resource allocations can be changed live, unlike a VM, or, for that matter, a Docker container, which, by defaults, has no upper limits unless explicitly stated.
Using LXC:
man lxc
man lxc.conf
man lxc.container.conf
cat /etc/lxc/default.conf
cat /etc/lxc/lxc-usernet
ls /usr/share/lxc/templates
sudo lxc-checkconfig | less
sudo apt install lxc-templates
ls /usr/share/lxc/templates
sudo lxc-create -t alpine -n test-container
sudo lxc-start -n test-container
sudo lxc-console -n test-container
# log in as root, no password
# no way to disconnect the console, have to kill the terminal ?
sudo lxc-stop -n test-container
https://linuxcontainers.org/lxc/getting-started/
https://www.ubuntupit.com/everything-you-need-to-know-about-linux-containers-lxc/
https://www.how2shout.com/how-to/how-to-install-and-use-lxc-linux-containers-on-ubuntu.html
https://www.redhat.com/sysadmin/exploring-containers-lxc
lxc-create -n foo -f /etc/lxc/default.conf -t /usr/share/lxc/templates/lxc-local
lxc-execute -n foo [-f config] /bin/bash   # run an application (as PID 1)
lxc-start -n foo [-f config] [/bin/bash]   # run a system (lxc-init will be PID 1)
lxc-ls -f             # list all containers
lxc-info -n foo
lxc-monitor -n ".*"   # monitor states of all containers
lxc-stop -n foo -k
lxc-destroy -n foo
Using LXD:
Vivek Gite's "Install LXD on Ubuntu 20.04 LTS using apt"
Alan Pope's "LXD - Container Manager"
Linux Hint's "The Complete LXD Tutorial"
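A minimal LXD session sketch (image alias and container name are placeholders):
sudo lxd init --auto         # one-time setup, accepting defaults
lxc launch ubuntu:22.04 c1   # create and start a container
lxc exec c1 -- bash          # shell inside the container
lxc stop c1
lxc delete c1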
[There seems to be a fuzzy dividing line between permission-controls (AppArmor, Firejail, SELinux, seccomp; see "Application Control and Security" section of my Linux Controls page) and containers (LXC, Snap, Flatpak, Docker). The former are facilities with permissions defined and stored in OS/framework, while the latter are packaging/bundling facilities with permissions and network configuration etc defined and stored in each package. Both of them have sandboxing/permissions and share one kernel among all packages/bundles/containers.
There are clear dividing lines between those and virtual machines (which have a kernel per package/image) and bundles such as appimage and Python Virtualenv (which don't have sandboxing/permissions).]
-
Python Virtualenv
Doesn't have the security/isolation features of other container systems, but does have the "all dependencies bundled with the app" feature.
Virtualenv
John Mathews' "Virtual environments and python versions"
Moshe Zadka's "What you need to know about containers for Python"
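A minimal sketch using the standard-library venv module (path is a placeholder):
python3 -m venv ~/.venvs/demo
source ~/.venvs/demo/bin/activate
pip install requests    # installed only inside the venv
deactivate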
-
Zero Install
Zero Install
Wikipedia's "Zero Install"
-
buildah
Seth Kenlon's "Build your own container on Linux"
-
Kaboxer
Kaboxer - Kali Applications Boxer
-
Apptainer (formerly Singularity)
Apptainer
man apptainer
-
Looking Glass
Use a Windows 10 VM (with GPU passthrough) on top of Linux, viewing its display in a host window with very low latency by relaying the guest GPU's frames through shared memory.
Requires two GPUs, one for the host and one for the VM ?
Looking Glass
-
systemd-nspawn
Run a command in a container, similar to a chroot. Can also boot a full OS tree, running the tree's own init as PID 1 (still a container sharing the host's kernel, not a VM).
"man systemd-nspawn"
Pid Eins's "Running a Container off the Host /usr/"
-
systemd's "Portable Services"
Pid Eins's "Walkthrough for Portable Services"
"Portable Services are primarily intended to cover use-cases where code should more feel like 'extensions' to the host system rather than live in disconnected, separate worlds."
"man portablectl"
-
Containerd and runc
Low-level run-times for containers.
Ivan Velichko's "Why and How to Use containerd From Command Line"
Cloud Native Computing Foundation - containerd
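containerd ships a low-level test client called ctr; a sketch ("demo" is an arbitrary container ID):
sudo ctr images pull docker.io/library/alpine:latest
sudo ctr run --rm -t docker.io/library/alpine:latest demo /bin/sh
sudo ctr containers list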
Bundling System Comparisons
Nitesh Kumar's "Comparison: Snap vs Flatpak vs AppImage"
OSTechNix's "Linux Package Managers Compared - AppImage vs Snap vs Flatpak"
AppImage / AppImageKit - "Similar projects"
Verummeum's "Snap, Flatpak and AppImage, package formats compared"
Merlijn Sebrechts' "A fundamental difference between Snap and Flatpak"
TheEvilSkeleton's "Some people think that the problems plaguing Snap also apply to Flatpak, but this is untrue"
Nicholas Fraser's "Flatpak Is Not the Future" (about a lot more than just Flatpak)
TheEvilSkeleton's "Response to 'Flatpak Is Not the Future'"
Liam Proven's "The rocky road to better Linux software installation: Containers, containers, containers"
From someone on reddit:
Snap is hard-wired to Ubuntu and does not contain basic libs that exist in Ubuntu.
Flatpak is designed to be cross-distro, and packages everything.
AppImage contains as many libs as its developer decided to put in it.
From /u/galgalesh on reddit 8/2020:
One of the issues with Docker is that confinement is all-or-nothing. You cannot give Docker containers curated access to host capabilities such as audio, serial, hypervisors etc.
Flatpak has the same issue as Docker in that it's very hard to give applications mediated access to specific host features. The xdg desktop portals are an attempt to solve this, but they require applications to be rewritten to use the new api's. As a result, most Flatpaks run unconfined.
...
Snap heavily uses the AppArmor Linux Security Module for confinement. This is on top of cgroups and namespaces. This allows them to give apps fine-grained permissions to access different capabilities of the host system. This makes some cool things possible:
- Applications like Docker, KVM and LXD can run in a secure container. As comparison:
You can run KVM in Docker, but you need to turn off the container security in order to do that.
- You can give an application access to a USB camera without giving it access to USB sticks.
- You can give an application access to play audio but not record audio.
Both Snap and Flatpak use XDG Desktop Portals which is a new api for applications to securely access things like the filesystem and the camera. This, for example, allows Flatpaks to access the camera without turning off the confinement. The downside is that applications need to be rewritten in order to use the secure api. As a result, most Flatpaks have much of the security disabled.
Because Snap uses AppArmor, it can mediate the existing Linux API's for accessing the camera and other things, so applications can run in a secure container without any code modifications. The downside of using AppArmor is that some distributions use a different Linux Security Module and you can only run one at a time. On Fedora, you have to choose: if SELinux is enabled, snaps will not be confined. If SELinux is disabled, snaps will be confined. Canonical is working with many other devs in order to put "Linux Security Module Stacking" into the kernel which will make it possible to turn on Snap confinement together with SELinux. This won't be finished for a long time, though.
...
> I'm really torn about centralization, or the gatekeeper concept
I personally think centralization and gatekeeping are important. Flatpak tried the decentralized approach initially, but they are now pushing Flathub much more because a decentralized approach has a lot of issues. Ubuntu tried the decentralized approach too, btw, with PPA's. Snap was explicitly centralized because of the lessons learned from PPA's.
With snap, there is no gatekeeping for the applications. There is gatekeeping for the permissions, however. Snaps describe which permissions they want to use, but they do not describe which permissions they are allowed to use. The default permissions are part of a Snap declaration. This is additional metadata also hosted in the Snap Store. Users can override the default permissions themselves.
When you publish a snap in the snap Store, it only has the permissions which are deemed "safe". For example, snaps do not have access to the camera by default because that is very privacy-sensitive. If your application needs a webcam, then you can either try to convince the user to manually enable the webcam or you can go to the Snapcraft forum and ask for additional default permissions. The Snapcraft developers then act as a gatekeeper, they decide which additional permissions are allowed based on a number of documented criteria.
I think this is a really good model. The current issue, in my view, is that Canonical is the only one who creates the criteria for additional permissions. I think this should be done by an independent committee instead, so that it can remain neutral. Right now, the Snapcraft developers are completely independent of the Ubuntu developers, so Ubuntu has no more power over the Snap Store than other distros. This is not enough, however. We really need an independent committee.
For comparison (AFAIK; I'm not an expert in Flatpak), the default Flatpak permissions are set by the Flatpak itself. So Flathub without gatekeepers would not be possible: it would allow anyone to have complete control over your system by publishing a Flatpak on Flathub.
...
> Snap and Flatpak are less secure than distribution-supported software
Indeed, the point of Snaps and Flatpaks is that the packages are created by the upstream developers instead of the distro maintainers. Traditionally, the distro maintainers would make sure that apps are "safe", and you lose most of this advantage by using Flatpaks and snaps. The advantage is that a lot more software is available.
But I think the comparison of "snaps" vs "repositories" is a bit misleading. Most users already install third-party software from PPA's, binaries from web sites, installation scripts etc. If you compare snap and Flatpak to PPA's, they are actually a lot more secure. Even if you completely trust the person who created the binary or the PPA, there is still the issue of stability. The worst a broken snap package can do is "not start". The worst a broken PPA can do is "make your system unbootable".
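To see the fine-grained snap permissions described above in action (a sketch, using firefox as an arbitrary example; camera and audio-record are real snap interface names):
snap connections firefox                    # list the snap's plugs and what they connect to
sudo snap connect firefox:camera            # grant camera access
sudo snap disconnect firefox:audio-record   # revoke audio-recording access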
From Luis Lavaire's "The future of app distribution in Linux: a comparison between AppImage, Snappy and Flatpak":
AppImage is not a package manager or an application store ... AppImage is just a packaging format
that lets anybody using a Linux distribution to run it ...
My evaluation
I wanted two things: bug-reporting straight to app developer (no distro middleman) and per-app permissions (in a way easier than AppArmor or Firejail).
As of 11/2020, I haven't gotten them.
Many container images are built by a helpful third party, not the original app developer. So this introduces a party of unknown trustworthiness into the chain, and just replaces one middleman with a different middleman. Sometimes there are several different images for a given version of an app, and it takes detective work to figure out which one you should try.
On permissions:
- Docker doesn't help me because I'm mostly not running server apps.
- Appimage doesn't do permissions.
- Flatpak's permission model is strange and has gaping holes ("portals"); see the override sketch below.
- Snap has a very limited permission set for files/dirs: an app can get all of home, all of system files, removable media, or nothing. And apparently it's going to implement "portals" too.
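For what it's worth, a Flatpak's permissions can at least be inspected and overridden per-app from the CLI (a sketch; the Firefox app ID is an arbitrary example):
flatpak info --show-permissions org.mozilla.firefox                 # what the app declared
flatpak override --user --nofilesystem=home org.mozilla.firefox     # take away home-dir access
flatpak override --user --show org.mozilla.firefox                  # show your overrides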
Container Managers (orchestration)
Merlijn Sebrechts' "What's up with CRI-O, Kata Containers and Podman?"
- Kubernetes (k8s):
A popular product for developing, managing, and deploying containers across many machines.
Really only useful for enormous deployments.
"Linux schedules processes on computers. Kubernetes schedules containers in data centers."
Nikhil Jha's "Kubernetes is a container orchestration system, but *that's not the point*"
Ivan Velichko's "Containers vs. Pods - Taking a Deeper Look"
Wikipedia's "Kubernetes"
Steven Vaughan-Nichols' "How to learn Kubernetes with Minikube"
Adnan Rahic's "A Kubernetes quick start for people who know just enough about Docker to get by"
Tanmay Shukla's "Kubernetes for beginners 2022"
Eric Wright's "A Kubernetes Primer"
VITUX's "Install and Deploy Kubernetes on Ubuntu 18.04 LTS"
Nived Velayudhan's "A visual guide to Kubernetes networking fundamentals"
Itamar Turner-Trauring's "'Let's use Kubernetes!' Now you have 8 problems"
Musing in Computer Systems's "Two reasons Kubernetes is so complex"
Canonical's MicroK8s
Some say k8s has terrible security, wasn't designed with security in mind.
Some say k8s has been hyped: falsely claimed to be what Google uses to run everything.
Martin Tournoij's "You (probably) don't need Kubernetes"
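For a small local taste via Minikube (a sketch; "hello" is an arbitrary deployment name):
minikube start                                     # single-node local cluster in a VM/container
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=NodePort
kubectl get pods                                   # watch the container get scheduled
minikube delete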
- OpenShift:
OpenShift
GUI front-end for Kubernetes and OpenStack ?
You could test-run it locally using MiniShift, which runs a single-node OpenShift 3.x cluster in a VM on your own machine (for OpenShift 4, the equivalent is CodeReady Containers).
- Rancher:
Stack for managing multiple Kubernetes clusters and more ?
- skopeo:
skopeo
- cri-o:
cri-o
- podman:
podman
Doesn't use a daemon.
Doesn't need root privileges IF the container doesn't need root privileges.
Heard on a RHEL podcast: RH moved from Docker to podman because of concerns about: not enough development/maintenance power inside Docker, security, and a split between the for-profit and community populations of Docker.
Darshna Das's "Using pods with Podman on Fedora"
Yazan Monshed's "Getting Started with Podman on Fedora"
Seth Kenlon's "3 steps to start running containers today"
Surender Kumar's "Podman Desktop, an alternative to Docker Desktop"
Mehdi Haghgoo's "Use Docker Compose with Podman to Orchestrate Containers on Fedora"
Run a podman container as a systemd unit:
- Create the container MYPOD.
- "podman generate systemd --new --files --name MYPOD"
- "systemctl --user start pod-MYPOD.service"
- toolbox:
Toolbox renamed to Toolbx ?
Runs podman under the hood; do "ps -e | grep podman" or "pgrep -a podman" on one terminal while running toolbox on another terminal.
toolbx
Ryan Walter's "A quick introduction to Toolbox on Fedora" (uses podman)
Sk's "Getting Started With Toolbox On Fedora Silverblue"
Daniel Schier's "Build smaller containers"
On CLI, run "toolbox create" to create a container, then "toolbox enter" to enter the container and get CLI inside it. Then you can do normal CLI commands in there. And you can install things inside the container, without modifying your host OS.
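In other words, something like this (a sketch; the package is an arbitrary example):
toolbox create            # make a container matching your host's Fedora release
toolbox enter             # shell inside; your home dir is shared with the host
sudo dnf install ffmpeg   # installed inside the container only
exit
toolbox list              # show toolbox containers and images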
- DistroBox:
Started as a project to improve upon Toolbox.
Ravi Saive article
TechHut video
Jorge Castro's "Declaring your own personal distroboxes"
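Basic usage looks like this (a sketch; the name and image are arbitrary examples):
distrobox create --name ubuntu-box --image ubuntu:22.04
distrobox enter ubuntu-box    # shell inside; home dir and many host resources are shared
distrobox list
distrobox rm ubuntu-box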
- Argo Workflows:
argoproj / argo-workflows
- Firecracker:
Firecracker
Julia Evans' "Firecracker: start a VM in less than a second"
Jack Wallen's "Monitor Your Containers with Sysdig"
Miscellaneous
Most of these containers have a good/bad feature: they allow your system to have N versions of library L at the same time. That's bad if many of those versions have security vulnerabilities. Better hope that the container's sandbox works properly.
Grype - A Vulnerability Scanner For Container Images And Filesystems
Qubes OS
An operating system with a lot of VMs, running on top of a Xen hypervisor. "Secure" in that the integrity of the OS is protected, apps are protected from each other, you can open dangerous documents and damage will be limited to inside a VM.
There are different domains. Xen runs in domain 0.
There are different types of VMs: disposable, vault (network-less).
There are Template VMs (e.g. Fedora, Debian, etc), App VMs, and Standalone VMs.
The Official templates are Fedora (I think no KDE) and Debian; the Community templates are Whonix, Ubuntu, Arch, CentOS, Gentoo.
Operations (e.g. copying files, cut-and-paste) can be done between VMs, but user needs to give explicit consent each time. Qubes has a Thunderbird add-on that opens attachments in disposable VMs.
You can run Windows 7 or 10 in a VM, with some limitations. You can use Katoolin to install Kali tools on a Debian Template VM.
There is a sys-usb service qube for handling USB devices (including microphone and camera), and you explicitly connect devices to VMs. There is a similar sys-net for network devices.
No: 3D acceleration for graphics / gaming, Bluetooth, MacOS, Android. A fair amount of the security configuration is CLI-only.
Need minimum of 16 GB RAM to run Qubes decently; 32 GB better ?
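Day-to-day control is via qvm-* commands (a sketch; "personal" is one of the default qube names):
qvm-ls                      # in dom0: list all qubes and their states
qvm-run personal firefox    # in dom0: start Firefox in the "personal" qube (starts the qube if needed)
qvm-copy somefile           # inside a qube: dom0 prompts you for the destination qube
qvm-shutdown personal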
Qubes OS
Micah Lee talk 9/2018 (video)
Dorian Dot Slash demo (video)
Russell Graves' "QubesOS: Silos of Isolation"
RIL Labs' "Installing Qubes OS"
Switched to Linux's "Qubes Installation and Configuration" (video)
Jesse Smith's failed install experience (maybe to avoid: "first boot with another OS, delete all partitions so entire disk is 'free space', and then install Qubes.")
Joanna Rutkowska's "Partitioning my digital life into security domains"
Unredacted Magazine issue 3 page 54: Wissam's "The Case for Switching to Qubes OS"
Micah Lee's "Qube Apps: a Flatpak-based app store for each qube"
Jon Watson's "Qubes, Whonix, or Tails: which Linux distro should you use to stay anonymous?"
Whonix's "Anonymity Operating System Comparison"
Jesse Smith's "Types of security provided by different projects"
Thomas Leonard's "Qubes-lite With KVM and Wayland"
Switched to Linux's "Qubes Tutorials: The Qube Manager" (video)
Switched to Linux's "Qubes Tutorials: Device Manager" (video)
Hardware Compatibility List (HCL)
Qubes OS Forum
/r/Qubes
From Joanna Rutkowska article:
"... the essence of Qubes does not rest in the Xen hypervisor, or even in the simple notion of 'isolation', but rather in the careful decomposition of various workflows, devices, apps across securely compartmentalized containers."
A response to Thomas Leonard's "Qubes-lite With KVM and Wayland":
What a great deep-dive into replicating some of the features in Qubes OS. I used Qubes OS for a year, and loved it. It did feel sluggish at times, and video conferencing had too much latency, passing the webcam and mic through a USB qube.
My new system doesn't boot Qubes OS, and I'm not technical enough to build or write my own. However, the ideas in Qubes seeped into my daily workflow. I have a reasonably powerful host system with lots of RAM running Windows 10 Enterprise and VMware Workstation. I keep the host OS as minimal as possible and do all my work in several Linux and Windows VMs. The separation of projects is nice, and VMware's rolling snapshot feature is a good safety net. I even have a disposable VM for surfing the web for research. Video conferencing in a VMware VM is not terrible. It's probably 80% of the benefit of Qubes OS with 20% of the hassle.
From post by Solar Designer:
In practice though, there are severe security risks even with Qubes OS. The in-VM systems need to be updated, and each update is a risk of bringing in malicious code. When most VMs are based off the same Fedora template, updating that means trusting all installed packages' Fedora maintainers. Any package can gain full control through a malicious pre- or post-install script, even if the software it installs is never explicitly used in a given VM. This means instant access to all VMs on next system restart.
From someone on lobste.rs 3/2022:
"There are only occasionally things I want to do but can't: run software that really needs access to a GPU, and share my entire desktop via teleconferencing software. Both of those are tasks QubesOS explicitly will not support - it's a feature, not a bug, in other words."
From someone on reddit 3/2022:
> Can I run applications, such as games, which require hardware acceleration?
Those won't fly. We do not provide GPU virtualization for Qubes. This is mostly a security decision, as implementing such a feature would most likely introduce a great deal of complexity into the GUI virtualization infrastructure. However, Qubes does allow for the use of accelerated graphics (e.g. OpenGL) in dom0's Window Manager, so all the fancy desktop effects should still work. App qubes use a software-only (CPU-based) implementation of OpenGL, which may be good enough for basic games and applications.
General concern (hasn't used Qubes) from Jim Salter on 2.5 Admins podcast 5/2022:
Any connection or shared filesystems you make between VMs, in the name of convenience, weakens the security of the system.
From someone on reddit 10/2022:
"Secure - you have to remember that the stock Templates that the VMs use are not hardened in any way (with the exception of Whonix) they are just stock installs. So running e.g. Debian in Qubes vs bare metal is still susceptible to the same flaws. Where they differ is persistence, in Qubes most of the OS (typically anything outside your $HOME) is ephemeral and when you shut down, the VM is reset the next time you start."
You can drastically decrease the memory footprint if you use minimal templates.
From someone on reddit 12/2022:
the maddening truth of using Qubes
Qubes is a security-oriented operating system. It's now fairly mature. It brands itself as being 'reasonably secure'. It's based on the Xen hypervisor, and Fedora, Debian and Whonix OSes. I'll let you do your own reading if you want to know more.
It's not without problems, though. Security is inversely proportional to convenience ( <- apparently from the first Unix handbook). This is a good summary of Qubes. I find it can really disrupt workflow.
This is an account of Qubes for prospective users.
I've been using Qubes for 5 years. I'm not technical, I'm not a power user. I am not using high-end hardware.
I run Qubes on a Lenovo Thinkpad x230 (with Coreboot and ME_Cleaner applied): i5-3320M CPU @ 2.6GHz, 16 GB memory (the maximum it can take), with an SSD.
I make comparisons to my second, less powerful machine, a Lenovo Thinkpad x230 i3-3120M CPU @ 2.5GHz, 12 GB memory, with an SSD. It runs a normal Debian-based distro.
It's important to know as you read this - if you don't already - that Qubes achieves its security by isolation. It gets that by compartmentalizing everything in separate virtual machines (VMs).
Irritations that make me question my choice to run Qubes:
Slow
Here's an example: I am launching my Firefox instance (with existing tabs from a previous session) in my Personal VM (virtual machine). This VM uses a VPN, so that will automatically launch as well to provide networking, as a prerequisite to Personal VM. Here's the timing:
- 0s Q-button > Personal > Firefox to launch
- 1m 10s VPN VM is up and running.
- 1m 55s Personal Firefox window appears, but tabs aren't yet reloaded.
- 3m 09s Uppermost Firefox tab now displays/reloaded content.
To test this, I've just done that all again but with a different VM, after a restart of the machine: 3m 09s again. Now for a third time: this time only 1m 28s. I can't point to any factor that could cause the difference.
That's an average of 2m 35s to reopen a web page in Firefox.
Firefox is an offender here. I have typically 10-20 tabs open at any one time and 10 extensions (Firefox Recommended).
Opening a Firefox instance with no extensions, in a fresh, stock Fedora-based VM (i.e. a regular disposable VM) that doesn't use a VPN to connect to the internet, takes 1m 32s to get to the Firefox start page.
If I open Personal VM again - but this time with the VPN VM already up and running - it takes 46s to start the machine, but Firefox displayed a fully functional tab at 1m 00s.
My 'normal' laptop takes mere seconds to open Firefox with a VPN running.
Boot up to 'ready' - including the disk decryption and the login - takes me around 3m 10s. Just for the record. Getting to a working VM from a powered-off stance can take around 6-7 minutes.
So there you have it: nothing is quick and sometimes it's very very slow. Security is inversely proportional to convenience.
Focus hijacking
Focus hijacking is what I call Qubes' habit of opening a VM's window right over the top of whatever you are working on. Because it takes so long to open things, and because it will open the window long before it has finished setting up the application (e.g. browser tabs), it's pointless to get on with something else while you wait. You'll just be interrupted, like someone shoving a newspaper under your nose while you are writing something. Maddening.
Updater
Qubes has a special app for handling updates (after all, it has at least 5 different 'guest' OSes to keep updated). Usually once a day there is an update to at least one of the distros Qubes uses (usually Fedora). The updater app is slow, clunky and resource-heavy. My fan starts running every time, and sometimes the system becomes visibly sluggish. [Someone responded: maybe because your CPU has only 2 cores.] The Updater requires a manual start, and once started it will run to completion - the 'cancel' button doesn't seem to work at all. It tends to run ~5-10 minutes (a guess, not measured). You have to deal with this pretty much every day. [Someone responded: just update once a week.]
Once you've updated, all the individual VMs that use that OS need to be restarted. They need to be shut down and started in order (e.g. application VM shutdown, then VPN shutdown, etc, etc then VPN restarted, then application VM restarted). It's so time-consuming that it's best to shut down and restart the entire machine. Since boot-to-ready takes me 3m 10s, getting back to where I was working can take around 7 minutes.
Connections
Qubes is built on clever isolations. It's probably no surprise that connecting anything is just harder in Qubes. That includes: VPNs, SSH, printers, cloud services (at least using them for backups) and USB devices. Some devices will just never work, it seems, like smartphones (I've tried 3). So don't throw out your other laptop!
Doing something new, trouble-shooting and problem-solving
Basically, Qubes adds another layer of complexity to whatever you are doing. Your resources for figuring things out are reduced to the documentation (not always what you need) and the community (good but has its limits). This compares with the oceans of Linux-relevant material available for almost every other distro - it's a big difference.
Lock-in (sort of)
With Qubes you split your activities across different VMs. With just a handful of these VMs, each with its own files, (e.g. documents, spreadsheets), and browser with bookmarks, histories, etc., you effectively have several independent computers' worth of stuff to back up. Were I ever to transfer it, not only would it be tedious to extract from Qubes, assembling all those Home folders and subdirectories into a unified system on a normal distro would be a real pain.
Qubes offers its own backup tool (again, it's not quick but I don't think we should expect it to be). It creates backup files that can easily restore to a new Qubes system. But browsing those backups and extracting information from a normal computer may not be so simple. Qubes specifies a particular archive software (available through most distro repositories) to accomplish this, but I have never done it.
There's also a conceptual lock-in. Going back to a normal distro is kind of like re-emerging on Flatland after a period in 3D. Gone is the separation of activities, no comforting security wall of virtualization, nothing. You open your password manager in a normal distro and it is on exactly the same system as your web browser - it makes you pause and think, 'is this safe?'. Indeed, was normal computing ever safe?
Nothing here is enough to make me quit Qubes.
I'd say if you are going to try Qubes, use a system that has more than 16 GB RAM. And it would be a mistake to not have a 'normal' computer around as a just-in-case.
Qubes is great, and reasonably secure, but all too often I find it inversely proportional to anything remotely resembling convenience. Perhaps these gripes can be viewed as the 'price of entry' to that security. That's your choice to make, but make sure it's an informed choice.
> Focus hijacking
That can easily be solved by using i3wm in dom0 and configuring it to only transfer focus on user command and ignore the mouse for focus.
> 3m 09s. To launch a web page. That sux.
To me it looks like there's only about 1~2m attributable to Qubes itself. Are you turning on multiple qubes in cascade by doing this?
Having them turn on at boot-time would mitigate that delay (there's a setting for that).
If I run a prepared debian-based firefox dvm (boot the dvm and start empty firefox) it takes me ~35s, which is a lot less.
> Boot up to 'ready - including the disk decryption and the
> login - takes me around 3m 10s. Just for the record.
> Getting to a working VM from a powered-off stance
> can take around 6-7 minutes.
That is a downside of turning on a lot of qubes at boot-time.
So instead I just have the sys-usb starting, nothing including networking usually. That'll be started once I have the main UI available, by starting one of the long-running qubes that does depend on networking and a few things.
> The updater app is slow, clunky and resource-heavy.
From qubes-qube-manager you can use a terminal-based updater instead using the "update" down-arrow. It will show what is happening during the update.
Regarding the heaviness, that isn't exactly the updater. It's because the GUI-based updater spawns disposable management qubes for actually doing the update using Salt to command the qube you're actually trying to update to do the updating, which means you have both the disposable qube and the qube to update booting, running and shutting down at nearly the same time. I think the terminal-window one directly calls Salt without an intermediary.
> Once you've updated, all the individual VMs
> that use that OS need to be restarted.
Not exactly, you just have to directly kill the dependency qubes like the sys-net and sys-firewall so you can restart only those.
I'd recommend not killing template qubes if you can avoid it. app qubes and dvm qubes are largely non-persistent outside of specific hierarchies that have nothing to do with updates and system function, while template qubes do persist all of that sensitive stuff.
Both the terminal updater and the GUI updater allow you to see what was updated, which can be useful as it determines whether you need to bother restarting all or even any qubes for the update to immediately take effect.
I recently switched over my work laptop to Qubes.
Here's things I've learned:
- Making the updater use the mandatory proxy at work, or even just updating packages
at work, is a massive pain in the ass / impossible. I've tried a bunch of different
settings to make the proxy configuration work, but I can't seem to find the correct
magic incantations to resolve my issue.
- Kernels. My god, I have to have at least two different kernels to make my system
function. Due to my network interfaces, sys-net requires 5.16 or newer, whereas only 5.15
was available by default. So I installed kernel-latest, so now I have 6.0.2. Unfortunately,
5.15 and 6.0.2 don't seem to be stable, and my system would randomly just lock up and restart
with them. 5.15 was less stable than 6.0.2, but still. Somebody posted that 5.10 works for them,
so I installed that, and that seems to work (for now).
- Graphics acceleration. ~Full HD works with just brute force, but going past that does not,
not even on 12th gen Intel. It's a pain in the ass also if you use more than one monitor,
and you don't set your minimum video RAM large enough. You'll be trying to give input
to a program/vm that just wasn't able to resize itself, and nothing works properly.
This is pretty annoying, but the 'not being able to play back full-screen video' is even
more annoying, because it has no available fix.
- Sleep/Hibernation. Sure, it works on "validated systems" but that's mostly only
older devices, and if you have a newer system, good luck with that, it could work,
or it could not. Just don't try to use sleep.
- USB devices. Yeah, phones have enumeration problems since trying to tether or whatever,
the phone sees the connection disconnect momentarily before it is reconnected to the VM.
Unfortunately phones just go "uhh nope" if they're disconnected. Trying to pair a unifying
dongle is also pretty painful. You can't do it in dom0, but if you attach the receiver
to a VM, the inputs no longer go to dom0 either, so you can't see your mouse cursor.
Better hope you have multiple input devices!
- Non-USB-based removable storage is difficult to handle, and there's the problem that
if you accidentally click fast enough on two different VMs in the 'connect to'-menu,
you could corrupt the filesystem when you connect it to two VMs. (this shouldn't be
possible, but thanks to a bug, it is!)
From someone on reddit 12/2022:
"I keep trying Qubes and quitting because it makes everything harder to do and slower. To me it makes more sense to segregate sensitive and day-to-day tasks to different machines and leave it at that."
From someone on reddit 12/2022:
"I don't even see a reasonable point in using Qubes. If you are technical enough to use Qubes, you would likely run a normal Linux distro hardened correctly."
Whonix
Whonix
VMs that you run on top of a host OS.
Separates the network access from the rest of the system, using two VMs.
Routes all network traffic of the Workstation VM through Tor/onion.
Drops all UDP traffic (Tor carries only TCP); you can't use UDP unless you install a VPN, which negates the value of onion routing (for that traffic).
Debian-based.
From Whonix Wiki:
"Whonix uses an extensively reconfigured Debian base which is run inside multiple virtual machines (VMs) on top of the host OS [which could be Windows, Linux, MacOS, more]. ... Whonix consists of two VMs: the Whonix-Gateway and the Whonix-Workstation. The former runs Tor processes and acts as a gateway, while the latter runs user applications on a completely isolated network."
So Whonix Gateway (and onion network) is not handling traffic from applications you run in the host OS; it's handling traffic from apps you run in the Whonix Workstation VM.
Typically the Workstation is running an Xfce DE.
Prateek Jangid's "How To Install Whonix on Ubuntu"
Mental Outlaw's "How To Use Whonix" (video)
What is an 'immutable' system ?
A better term would be something like "atomically-updated OS with read-only root filesystem". The OS, applications, and user files are not unchanging; they are just changed in very different ways. Usually the OS is updated as a unit, applications are in containers, and user files can be used as in any other distro.
From Mike Kelly article:
An immutable OS has a core system partition that is fully protected and mounted as "read only". You can't mess with it even if you wanted to. When you download an update, you're just getting a new big image, swapping it out, then booting into the new one. If something goes wrong, you just roll back to the old image and try again.
All of your files and apps live in a separate partition and are all little "containers" that sit on top of the base OS image. In this way, your apps and files can never take down your system. An app might break, but you'll always have a working system.
If this sounds familiar, it's because ChromeOS, Android, iOS, and even MacOS work like this now. ...
From someone on reddit:
There are a tonne of immutability approaches, such as image-based (such as Silverblue), snapshot-based (such as microOS, the one I'm using), A/B partitions (such as VanillaOS and Android), and almost-immutable (such as the old VanillaOS).
But all of these have problems: you can't modify image-based a lot, the snapshot-based ones aren't reproducible (and neither are those with A/B partitions), A/B takes a lot of space, and almost-immutable loses most of the benefits of immutability. But there is one more that (AFAIK) no distro uses yet: overlays, which I think Pop!_OS might go with.
...
NixOS is like learning a completely different OS.
I tried it but so many things were so different from a "regular" distribution that I couldn't keep using it. No hate, but it's just not my cup of tea, and I imagine it would be even harder for new users.
Solene Rapenne's "Introduction to immutable Linux systems"
Awesome Immutable
Fedora immutable
Silverblue (GNOME), or Kinoite (KDE), or Sericea (Sway), or Onyx (Budgie).
"Immutable" system. OS base files are updated in one packaging system (rpm-ostree), in whole-system updates. Apps are installed as Flatpaks, although native apps can be installed into the OS tree (discouraged). Then you have "toolboxes" (containers), which appear as various versions of Fedora.
Main selling-point seems to be increased stability because of the atomic system updates, separate package-systems, and containers.
From Fabio Alessandro Locati's "My immutable Fedora":
"... with an immutable OS, when the OS is running, the OS filesystem is in read-only mode. Therefore no application can change the OS or the installed applications."
Apparently /etc and /opt and others are read-write. And there is a way to install local packages that override OS packages.
From someone on reddit:
The general thought process is:
1) use rpm-ostree to install system packages. This might include packages for managing disks, adb, an alternative terminal, etc.
2) use toolbox, or a container, for non-system packages. This could include rsync, ffmpeg, python-img2pdf, a database, etc.
3) use flatpak for normal desktop apps.
"/opt and /usr/local are mutable directories (actually symlinks), so they're there for you to use as you see fit." So I can keep installing Thunderbird beta to /opt when I move to one of these distros.
Firefox comes installed in the base image. If you wanted to use a Flatpak of FF instead, you could write a .desktop file and set "NoDisplay=true". This'll effectively hide the Firefox icon. Then "alias firefox='flatpak run org.mozilla.Firefox'".
DorianDotSlash's "Fedora Silverblue could be the future!"
Josphat Mutai's "Manage Packages on Fedora Silverblue with Toolbox, rpm-ostree & Flatpak"
Muhammed Yasin Ozsarac's "How I Customize Fedora Silverblue and Fedora Kinoite"
SimplyTadpole / KinoiteSetup.txt
Yorick Peterse's "Switching to Fedora Silverblue"
/r/silverblue (restricted)
/r/kinoite (low-traffic)
/r/fedora
Fedora Discussion
From someone on reddit 3/2023:
You use this software a lot and you want to get it from the Fedora repo? You layer it [rpm-ostree].
You use this GUI app occasionally and you don't really care if it is isolated from the system? You install the Flatpak.
You need a CLI utility and don't mind a containerized env? You make a toolbox.
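The rpm-ostree part of that workflow, as a sketch (htop is an arbitrary package):
rpm-ostree install htop   # layer a package on top of the base image
rpm-ostree status         # show current and pending deployments
systemctl reboot          # the layered package appears after reboot
rpm-ostree rollback       # boot back into the previous deployment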
Vanilla OS
"Immutable" system. Was Ubuntu-based, now moving to Debian-based. GNOME DE, Apx package manager.
Installs apps (from many repos) as podman containers ?
Probably alpha-quality in 1/2023, installer may crash.
Jack Wallen article
Bobby Borisov article
TechHut video
Ubuntu Core
openSUSE MicroOS (configs: KDE, GNOME, server, container host)
rlxos
Endless OS
Self-hosting
Typical terms / cases:
- Home Lab - simple: test web servers, configuring file-sharing, hobby projects, etc.
- Home Lab - complicated: try clustering, Kubernetes, Ansible, etc.
- Self-Hosting: run a web server, NAS, media server, etc for real people to use.
- Dedicated system: DVR recording security cameras, alarm system, etc.
- Home Automation: control lights, HVAC, locks, etc.
Reasons to do it:
- Learn how to install and run servers and services.
- Share files/services among home users and family/friends.
- Resilience: (if on LAN) files/services still available if internet goes down.
- Resilience: (if in cloud) files/services still available if your house has a disaster.
- Control: avoid email/social accounts closed if service doesn't like something you said/did.
- Privacy: files/services kept on server you control.
Self-hosting is not for everyone. I live in an apartment with family, no room for server(s). And we all have different tastes in music / video / etc, no use for a shared media server. They refuse to use a password manager, so no opportunity for a shared database for that. Ad-blockers in browser work okay and keep working if we take laptops on a trip, no need for a Pi-hole.
Locations:
- On-premise: on a server in your house, on your LAN.
Requires: buying server, having space, electricity, cooling, maybe a UPS, maintenance, backups, maybe opening ports into your LAN.
Gives: best performance for users on LAN.
- Cloud: on a virtual server (VPS) or dedicated server in a data-center.
Requires: monthly fee.
Gives: best performance for users on internet; easy scalability; less maintenance.
Typical software:
- Services to users:
- Web server (nginx, Apache, more).
- Blog server (Wordpress, more).
- Email server. Email self-hosting
- Password manager server. (Bitwarden Unified article)
- RSS feed server.
- Chat/messaging server (Matrix, more).
- Game server (Minecraft, more).
- Photo/video gallery.
(Linux Unplugged podcast episode 409)
- Nextcloud: file-hosting, media-streaming, photo gallery, calendar, contacts, RSS, bookmarks, more.
- FreeNAS: file-hosting, media-streaming.
- Synology: file-hosting, media-streaming, video-recording. Large app-store, nice UI, lots of things are push-button and very easy-to-use. Maybe not for someone who wants to learn all the details of how to set up VMs and Docker etc.
- Plex: media-streaming.
- Jellyfin: media-streaming.
- Terramaster:
NAS.
Kevin Norman's "Declouding my life - Replacing Google Photos"
- Outbound VPN client (in router ?).
- DNS ad-blocking (Pi-Hole).
- Infrastructure:
- Proxmox: VM/container management platform.
- Unraid: NAS, app server, VM management platform.
- Inbound VPN server: remote client machine gets full access to LAN.
openSUSE's "Configuring a VPN server"
Anarcat's "Debating VPN options"
The Changelog's "Easily Accessing All Your Stuff with a Zero-Trust Mesh VPN"
Tailscale.
headscale
- Reverse proxy: routes inbound requests to appropriate servers. Nginx, HAProxy.
- DNS.
- Backup.
- Monitoring (Prometheus ? Netdata [but turn off Google Analytics] ? uMon ?).
- Intrusion Detection (IDS article).
- Router/firewall (pfSense, more).
- Identity management (OpenLDAP, Authentik, Keycloak, more) ?
From someone on reddit 2/2021:
Self-hosting lessons learned from over the years...
- Keep it simple
Every time I create a very complex setup it always comes back to bite me, one way or another. Complex networks, or complex configs, custom scripts and other hacks just add to complexity and make it difficult to maintain whatever it is you're putting together. Complex stuff also demands very good documentation so you can remember what the hell you did three months later when something stops working. If it's something simple, a few notes and a link to some official doc might get you going quick in the future.
- Enterprise hardware is not a must
I've bought used enterprise servers before, but the outdated CPUs and the power consumption costs made me realize I can do more with a lot less after I was annoyed and started researching alternatives. Back in 2020 one of my goals was to replace my enterprise crap with small/low-power servers, so I settled with Dell 5060 thin clients and a couple of APU4s from PCEngines. There are plenty of other options out there, NUCs are very awesome too. My only 2 enterprise servers are my pfSense firewall at home and my colocation server at a local DC because it was required in order to host it there.
- Take notes, document and add comments to config files
You don't have to be a professional tech writer, but simple notes related to each server, quick steps for replicating the config and some comments in your config files will definitely help you remember how stuff is running. When I change a config file somewhere, I usually add a note with a date and reason why, or quick explanation. When I go back to it 8 months later I don't have to try to remember why I did it.
- Not all tutorials and how-tos are of the same quality
A quick web search will give you tons of how-tos and tutorials on how to set something up. I've had the bad luck of following some of these in the past that had terrible decision-making, didn't follow best practices and was just all around a crappy tutorial, even if it was well written. Now I follow official documentation whenever possible, and might take a look at other tutorials for reference only. Not only that, tutorials can become outdated, whereas official docs are typically kept up by the devs.
- Everything behind firewall/VPN if at all possible
Opening up your services to the outside is risky for multiple reasons, and requires your stuff to be updated constantly, plus you should know about zero days, common exploits and mitigations, bla bla bla, etc. This is a huge time sink and if you have to be doing this kind of stuff, you should be getting paid for it :)
- Reverse proxy is awesome
A well-configured reverse proxy is an easy way to host multiple services behind a single server, public or not, and to me seems easier to manage than to have to keep track of all my stuff separately. It's also a cheap way to park domains, redirect domains and have auto-renewals for your SSL certificates (and to force HTTPS). My suggestions are Caddy v2 or Nginx Proxy Manager (nice little GUI). Good ol' NGINX by hand also works great.
- Adding new services out of necessity vs for fun
At certain points in time I've had tons of different services running, especially since there are so many cool projects out there. I am tempted to spin up a new VM/container for some new shiny app, but find myself not using it after a few weeks. This snowballs into a massive list of different systems to maintain and it will consume a lot of time. Now I only host stuff that solves a real big problem/need that I have, that way I only have to worry about maintaining a few things that are really useful to me and are worth the work.
- Backups
Have a good backup system, preferably located elsewhere than your main home lab. You don't really need to implement a full disaster-recovery system, but having copies of important config files, databases and your notes/docs is very useful. I run a lot of stuff in Linux containers, so snapshots and lxc backups are also very useful and can save you time if some change or update breaks something. And if you have those configs/files saved away also, it makes it even easier.
Deny access from all external IP addresses, then whitelist IP addresses you want to allow access from.
From 2.5 Admins podcast: if family/friends are going to log into services from outside the LAN, don't expose the various service login pages to the open internet. Instead, set up a Wireguard connection from each friend's machine to your LAN. That means anyone who needs to get in will have an automatic connection with a good installed credential/key, before they get to see any login pages.
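A server-side sketch of that WireGuard setup (every key, address, and port here is a placeholder/example):
# /etc/wireguard/wg0.conf on the home server
[Interface]
PrivateKey = <server-private-key>
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
# one [Peer] section per friend's machine
PublicKey = <friend-public-key>
AllowedIPs = 10.8.0.2/32
Then "sudo wg-quick up wg0" on the server, and forward UDP port 51820 through your router to it.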
If all you're going to do is file-sharing from server to clients, you don't need a full NAS such as FreeNAS. You can just use Samba or other standard facilities of your server's OS. Using a NAS would add things such as web UI for administration, management of ZFS pools, maybe VPN, plug-ins for other stuff.
TheOrangeOne's "Exposing your Homelab"
CyberHost's "Where to start with Self-Hosting - A Beginners Guide"
Joshua Gilman's "Self-Hosting Primer"
Mike Royal's "Self Hosting Guide"
George Gedox's "Intro to self-hosting"
TheOrangeOne's "LAN-only applications with TLS"
TheOrangeOne's "Securing public servers"
pwn.recipes' "Don't mindlessly follow security-gospel"
Josheli's "A New, Old Hobby: Self-hosted Services"
Leon Jacobs' "building a hipster-aware pi home server"
Hayden James' "Home Lab Beginners guide - Hardware"
set-inform's "Don't use a Raspberry Pi for that!"
Ctrl Blog's "What domain name to use for your home network"
TheOrangeOne's "Backing up and restoring Docker containers"
Marian Mizik's VPS / self-hosting series
reddit's /r/selfhosted
ServeTheHome's forums
Simple ways to get started:
Run all apps in same system:
YunoHost
Cloudron
Run apps in containers:
HomelabOS
Sandstorm
Inbound:
Reverse proxy: have one server (usually a web server) on the LAN handle lots of incoming requests from the internet on one port (usually 443) and route the requests to various other servers on the LAN, thus hiding internal details from external clients. Can do sophisticated things such as load-balancing, authentication, etc.
Port forwarding: rules in the router or firewall so incoming traffic to various ports gets redirected to particular IP addresses and ports on the LAN, thus hiding internal details from external clients.
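As an example of the reverse-proxy case, a minimal nginx server block (hostname, certificate paths, and internal address are all placeholders):
server {
    listen 443 ssl;
    server_name cloud.example.com;
    ssl_certificate     /etc/letsencrypt/live/cloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem;
    location / {
        proxy_pass http://192.168.1.20:8080;   # the internal service
    }
}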
Cloud hosting:
From people on reddit 2021:
Azure is for big business, and too complicated.
AWS and GCP also for business.
Self-hosters are better off with Digital Ocean, Vultr, or Linode.
Matt Fuller's "So You Want to Use the AWS Free Tier"
Expedited Security's "Amazon Web Services In Plain English"
AWS container diagram
JSLinux - bellard (run OS in your browser)