I like Linux very much; let's make it better !




Linux is suffering from Fragmentation



DistroWatch's "Search Distributions" lists 873 distros as of 7/2021 (though I quickly found about 5 more missing from the list, and elsewhere the site says "923 in the database"). 249 of those have a status of "Active" (elsewhere they say 274). "We receive an average of 2 - 4 distro submissions each week (every Tom, Dick and Harry has now created one) ..." An additional 150-170 queued distros are listed on DistroWatch's "Submit New Distribution" page.

DistroWatch database stats chart
Those numbers don't include corporate in-house forks of kernels/distros.
From Mike Melanson article:
... many of these organizations are caught in a seemingly endless cycle of trying to keep up with the latest updates, often spinning their wheels and expending effort to fix issues within their own forks of the Linux kernel. The struggle comes when trying to find a balance between staying up to date with all the latest updates, or just updating the most "important" updates. Part of the issue here, Cook points out, is that even the stable kernel releases, which provide "bug fixes only", each contain close to 100 new fixes per week. As such, many organizations find themselves more and more out of date on their particular branch, creating their own fixes.



Chart of 400 or so distros of Linux

Diversity is good and necessary, but we're not doing it in a smart way. Instead of copying everything about a distro to create a completely separate project, it would be better to stay within the original project and fork/add just a few things.

Do we need diversity everywhere ? How does diversity of package formats and package managers really help us ? With Linux phones struggling to be born, and late to the market, do we need to start with 20 separate phone distros right away ?

Desktop Linux is 400+ distros flying (as much as penguins can fly) in loose formation.

We should try to shift the culture toward some consolidation instead of everyone creating new distros and apps. Who needs 400 distros and 40 different tweak-OS-settings apps ? How about 20 and 3 ?

We shouldn't "stop" anyone from doing what they wish. But we should persuade the leaders of the major projects to put more effort into commonality, code-sharing, standards, APIs inside the system.

[Some argue that only a few distros and DEs really matter, and there's some truth to that, but still there are a lot of combinations: Dedoimedo's Linux distro dependency graph (from 2013, omits a few such as Elementary OS, and skips a lot of variables)]



Prices we pay because of fragmentation

  • One price we pay today for all the fragmentation is bugs and slow development.

    Have you run some of the standard GUI apps from CLI, and looked at the error messages that appear in the CLI window ? Assertion failures, broken pipes, use of insecure or deprecated APIs, more. The quality of many major apps on Linux is bad.

    Have you looked at the output of "sudo dmesg --level=emerg,alert,crit,err,warn" or "sudo journalctl -xp 0..3 -b" and seen all the error messages that appear, even in a "correctly working" system ? It's alarming.

    Linux laptop sleep
    Suppose much of the effort put into tweaking and packaging and testing and delivering and supporting many of those distros was instead put into bug-fixing in the couple of dozen major distros ? Bugs would get fixed faster. New features would get created faster.

    The complexity of the forking and upstream/downstream and LTS/rolling dichotomy means that bug-reporting and bug-tracking are harder than they should be. Often a report filed against, say, Mint is closed with "probably an upstream bug, you go figure out somewhere else to file it".

    One effect of dilution of effort is missing features. We have 20+ DEs. Most of them are missing good accessibility (Devin Prater's "Linux Accessibility: an unmaintained Mess"), and good internationalization. Even locale settings are incomplete in some cases.

    Today we have tremendous duplication and dilution of effort. It makes devs and the whole community less effective. It seems every Linux project is saying "we need more devs !". (e.g. Debian: "we need about 2-3 times our current work capacity" from article1, also see article2) Same thing the general computer security industry is saying. This is not sustainable.

    My sure-to-be-unpopular suggestions to Debian:
    • Work to bring derivative distros (and their devs) back into the main project. If some distro forked off Debian simply because they wanted a different set of default apps or something, maybe make "default set of apps" an install-time configuration choice (apparently a couple of mechanisms for this already exist: "tasksel" and meta-packages; see the sketch after this list). Reverse some of the forking, somehow. Get those devs back.

    • "Today there are over 61,000 amd64 binary packages in Bullseye, the forthcoming release." So push more of the work back onto the app devs: use flatpak or appimage or snap or docker instead of native packages, where possible.

    • I haven't thought this one through, but: Redefine Debian from distro(s) to platform. Have Debian concentrate on everything below the DE and distro-UI level, so the user has to go to a Debian-derived distro (e.g. Ubuntu or MX Linux or LMDE etc) to get an ISO, which has an installer, DE, default apps, settings, etc.
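
    A rough sketch of what already exists along these lines on Debian/Ubuntu (task and meta-package names vary a bit by release, so treat these as illustrative):

    tasksel --list-tasks                 # the task sets the Debian installer offers (desktop environments, server roles, ...)
    sudo tasksel install xfce-desktop    # install one of those sets as a single choice
    sudo apt install kde-standard        # or use a meta-package to pull in a DE plus its default apps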



  • Another price we pay today is confusion among vendors and potential new users. With Windows or MacOS, a user or vendor has a very small and easy choice of what to use or support. Then they can customize on top of that.

    With Linux, there are dozens of major distros and maybe 300 total active distros. A new user or vendor is faced with an intimidating variety. And whatever subset they choose to support, the other 80% of the community will criticize them. Easier just to avoid Linux than to deal with that. Linux has somewhat-poor support for some graphics, Wi-Fi, and Bluetooth partly because of that, I think.
    Year of the Linux desktop

  • Friction. I'm thinking of moving from one distro to another, probably to a different DE. But I like one or two of the default apps in my current distro; they're better than any alternative I've found. Turns out they're custom-built for my current distro (Mint), and built using things (XApps) that may not be available in other distros or DEs.

    Possible friction points, barriers to moving:
    • Your muscle-memory will be wrong: things will be in different places in the GUI, keyboard shortcuts may be different.

    • Some CLI commands (package manager and init, mainly) may be different.

    • Your favorite app from old distro may not be available on new distro, especially if it was one of the default apps written/forked specially for that distro.

    • Some system GUI apps may be quite different, especially the installer, the software center/store/manager, the update manager, the system settings manager.

    • Sort-of-bleeding-edge stuff (WireGuard, ZFS, Wayland) may be supported or not, especially by default and/or in the installer.

    • Some types of packaging (snap, flatpak) may be supported by default or not, especially by the software installer and updater.

    • Some of your little scripts may have to be fixed, because system files may be in different locations or daemons may have different names (see the example after this list).

    • Desktop icons or widgets may be different or unavailable if you change DEs.

    • If you're changing between LTS and rolling-release, or between distros of different ages, you may find differences such as Python 2 no longer supported, or standard repo contains older versions of apps than you used on previous distro.
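
    A small example of that script-breakage point: even the name of a service can differ between distro families, so a one-line restart command may not survive a switch.

    sudo systemctl restart apache2    # the Apache web server on Debian / Ubuntu / Mint
    sudo systemctl restart httpd      # the same daemon on Fedora / RHEL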



  • GUI inconsistency. Since various apps, tools, utilities and system features are parts of different projects, and often built using different frameworks, there is no consistent or easy "theming". A particular app or piece may be built on GTK (2.0 or 3.0), Qt, Java, Electron, other. Then it might be packaged inside some container (Snap, flatpak, appimage) potentially affecting settings. There is no one place to say "make the scrollbars for all things 20 pixels wide". The layouts and functionality of open-file and save-file dialogs may vary from app to app. Some apps that support printing have print-preview and print-settings dialogs, others don't.
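
    For example, the closest thing to a global "make scrollbars 20 pixels wide" setting is per-toolkit. A sketch for GTK3 only (the user-stylesheet mechanism is real; treat the exact CSS rule as illustrative) - Qt, GTK2, and Electron apps are unaffected:

    mkdir -p ~/.config/gtk-3.0
    echo 'scrollbar slider { min-width: 20px; min-height: 20px; }' >> ~/.config/gtk-3.0/gtk.css    # widens scrollbars in GTK3 apps only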

    Three-headed knight from Monty Python's Holy Grail
    Where are things such as "system language" managed ?
    Commands "localectl" and "locale", files /etc/locale.gen and /etc/default/locale and /etc/locale.conf
    Aaron Kili's "How to Change or Set System Locales in Linux"

    Where is "system date/time format" managed ?
    Command "timedatectl", files /etc/localtime (binary) and /etc/timezone.

    To make things easier for cross-platform applications, can we copy a facility from Windows or macOS or BSD ? I'm told no, they're even worse.

    Relevant Ubuntu bug report 2016-05: Graphical snaps don't honour the desktop theme


Paraphrased from Dmitry Vyukov's "Syzbot and the Tale of Thousand Kernel Bugs" (video) (9/2018):
Every new distro represents a forking/multiplication/replication of the existing bugs in the original code of kernel, user-space code, and apps. Many of the distros "handle" the huge steady flood of bug-fixes and security fixes from upstream by ignoring it: freezing on a specific release of the upstream distro or kernel. This keeps bugs (including security holes) in place for years, to bite people again and again.



From discussion 2/2020 on /r/windows:
> What made you switch back to (or come to) Windows
> as your primary system after using Mac/Linux?

I used to use both but realized if you want to consume multimedia content comfortably you have to have Windows.

Linux is great for servers and stuff like that but as a daily OS it sucks. I tried bunch of different distros but there always seemed to be an issue with drivers, apps or compatibility. Ubuntu, Mint, Debian variants, Fedora. They work ok but there always seem to be some form of tinkering requirements on a regular basis. Ain't nobody got time for that!

...

Linux is good if you want to waste 3 days getting your graphics card to work with 3D function. Seriously, f*ck Linux. Waste of time system on a PC.

...

Before switching to Windows 10 I tried giving Linux a final chance because I was going to wipe my system anyways. Ran into a driver issue with a RAID card. Downloaded the driver from Intel ... and it was in the wrong package format. F*ck Linux. Constant problems like that.

Edit; fix your goddamn stupid driver support instead of creating a new distro every week!

Linux audio

From someone on reddit 5/2020:
Linus Torvalds On Future Of Desktop Linux (video; see minutes 3-5) (6/2019)

The creator of the Linux kernel blames fragmentation for the relatively low adoption of Linux on the desktop. Torvalds thinks that Chromebooks and/or Android are going to define Linux in this aspect.



Linux

From Chris Fisher on Linux Unplugged podcast episode 358 at 1:02:15:
There are so many areas where it feels like you're running a desktop environment on top of a command-line environment which is running on top of a kernel. You can feel that stack, sometimes.



From someone on reddit 5/2019:
My opinion is that the very things that we Linux users love about our platform are the same things that have prevented it from becoming a real contender as a viable desktop alternative to Windows or macOS. Linux is all about choice. Unfortunately that choice has splintered Linux into 300 active distributions. There are now at least 18 desktop environments. There are over 40 music players alone to choose from. There are even more than 20 sys init systems. The choices go on and on and forks, which can be done by any person or group, add even more confusion. Can you imagine trying to manage a help desk for mainstream Linux users who are lay people who purchased a computer running Linux pre-installed? It would be a nightmare! Sure there are some hardware vendors who ship Linux systems but those are aimed for developers and Linux geeks like us, not for mom and dad.



From someone on reddit 9/2020:
Just because you can do something, doesn't mean you should. Often it would be better for the community if devs of projects would work together.

For example, there used to be 3 GTK-based Desktops: GNOME 2, Xfce, and LXDE.

When GNOME 3 came out, much of the community didn't like it. So as a response they made MATE, Cinnamon, Unity, Deepin, Budgie and Pantheon.

In a short time we went from 3 GTK-based DEs to 9 GTK-based DEs.

Seems a whole lot of repetitive work, and that 6 desktops really weren't necessary. If those devs had worked together on one or two new DEs instead of 6, how much better and more advanced could those be ?



From someone on reddit 9/2020:
> What do you dislike about Linux ?

The smug attitude that some Linux devs/users have that Linux is somehow superior to Windows or MacOS. It's not. It's forked into infinity with hundreds of different attempts to solve the same problem (and for the most part - failing). The UI is inconsistent, the look is mediocre at best, and the applications for the most part are appalling (at least on the desktop). Yet with all that crap - the Linux gang thinks they're the second coming of OS Christ. Get over yourselves, Linux is a tool, for some jobs it's a good tool, but it's not a religion, it's not the best, it's just a different approach to software that Windows and MacOS has mastered many many decades ago and Linux still thinks they will (maybe, but not today).



[Even what may seem to be a single project may not be:]
From someone on reddit 6/2020:
In GNOME, just like in most other OSS projects, there is no "leader" that decides while the rest listens. GNOME itself for example is just a collection of projects, where each maintainer for their own project decides what to do with it. They agree on some things like a release schedule, and try to follow the GNOME HIG, but that's basically it.



From Dedoimedo's "Linux Mint 20 Ulyana Xfce - Not quite there":
The startup sequence into the live session was relatively tame - I'm talking about what happens from BIOS to desktop, and so far we've seen every single distro doing it ever so slightly differently. Every single one. ...

...

Fonts remain a big problem. Among the 9,000 distros out there, one or two manage good, clear, crisp fonts out of the box. ...

...

the Welcome screen has its own window decorations that are different from the rest of the system. I guess some hybrid mix of GNOME and Xfce and Cinnamon.

...

... I am going to show you how to change font color in the MATE desktop, too, very soon. This is all very similar - GNOME, MATE, Cinnamon, Xfce. And funnily, they all require editing CSS files manually. In KDE, this is a simple, friendly, built-in thing ...

...

[Mint 20 Xfce:] ... The problem is, the differentiating factors by which the Linux desktop could once sway hearts and create hope - especially for wavering Windowsers - are long long gone. So having a decent desktop that checks some boxes simply isn't enough. Mint 20 Xfce is fast and does most of the basics reasonably well.

But then, the ergonomics are off, the printing thing is weird, the software selection can be better, there are quite a few rough spots, and at the end of the day, there are few super-awesome features that would distinguish this system over dozens of other Linux distros. But as long as there's no ultra-rigorous QA across the entire ecosystem, as long as even simple things like the boot sequence or fonts cannot be taken for granted, the Linux desktop will not be the "killer" replacement for Windows. ...

From Dedoimedo's "MX Linux MX-19.2 KDE - Missing in action":
There are already way too many distros, distro spins and distro editions out there. Roughly 90% too many. Even maintaining a single version can be tough, for small or large teams alike, and splitting thin resources to create an extra edition make things even worse. Finally, what's the actual benefit? Is this going to sway the Windows masses or revolutionize the desktop? ...

From Chris Titus's "Why I Hate Most Linux Distributions":
... Linux desktop is not a consistent experience because of the distributions it has. With too much choice, too many under-funded projects, and little quality control ... there will be problems. Users won't understand this foreign system, instructions will be out of date or non-existent, and support will be spotty at best.



Summarized from Frank Karlitschek talk 2019 Linux App Summit (video):
Desktops (GNOME, KDE) are fine, and were okay 10 or 20 years ago. Base apps (file managers, mail clients, browsers) are good. What is missing is the third-party app ecosystem.

Missing from / problems in Linux: central developer portal, stable APIs, consistent desktop APIs, consistent desktop functionality (e.g. no systray on some DEs), cross-platform toolkits and libraries (e.g. GTK on Windows and Mac), packaging (app release cycle gets tied to distro release cycle; snap/flatpak/appimage are promising but there should be only one; Electron is a symptom of devs avoiding Linux APIs).

Can we somehow merge/coordinate KDE/GNOME/Gtk/Qt a bit ?

Open app stores separate from distros (e.g. Flathub, but even better if federated).

Agree on one packaging format.

Integrate packaging into IDEs. You should be able to push a button in the IDE to package an app and publish it into N stores.

We (the world) need a free and open desktop. We need it for privacy, freedom.



My opinion: What should be the set of base distros ?

Some distros have unique fundamental features or directions that justify their existence:
  • Void (static linking)
  • Qubes (compartmentalization / hypervisor)
  • Tails (non-persistent, onion)
  • Whonix (dual VMs, onion)
  • Kali (run as root, special network stack ?)
  • LFS, Gentoo (build from source, learn)
I'm sure I'm missing some that are unusual / unique in some way.

Others are pretty fundamental for reasons of organization or company or heritage:
  • Debian
  • Ubuntu
  • Arch
  • Manjaro
  • Red Hat
  • Fedora
  • Slackware
  • SUSE
IMO the rest should feel some pressure to un-fork, to merge back into the base and become install-time or config-time options, maybe just a check-box that gives you a particular DE and set of default apps. Today we have tremendous dilution of brands (an obstacle to potential new users and vendors) and duplication of effort (all of these duplicate web sites and ISOs and installers and repo maintenance and bug-trackers and forked apps etc).

DEs should be separate projects. KDE, GNOME, Cinnamon, Xfce, MATE, etc. Any distro can let the user choose at install-time among supported DEs.

Other features should be available in all distros and let the user choose to enable/disable them at install-time: snap, flatpak, docker, Wayland.

Other features have won their wars and should just become standard across all the major distros: systemd. Rip out the old code to simplify things.

So, for example, instead of having a separate distro Mint Cinnamon, I would move the unique changes of Mint (installer, eCryptfs, more) back into the Ubuntu source tree and bug-tracking system, and have them appear to the user as install-time options. Move the forked changes of Mint apps (Nemo, Pix, etc) back into their original apps source trees and bug-tracking systems, and have them be build-time options. The Cinnamon DE should come from the Cinnamon project.

I'm not talking about forcing or preventing people. I'm talking about persuading the leaders of distros and projects to consider a different emphasis.

One seemingly-stalled effort: Linux Standard Base



Dedoimedo's "The Year of the Linux dissatisfaction"
Dedoimedo's "Expanding Linux desktop market"
Tobias Bernard's "There is no 'Linux' Platform (Part 1)" (and read the comments)
Tobias Bernard's "There is no 'Linux' Platform (Part 2)" (and read the comments)
Steven J. Vaughan-Nichols' "I love the Linux desktop, but that doesn't mean I don't see its problems all too well"
Christian F. K. Schaller's "Getting rid of the need for the usecase Linux distribution"
Chris Titus's "Why I Hate Most Linux Distributions"
Emmanuele Bassi's "Dev v Ops"
Ask Noah podcast, episode 152 "Too Much Choice" (audio; mainly minutes 8-20)
The Linux Cast's "5 Things About Linux That Actually Suck" (video)
The Linux Cast about why not standardize on Calamares installer (video)
DistroTube's "Linux Desktop Kinda Stinks. How Did We Get Here?" (video)
Dmitry Vyukov's "Syzbot and the Tale of Thousand Kernel Bugs" (video) (mainly minutes 7 through 11)
V.R.'s "systemd, 10 years later: a historical and technical retrospective"
Kees Cook's "Linux Kernel Security Done Right"
We have met the enemy, and he is us.



Another facet: Linux has a "hoarding" problem

We keep adding new stuff without ever getting rid of old stuff. So the junk keeps building up, the overall system keeps getting more complex, the available resources keep getting more diluted, and there's more and more duplication of effort.



Penguins

Areas that should be "consolidated" a bit

  • Distros.

  • DEs.

  • Package formats.

  • Package managers.

  • Default or standard apps (file explorer, text editor, image viewer, music player, etc).

  • Init / event systems (systemd, cron, Network Manager, etc).

  • Security systems (or at least the UI for them).

  • Container systems.

Installing / updating / package management is far too fragmented (see for example DistroWatch's "Package Management Cheatsheet"); we need a couple of standards, and should push the various apps and services to use those. Relevant (but 2014): Pid Eins' "Revisiting How We Put Together Linux Systems"
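
For a quick illustration of the fragmentation, the same operation looks different in each major family (and package names can differ between repos, too):

sudo apt install htop        # Debian / Ubuntu / Mint
sudo dnf install htop        # Fedora / RHEL
sudo pacman -S htop          # Arch / Manjaro
sudo zypper install htop     # openSUSE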

Venam's "A Peek Into The Future Of Distros"



How to do it

"Some consolidation" is not something one developer can do. We need to change (by persuasion) the culture, the attitudes, of the major devs and managers in the community.

...

Many people don't like to hear this. "HE'S SAYING NO ONE SHOULD DO ANYTHING NEW. HE WANTS TO STOP ME FROM DOING WHAT I WANT. HE WANTS LINUX TO BE LIKE MICROSOFT OR APPLE. HE'S EVIL, BURN HIM !"

Actually, what I'm saying is that the adults, the managers and devs who do the big work and run the major distros and projects, should think about ways to consolidate things a bit. For example, do Ubuntu, Lubuntu, Xubuntu, Kubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Cubuntu, Fluxbuntu, Ubuntu Mini Remix, Linux Lite, Mint and many more (see Ubuntu wiki's "Derivatives") all need to be separate distros, or can they be one with various install and config options ? There would be a benefit to the community, in terms of mindshare and bug-fixing etc, if they could be one. Maybe there are technical reasons they can't be; I'm no expert. And I'm sure there are organizational/political/legal/strategy conflicts that would prevent some of this. But I'm putting forth the idea. Having 400 distros (see GNU/Linux Distributions Timeline) imposes costs and holds back Linux.

If all the *buntu*'s and Mint*'s and Elementary OS and Pop!_OS and Bodhi and Zorin OS and Peppermint OS and more became one distro "Ubuntu+", then when you fix a bug in that one distro, it's fixed in all the combinations. One distro name ("Ubuntu+"). One installer. One set of release images. One repo. One set of tests. One bug-reporting and bug-tracking system. One set of documentation.

Apparently many distros (including Debian and ArcoLinux Xtended and Calam Arch Installer and Endeavour) do something like this ? Near the end of the installer, it gives you a list of available DEs and says "pick one". One ISO and installer for all N configurations.

[From someone on reddit 6/2020:
"openSUSE lets you try different DEs just by logging out. There's only one distro openSUSE and it comes with KDE, GNOME, Xfce, Enlightenment, Mate, LXDE, LXQt, and more."]

From Jesse Smith's "Where are the Fedora-based distributions?":
"... people don't make full, independent projects based on Fedora because people who like working with Fedora create 'spins' or 'labs' which are incorporated in the Fedora infrastructure. If you want to throw together your own spin of Fedora with a different desktop, theme, and tools, then you can create a spin and host it with the Fedora project. Fedora Workstation, Server, Silverblue, CoreOS, and over a dozen spins & labs all live together in the same infrastructure so people don't think of them as separate projects."

Christian F. K. Schaller's "Getting rid of the need for the usecase Linux distribution"

Some of the biggest problems are political. I'm sure one reason that distro Y forked off from distro X was that the Y devs/managers didn't agree with decisions made by the X devs/managers. They argued, split, and a fork happened. Merging back in, or even submitting changes back to upstream, would be very difficult.

...

We need variety and choice, but a reasonable level of it. We never should prevent random person X from creating a new distro. But we need more focus among the majority, the core, of the community.



Suppose other areas of the Linux/GNU ecosystem were more like the kernel and GNU ?

The Linux kernel, GNU, and util-linux generally work pretty well and don't have a lot of duplicate effort and forks etc.

Why is that ? Because each has a single owner and standard. This does not eliminate all "choice"; the kernel has pluggable drivers and modules. And it does not kill innovation; the kernel gets new features, new CLI commands get added.

So, suppose other areas of the Linux/GNU ecosystem were handled the same way ? Suppose there was an agreement that systemd was the only init system, and there was a clear central owner of systemd ? It has modular plug-ins, you can innovate on top of systemd, you can add units. The major projects all agree to (over time) rip out any old init structures and only use systemd.
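
To make "you can add units / innovate on top of systemd" concrete, here is a rough sketch of extending it without forking anything (service names differ by distro: ssh.service on Debian/Ubuntu, sshd.service on Fedora; myapp.service is a made-up example):

systemctl cat ssh.service          # show how a packaged unit is defined
sudo systemctl edit ssh.service    # add a local drop-in override without touching the packaged file
systemd-analyze verify /etc/systemd/system/myapp.service    # sanity-check a unit you added yourself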

Now, someone could refuse to accept this, and use their own non-systemd init system. But over time they would find fewer and fewer apps and devs and base distros supporting that. The costs of being different would get higher. Just as if they forked the Linux kernel and changed it, and based their distro on that forked kernel. Nothing stops them from choosing to be different, but they'll be fighting against the tide.

Similar with package formats. Suppose Red Hat and Canonical and Debian etc were to get together and say "look, let's try to reduce our differences. let's add the best features of rpm/dnf packaging to dpkg/apt, and then we'll all use the enhanced dpkg/apt, and eliminate any support for the old formats and managers".

Each of these changes would take many years. It would not be an overnight change. But with a clear new standard, slowly people/apps would adopt the new standard.



Corporate funding idea (you won't like it)

Suppose Red Hat was to take a tiny chunk of its billions and say to the Fedora, CentOS, Qubes OS teams: "we will fund you to help port your best features and apps back into base Red Hat, and try to reduce the deltas between our distros. we will allocate some of our devs to help you."

Suppose Canonical was to take a chunk of its millions and say something similar to the Mint, Zorin OS, Elementary OS, Whonix, Pop OS teams: "we will fund you to help port your best features and apps back into base Ubuntu, and try to reduce the deltas between our distros, let you become more like 'flavors' of Ubuntu. we will allocate some of our devs to help you."

Suppose Red Hat and Canonical and Debian etc were to get together and say "look, let's try to reduce our differences. let's add the best features of rpm/dnf packaging to dpkg/apt, and then we'll all use the enhanced dpkg/apt". And the effort was staffed by employees of the corps, or funded by the corps.

Google and Microsoft and Apple have tons of money, and some stake in the success of Linux. Any way to tap their funding to implement some consolidation and increased commonality ?



Some say desktop Linux's small market share is not a problem

Do you say the same about Firefox's small and declining market share ? There seems to be a lot of angst over Chrome/chromium/Google being in a position to dominate internet standards.

If you're okay with a tiny market share, then you forfeit any right to complain about vendors not supporting desktop Linux. MS Office, Adobe, AutoCAD, NVIDIA, game developers, etc.

If you're okay with a tiny market share, then you forfeit any right to criticize companies such as Canonical for scrambling for revenue, by partnering with Microsoft or Amazon or whoever. Like it or not, for-profit companies do much of the development of Linux, and they have to make a profit somehow.

More users of desktop Linux would mean more potential devs, more eyeballs on the source code, fewer bugs and security holes.

Desktop Linux's market share may be poised to get smaller, as Microsoft presses ahead with things such as WSLg (run Linux GUI apps in a VM under Windows) and cross-platform development tools that let a server-Linux developer run Windows as their daily driver. More corps and devs may decide there's no longer a reason to allow/run actual desktop Linux anywhere.

From Matthew Miller, Fedora Project Leader and Distinguished Engineer at Red Hat, on reddit 6/2021:
> Biggest factors making Linux hard for the general public to accept?

Well, there's no money in a desktop operating system for its own sake. So, it's hard to get the level of investment required to really make it slick, polished, and 100% trouble-free. The general public doesn't really want an operating system, or even a computer. A computer is a horrible nuisance that people put up with in order to get the things a computer can give them: tools for communication and creation.

People looking to make money from a desktop OS need to have some other angle -- either constantly selling you something else, or selling you. The Linux desktop I care about and want isn't going to do either of those things.

I've been saying for years that as more and more consumers who just want a device which gives them those tools without a hassle move to just working on their phones and tablets, the share of Linux among people who actually want a computer will go up, and I think we're definitely seeing that among programmers, engineers, students, and gamers. Will that translate eventually to the general public? Maybe not, but that's okay. World domination isn't the only definition of success.






Secure because Linux



Don't expect perfect security just because you're running Linux. You're still relying on a lot of applications and other software to be well-behaved.



Complexity and bugs

From /u/ninimben on reddit:
"in 2017 the kernel had 454 CVE's which is more than one a day. in 2018 they had 170 which is in the ballpark of one every two days or so" [But these probably are greatly undercounting the actual serious bugs, maybe by a factor of 10x or more; most bugs are not assigned a CVE number. Not all serious bugs are externally exploitable, of course. But they are bugs.]

From Dmitry Vyukov's "Syzbot and the Tale of Thousand Kernel Bugs" (video) (9/2018):
Conclusions from fuzz-testing on the Linux kernel:
"Every 'looks good and stable' release we produce contains > 20,000 bugs.
No, it is not getting better over time.
No, this is not normal."
and
"The kernel does not have a bug-tracking system per se. There are mailing lists with traffic in them."
[Apparently Kernel.org Bugzilla is not well-used.]
But danvet's "Why GitHub can't host the Linux Kernel Community"
I don't buy "GitHub can't scale big enough to handle the kernel": see Craig Smith's "GitHub Statistics and Facts". Maybe some added discipline or structure would be needed for the kernel to use GitHub, but GitHub itself scales.
Lack of bug-tracking

Follow-on talk: Dmitry Vyukov's "LPC2019 - Reflections on kernel quality, development process and testing" (video) (11/2019)

From Artem S. Tashkinov's "Major Linux Problems on the Desktop, 2020 edition":
Critical bug reports filed against the Linux kernel often get zero attention and may linger for years before being noticed and resolved. Posts to LKML oftentimes get lost if the respective developer is not attentive or is busy with his own life.

From "Re: [CVE-2020-14331]" 7/2020:
... the fbdev, vt, and vgacon kernel subsystems. These subsystems aren't actively maintained (receiving drive-by fixes only), and the kernel developers recommend to not enable these subsystems if you care about security ...

[Those are drivers and console subsystems. If they were built as loadable modules, they should show up in the output of "lsmod"; on many distro kernels they're compiled in instead, so checking the kernel build config is more reliable.]
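
A hedged way to check whether your kernel uses them (the config-file path assumes a Debian/Ubuntu-style /boot layout):

lsmod | grep -i -E 'fb|vga'                                        # framebuffer / VGA modules, if any are loaded as modules
grep -E '^CONFIG_(VT|VGA_CONSOLE|FB)=' /boot/config-$(uname -r)    # =y means built into the kernel, =m means built as a module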

Counts of "todo" and other words in the kernel source code

As of 1/2020, the kernel had about 28 million lines of code, although most of that is drivers, and code "around" the central kernel. article

A visualization of complexity, not sure it's a problem: Kharacternyk / pacwall



There seem to be serious design flaws or gaps

Linux X-ray
  • A long-standing security issue in the standard Xorg/X.11 display server system used for decades: StackExchange's "Why has Ubuntu 18.04 moved back to insecure Xorg?". TLDR: Nothing stops a Linux GUI application from spying on all the events/keys input to other applications, or even injecting events/keys into the input queues for other applications. Keylogging, essentially.
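
    A hands-on illustration (works under Xorg; Wayland compositors block it): any unprivileged process can watch keystrokes destined for other windows, using nothing more than the stock xinput tool. "KEYBOARD-ID" below is a placeholder for whatever id "xinput list" reports for your keyboard.

    xinput list                # find the id of the keyboard device
    xinput test KEYBOARD-ID    # prints every key press/release, no matter which window has focus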

    From /u/aioeu on reddit 7/2020:
    > What was wrong with X that motivated Wayland?
    > I read somewhere that the code base had become unmaintainable.

    I don't think the long-term maintainability of X is really a major reason for Wayland. After all, Xorg is still being maintained.

    But there are other reasons for Wayland:

    • Security in X is largely non-existent. Any client can modify any window on the display, or eavesdrop on events on any window. This is really fundamental in the design of X - it's how window managers work, how programs that let you define hot keys work, and so on. Things like devilspie are only possible because of X's lax security.

    • X is not a good fit with how modern graphics hardware works. A lot of X was designed with the idea that 2D graphical primitives would be hardware-accelerated (stippled lines, oh my!). That may have been the case in the 90s ... not so much now. Modern hardware doesn't even bother accelerating those things.

    • While the supposed network transparency of X is often lauded, it really doesn't work very well over anything with a bit of latency. The X protocol is quite chatty, and a lot of operations require synchronisation between client and server.

    • X has a lot of legacy cruft that can't be removed. The core X protocol isn't even used by most X applications using modern toolkits (they typically use the XRender extension instead), but the core X protocol needs to be there because it's literally "the core X protocol", and there are a few programs that do use it. Heck, my day-to-day text editor uses Motif and still relies on the core X protocol.

    • A lot of hacks on X are really, really bad hacks. I'm amazed that drag and drop ever works at all.

    I should add that there have been a few successes in removing the most egregious parts of X. X has historically had hardware drivers in userspace - it couldn't actually rely on the most useful features of your hardware having kernel APIs, right? - but as you can imagine that's a terrible idea all round. It's the reason X had to be run as root. But a lot of those hardware drivers have been removed now.

    Xorg also once had a print server. After all, if you can render a window to a screen, why not also use the same code to render paper documents? Xprint was only removed after somebody added support for it to glxgears ...
    From /u/Sh4dowCode on reddit 7/2020:
    > Wayland and Xorg. Difference ?

    Wayland is a protocol, while Xorg is a display server (using X11 protocol).

    Wayland is the "new" thing and works differently than X11. Back when X11 was created, it was common to have one powerful (time-sharing) server and multiple clients that are connecting to it. Each of those clients had a server that was speaking X11, and the server could say "render a line at xy to xy, draw a circle at xy with radius r". Nowadays those X11 primitives are practically not used any more, but they still need to exist because what if some app needs them. (e.g. xclock uses those) Over the time X11 got a lot of extensions, one of them was XRender and allowed pushing a bitmap(-image) over the X11 protocol. This ability is basically what every application uses to render itself, because it gives you a lot more freedom in how you design your stuff. Issue is, that all the bitmaps are going through a socket. And while today the xorg server and x-clients (applications) are on the same machine, it still creates performance and memory overhead. Also any x-client can grab the entire screen, get keyboard events etc, so it's not secure.

    Wayland tries to fix this, by using shared memory. Meaning the wayland client just "sends" a memory location to the server saying where the window "bitmap" is located. Also in Wayland a window is its own thing and it doesn't know what other windows are open.

    Your window manager / desktop env in xorg is just another client to the xorg server, with the same permissions. Now in Wayland the window manager / desktop environment is the Wayland server, so it decides what to render and then just gives the finished screen to render to the Linux kernel.

    To run legacy X11 clients on Wayland there is XWayland which is basically an X-Server that takes the data provided by x-clients, renders it to a bitmap, and then gives them to the Wayland server.


  • Apparently the X display system is based on networking, so I can't make Firejail or AppArmor turn off networking for some apps that, on the face of it, have no need for network access.

    Someone said "Try running X with --nolisten tcp/udp". In Mint, I think I'd add "nolisten tcp/udp" (see "man Xserver") to "/etc/X11/Xsession.options". But I think that would just kill lots of apps.


  • Also no way in Firejail or AppArmor to say "restrict networking to just domain D, and/or localhost, and/or LAN addresses" ? Some apps (such as sudo ?) do a DNS lookup of the local hostname; I want to allow that kind of thing while denying external access.


  • As far as I can tell, networking/VPN/firewall needs a major redesign. Try to figure out why your system is using a particular DNS, or stop it from doing so ( article1, article2 ). Try to figure out if your system ever does a single network access that bypasses the VPN at any time, from boot to shutdown. Try to figure out what happens if your VPN connection goes down, and get informed if your public IP address changes. You'll find yourself lost in a sea of modules and layers, from /etc/nsswitch.conf and /etc/resolv.conf to systemd (systemd-resolved can be in one of four different modes; systemd-networkd may or may not be running) to avahi to BIND to /etc/hosts to dnsmasq to Network Manager and nm-connection-editor and nmcli and VPN and iptables and netfilter and ufw and gufw and docker and more. This thing overrides that thing, this falls back to that if configured this way, three things add rules to iptables as they start up, etc. There even are two forms of GUI for Network Manager in Ubuntu: one through Settings / Network and another through "sudo nm-connection-editor", and they feature-overlap about 95%.
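
    For illustration, just answering "which DNS server is this system actually using ?" can require checking several layers (commands assume a systemd / NetworkManager setup):

    resolvectl status                # systemd-resolved's view, if it is running
    nmcli device show | grep DNS     # what NetworkManager configured per interface
    ls -l /etc/resolv.conf           # often a symlink into systemd-resolved or NetworkManager territory
    cat /etc/resolv.conf             # what old-style resolver code will actually read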


  • There seem to be several areas where old and new architectures, including old and new device names, old and new stacks, old and new utilities, are co-existing or pasted together:
    • Networking/VPN/firewall/DNS/hostsfile/Bluetooth/Wi-Fi (see previous item).
    • Audio (OSS, ALSA, pulseaudio, PipeWire, WirePlumber).
    • Crypto/certificates/authentication/PAM/keyring.
    • Init/systemd/cron.
    • Logging.


  • Repeated security issues in screen-saver screen-locking ?
    Jamie Zawinski's "I told you so, 2021 edition"


  • Installation is done in many different ways by various apps or components.


  • Updating is done in many different ways by various apps or components; see my updating comments.


  • Try running some GUI apps from the CLI instead of the normal way (clicking on icon). After using them and quitting, look at the CLI window. Chances are you will see failed assertions, broken pipes, and other alarming things. On Mint 19.1, at least Firefox, Chromium, ShowFoto, xed (the default text editor), NetBeans IDE, OWASP ZAP do this. I think there's something wrong/changed with Java: every Java app throws "illegal reflective access operation" as it starts. Look in ~/.xsession-errors files, and see various alarming errors.


  • Try running "sudo journalctl -p 0..3 -xb" or "sudo grep -i -E 'error|warn' /var/log/*g | less", to see what happened as your system booted and ran. Probably you will see some alarming error messages, about PAM failures and keyring failures and apps that wouldn't start and files not found and who knows what else. Does not inspire confidence.

    For example, on my (working) Mint 19.3 Cinnamon system, I get things such as:
    # On boot:
    kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
    kernel: random: 7 urandom warning(s) missed due to ratelimiting
    kernel: ashmem_linux: module is from the staging directory, the quality is unknown, you have been warned.
    kernel: ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054F conflicts with OpRegion 0x00000000
    kernel: kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround
    kernel: uvcvideo 1-1.6:1.0: Entity type for entity Extension 5 was not initialized!
    dbus-daemon[1170]: dbus[1170]: Unknown group "power" in message bus configuration file
    dbus-daemon[1170]: dbus[1170]: Unknown username "whoopsie" in message bus configuration file
    networkd-dispatcher[1152]: WARNING: systemd-networkd is not running, output will be incomplete.
    ger[1435]: Error: can't open /lib/modules/5.3.0-24-generic/updates/dkms
    udisksd[1443]: failed to load module mdraid: libbd_mdraid.so.2: ...
    wpa_supplicant[1414]: dbus: wpa_dbus_get_object_properties: failed to get object properties: (none) none
    lightdm[2414]: PAM unable to dlopen(pam_kwallet.so): ...
    dbus-daemon[1170]: [system] Rejected send message, ...
    
    # Startup after kernel is up
    systemd[1]: kerneloops.service: Found left-over process 2589 (kerneloops) ...
    kernel: kauditd_printk_skb: 11 callbacks suppressed
    systemd-udevd[3260]: Could not generate persistent MAC address for docker0: No such file or directory
    vboxdrv.sh[1134]: vboxdrv.sh: failed: Look at /var/log/vbox-setup.log to find out what went wrong.
    lightdm[3873]: gkr-pam: couldn't run gnome-keyring-daemon: No such file or directory
    cinnamon-session[3945]: WARNING: t+1.51996s: Failed to start app: ...
    [pulseaudio] bluez5-util.c: GetManagedObjects() failed ...
    pam_ecryptfs: pam_sm_authenticate: /home/user1 is already mounted
    colord[1478]: failed to get session [pid 5933]: No data available
    colord[1478]: CdMain: failed to emit DeviceAdded: ...
    upowerd[2627]: unhandled action 'bind' on /sys/devices/pci0000 ...
    kernel: ecryptfs_decrypt_page: Error attempting to read lower page; rc = [-4]
    kernel: parport 0x378 (WARNING): CTR: wrote 0x0c, read 0xff
    
    # On shutdown:
    systemd[1]: systemd-coredump.socket: Failed to queue service startup job (Maybe the service file is missing ...
    systemd-udevd[20306]: Process '/usr/sbin/tlp auto' failed with exit code 4.
    systemd[1]: netfilter-persistent.service: Failed with result 'exit-code'.
    umount.ecryptfs[20512]: Failed to find key with sig [d5eaa71c805ac0fb]: Required key not available
    systemd[1]: Failed unmounting /home/user1.
    systemd[1]: Failed unmounting /home.
    kernel: printk: systemd-shutdow: 41 output lines suppressed due to ratelimiting
    

    I'm sure some of these are just tests for features my machine doesn't have, or things that sound alarming but shouldn't be.



  • I noticed that most repository mirrors use HTTP, not HTTPS. I asked if that was a problem, and mostly was told that it isn't because packages are signed.

    I think the "apt 2.0" coming out in early 2020 is supposed to fix some of this ?

    But then there's: Janardhana S's "APT / apt-get Vulnerability (RCE)"

    And from /u/gordonmessmer on reddit 3/2019:
    First, give this a glance: James Crennan's "HOWTO: GPG sign and verify deb packages and APT repositories"

    Next, download a few packages more or less at random from various PPAs and mirrors that you use. Extract the contents of those packages and look for a file named "_gpgorigin". If you don't see that file, then the package isn't signed.

    In general, dpkg files aren't signed. apt supports it, but distributions are neglecting that security layer. Instead, the "Release" file is signed, and that file has a hash for the Packages file. The Packages file has hashes for each of the individual packages. That could be almost as good as signing the packages directly, but if you look at the Release file, its hash is only an MD5.

    MD5 has been deprecated for almost every security sensitive application because it's too easy to create a collision, and that's the weak point in the apt security chain. If you can MITM or compromise a mirror, and if you can generate a Packages file that has a matching hash, then you can replace a package file and apt will believe it's valid.

    Personally, I think that people are not taking that weakness nearly seriously enough, and in this thread you'll see a lot of people asserting incorrectly that packages are signed. They aren't. The prevailing wisdom is a myth.

    HTTPS isn't necessarily the answer. It wouldn't protect you from a compromised mirror. "yum" doesn't use HTTPS generally, either. But Red Hat based distributions (RHEL, CentOS, Fedora) all sign their packages directly.

    If you're concerned about security, I recommend using one of those.

    ...

    Also see:
    packagecloud's "Attacks against GPG signed APT repositories"
    Patrick Uiterwijk's "How Fedora Secures Package Delivery"
    Stack Overflow's "How is the authenticity of Debian packages guaranteed?"
    From /u/murukeshm on reddit 3/2020:
    Yes, that's correct, the packages themselves aren't signed.

    But the bit about "Release" file using MD5 is a bit incomplete. Modern repos should have SHA256 hashes ... IIRC this was the cause of a bit of trouble back when this was enabled; apt would warn about insecure hashes and Google's repos were affected. And the GPG signatures themselves use SHA512 in Ubuntu's official repositories. I'd guess the presence of MD5 is probably for backwards compatibility with any older tools.

    From Thomas Leonard's "Qubes-lite With KVM and Wayland":
    "Ever since I started using Linux, its package management has struck me as absurd. On Debian, Fedora, etc, installing a package means letting it put files wherever it likes; which effectively gives the package author root on your system. Not a good base for sandboxing!"

    On Kubuntu 20.10, I added an AllowTLS:True directive to all sections in /etc/apt/apt.conf.d/50apt-file.conf, and apt kept working. Don't know if adding that directive actually changed anything. And that would just affect transport, not signature-checking.

    How to see if a package is signed:
    
    rpm -qpi PKGNAME.rpm | grep -i signature
    dpkg-sig --list PKGNAME.deb
    ar -p PKGNAME.deb _gpgbuilder 2>/dev/null | grep "^Signer"
    

    How to verify the signature of a package:
    
    dpkg-sig --verify PKGNAME.deb
    


  • I could be wrong, but it seems nothing makes a package you're installing tell you what directories it's going to modify, and nothing forces the installer to stay within those boundaries ? A malicious package could alter anything in the system ?
    [To see what files a package would install: "dpkg-deb --contents PKGNAME.deb" for a .deb file you haven't installed yet, or "dpkg-query --listfiles PKGNAME" for an already-installed package. Does NOT include effects of any scripts that might be run.]
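
    A hedged sketch of how far you can inspect a .deb before installing it (somepkg.deb is a placeholder name); the file list is knowable, but what the maintainer scripts will do as root is not, short of reading them:

    dpkg-deb --contents somepkg.deb              # files the archive would unpack
    dpkg-deb --info somepkg.deb                  # control metadata; shows which maintainer scripts (preinst, postinst, ...) exist
    dpkg-deb -e somepkg.deb ctrl && cat ctrl/*   # extract and read the control files and scripts that will run as root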

    Similar questions for a PPA (Personal Package Archive). Suppose you add a PPA for app X to "software sources" for your system, then an attacker cracks the PPA and adds a package to update, say, Cron. In Update Manager, you'd see an update for Cron, and nothing would tell you that the update came from a PPA instead of the official repo, I think.


  • A number of people, on both server and desktop, seem to take pride in staying on old LTS releases, such as Ubuntu 14.04 or 16.04. From a security point of view, this is a bad idea. Yes, Canonical back-ports serious security fixes to those releases. But plenty of fixes do not get back-ported. The whole concept of "LTS" is somewhat flawed.

Farhan's "Linux maintains bugs: The real reason ifconfig on Linux is deprecated"
madaidan's "Fixing the Desktop Linux Security Model"
Madaidan's Insecurities' "Linux (in)security"
Madaidan's Insecurities' "Linux Hardening Guide"
PrivSec's "Linux Insecurities"
Bjorn Pagen's "State of Linux Desktop Security"
Bradley Spengler's "10 Years of Linux Security" (slides) (video)
Post by Solar Designer
Matthew Garrett's "Linux kernel lockdown, integrity, and confidentiality"
Steven Vaughan-Nichols' "Are all Linux vendor kernels insecure?"
Spencer Baugh's "The Unix process API is unreliable and unsafe"

From Pid Eins's "The Strange State of Authenticated Boot and Disk Encryption on Generic Linux Distributions":
... since code authentication ends at the kernel - and the initrd is not authenticated anymore -, backdooring is trivially easy: an attacker can change the initrd any way they want, without having to fight any kind of protections. And given that FDE unlocking is implemented in the initrd, and it's the initrd that asks for the encryption password, things are just too easy: an attacker could trivially easily insert some code that picks up the FDE password as you type it in and send it wherever they want. ...

... It's particularly sad given that the other popular OSes all address this much better. ChromeOS, Android, Windows and MacOS all have way better built-in protections against attacks like this. ...

From aseipp on Hacker News 1/2023:
> Is Linux better or worse than MacOS, iOS and Windows at security ?

It's ... complicated. Linux is just the kernel, but good modern OS security requires the kernel, the userspace, and the kernel/userspace boundary to all be hardened a significant amount. This means defense in depth, exploit mitigation, careful security and API boundaries put in place to separate components, etc.

Until pretty recently (~3-4 years) Linux the kernel was actually pretty far behind in most respects versus competitors, including Windows and mac/iOS. I say this as someone who used to write a bunch of exploits as a hobby (mainly for Windows-based systems and windows apps). But there's been a big increase in the amount of mitigations going into the kernel these days. Most of the state-of-the-art stuff was pioneered elsewhere from upstream but Linux does adopt more and more stuff these days.

The userspace story is more of a mixed bag. Like, in reality, mobile platforms are far ahead here because they tend to enforce rigorous sandboxing far beyond the typical access control model in Unix or Windows. This is really important when you're running code under the same user. For example just because you run a browser and SSH as $USER doesn't mean your browser should access your SSH keys! But the unix model isn't very flexible for use-cases like this unless you segregate every application into its own user namespace, which can come with other awkward consequences. In something like iOS for example, when an application needs a file and asks the user to pick one, the operating system will actually open a privileged file-picker with elevated permissions, which can see all files, then only delegate those files the user selects to the app. Otherwise they simply can't see them. So there is a permission model here, and a delegation of permissions, that requires a significant amount of userspace plumbing. Things like Flatpak are improving the situation here (e.g XDG Portal APIs for file pickers, etc.) Userspace on general desktop platforms is moving very, very slowly here.

If you want my honest opinion as someone who did security work and wrote exploits a lot: pretty much all of the modern systems are fundamentally flawed at the design level. They are composed of millions of lines of unsafe code that is incredibly difficult to audit and fix. Linux, the kernel, might actually be the worst offender in this case because while systems like iOS continue to move things out of the kernel (e.g. the iOS Wi-Fi stack is now in userspace as of iOS 16 and the modem is behind an IOMMU) Linux doesn't really seem to be moving in this direction, and it increases in scope and features rapidly, so you need to be careful what you expose. It might actually be that the Linux kernel is possibly the weakest part of Android security these days for those reasons (just my speculation.) I mean you can basically just throw shit at the system call interface and find crashes, this is not a joke. Windows seems to be middle of the pack in this regard, but they do invest a lot in exploit mitigation and security, in no small part due to the notoriety of Windows insecurity in the XP days. Userspace is improving on all systems, in my experience, but it's a shitload of work to introduce new secure APIs and migrate things to use them, etc.

Mobile platforms, both Android and iOS, are in general significantly further ahead here in terms of "What kind of blast radius can some application have if it is compromised", largely because the userspace was co-designed along with the security model. ChromeOS also qualifies IMO. So just pick your poison, and it's probably a step up over the average today. But they still are comprised using the same fundamental building blocks built on lots of unsafe code and dated APIs and assumptions. So there's an upper limit here on what you can do, I think. But we can still do a lot better even today.

If you want something more open in the mobile sector, then probably the only one I would actually trust is probably GrapheneOS since its author (Daniel Micay) actually knows what he's doing when it comes to security mitigation and secure design. The FOSS world has a big problem IMO where people just think "security" means enabling some compiler flags and here's a dump of the source code, when that's barely the starting point -- and outside of some of the most-scrutinized projects in the entire world, I would say FOSS security is often very very bad, and in my experience there's no indication FOSS actually generally improves security outside of those exceptional cases, but people hate hearing it. I suspect Daniel would agree with my assessment that most of the fundamentals today are fatally flawed (including Linux) but, it is what it is.

From /u/GrapheneOS on reddit 2/2023:
Re: Linux on phone as opposed to endeavours such as Graphene OS ...

GrapheneOS is Linux. The traditional desktop Linux software stack has far worse privacy and security. It's missing most basic parts of the privacy and security model, support for hardware-based security features, broad use of memory-safe languages, modern exploit mitigations, etc. ...

unixsheikh's "The delusions of debian" (lack of resources)

Vivek Haldar's "How Unix Won"

tomaka's "The future of operating systems"

For servers:
Sudhakar Dharmaraju's "GoodBye Linux: the next OS"



Open-source software

Using tons of software created by many individual people, frequently updated, is an insecure situation. For example, the node.js/npm registry has about 1.9M modules in it as of 4/2022. There's no way all of those are checked and safe. Same for the code in Python's pip system or PyPI, Ruby's system, any big Linux repo, CPAN Perl, and others.
Modulecounts
Firefox line-counts
Chromium line-counts
GNOME line-counts
KDE line-counts
Jarrod Overson's "Exploiting Developer Infrastructure Is Ridiculously Easy (The open-source ecosystem is broken)"
Thomas Claburn's "About half of Python libraries in PyPI have security issues, Finnish boffins claim"

From Liam Proven's "Drowning in code: The ever-growing problem of ever-growing codebases":
Debian 12 ... is 1,341,564,204 lines of code. ...

Nobody can read the source code of Chrome. Not alone, not as a team. Humans don't live long enough. Any group that claims to have gone through the code and de-Googlized it is lying: all that's possible to do is some searches, and try to measure what traffic it emits. A thousand people working for a decade couldn't read the entire thing.

Bugs in open-source software, including that used by Linux or common apps/services on Linux, can go undiscovered for years. For example, The Heartbleed Bug (2 years), GnuTLS hole (2 years), latest sudo vuln (10 years), polkit bug (7 years), another polkit bug (12 years), DirtyCred kernel vuln (8 years).

From Robert Lemos's "Open Source Flaws Take Years to Find But Just a Month to Fix":
"GitHub found ... On average, a vulnerability goes undiscovered for 218 weeks, or more than four years, while it takes just over a month to fix the average vulnerability."

Open-source project that had a deliberate root backdoor for 4 years or so, and no one noticed it in the code: KiwiSDR.

From Daniel Micay (lead dev of GrapheneOS, I think) on reddit 4/2019:
It's just a fallacy that open-source is more secure and privacy-respecting. It's quite often not the case. There's also the mistaken belief that closed-source software is a black box that cannot be inspected / audited, and the massively complex hardware underneath is the real black box. A lot of the underlying microcode / firmware is also a lot harder to inspect.

Note: It's generally considered insecure to run a GUI app as root on Linux. Some GUI apps deliberately refuse to run as root, and some Wayland compositors and toolkits block it entirely. If open-source software is secure, why would this policy exist ?
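The usual alternative is to keep the GUI unprivileged and elevate only the file operation. For example (the file path here is just an example):
  # Edit a root-owned file without running an editor as root
  sudoedit /etc/fstab
  # GNOME/GTK apps can open admin:// URIs via the GVfs admin backend, prompting per-file
  gedit admin:///etc/fstab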

From /u/longm0de on reddit 2/2020:
+/-
Many eyes prevents security backdoors and other security exploits right? Or at least gets them fixed faster? Statistically there is no real and significant data that supports open-source or closed-source software being more secure than the other. You can't easily gauge this statistic either since many proprietary software suites incorporate open-source components as well. Closed-source software can also have "many eyes". Thousands to millions of individuals/entities can be looking at the source code of Microsoft Windows through the Shared Source Initiative. Our government certainly takes advantage of that program.
[Also the Windows source code has been leaked multiple times.]
[Also see Apple Open Source]

Ways you could look for security issues in an app or its source code:
+/-
Generally from easiest to hardest (a small sketch of the strace and grep steps follows this list):
  • Run the app with an app firewall such as OpenSnitch or Portmaster to see what IP addresses it's accessing.
  • Run the app under strace or ltrace, looking for network accesses or suspicious file accesses.
  • Grep the source code for network access library calls, and read those areas.
  • Run some kind of source code scanner on the app's source code. More likely to catch bugs than malicious code.
  • Read all of the source code.
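A minimal sketch of the strace and grep steps, assuming the app is a local binary ./someapp with its source in src/ (both names are placeholders):
  # Watch only network-related syscalls, following child processes
  strace -f -e trace=%network -o net-syscalls.txt ./someapp
  # Watch file accesses that touch anything under ~/.ssh
  strace -f -e trace=%file ./someapp 2>&1 | grep "$HOME/.ssh"
  # Find the places in the source that open network connections, then read those areas
  grep -rnE 'socket\(|connect\(|getaddrinfo|curl_easy_init' src/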

From Artem S. Tashkinov's "Major Linux Problems on the Desktop, 2020 edition":
+/-
Year 2014 was the most damning in regard to Linux security: critical remotely-exploitable vulnerabilities were found in many basic Open Source projects, like bash (shellshock), OpenSSL (heartbleed), kernel and others. So much for "everyone can read the code thus it's invulnerable". In the beginning of 2015 a new critical remotely exploitable vulnerability was found, called GHOST.

Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002. I'm not implying that Linux is worse than Windows/MacOS proprietary/closed software - I'm just saying that the mantra that open source is more secure by definition because everyone can read the code is apparently totally wrong.

Year 2016 pleased us with several local root Linux kernel vulnerabilities as well as countless other critical vulnerabilities. In 2016 Linux turned out to be significantly more insecure than often-ridiculed and laughed-at Microsoft Windows.

The Linux kernel consistently remains one of the most vulnerable pieces of software in the entire world. In 2017 it had 453 vulnerabilities vs. 268 in the entire Windows 10 OS. No wonder Google intends to replace Linux with its own kernel.
[But: many bugs are not assigned a CVE number, and it's not clear if different OS teams have similar reporting policies.]

Kees Cook's "Security bug lifetime"

From Jonathan Corbet's "An update on the UMN affair" 4/2021:
+/-
... code going into the [Linux] kernel is often not as well reviewed as we like to think. It is comforting to believe that every line of code merged has been carefully vetted by top-quality kernel developers. Some code does indeed receive that kind of review, but not all of it. Consider, for example, the 5.12 development cycle (a relatively small one), which added over 500,000 lines of code to the kernel over a period of ten weeks. The resources required to carefully review 500,000 lines of code would be immense, so many of those lines, unfortunately, received little more than a cursory looking-over before being merged.

From Dorner, Capraro, and Barcomb's "The Limits of Open Source Growth" 8/2020:
We found the number of active open source projects has been shrinking since 2016 and the number of contributors and commits has decreased from a peak in 2013. Open source -- although initially growing at exponential rate -- is not growing anymore. We believe it has reached saturation.

From Slashdot summary of YouTube video:
Plummer also says he agrees with the argument that open source software is more open to security exploits, "simply because, all else equal, it's easy to figure out where the bugs are to exploit in the first place," while proprietary software has professional test organizations hunting for bugs. "I think it's a bit of a fallacy to rely on the 'many eyeballs' approach..."

Ray Woodcock's "Open Source Code: Thousands of Eyes Constantly Improving It?"
Nicolas Frankel's "You're running untrusted code!"

David Heinemeier Hansson's "Open source is neither a community nor a democracy"

Note: "available in a public repo such as GitHub" is not identical to "has a FOSS license".



From blakkheim's "Linux Security Hardening and Other Tweaks":
+/-
A common misconception about the Linux kernel is that it's secure, or that one can go a long time without worrying about kernel security updates. Neither of these are even remotely true. New versions of Linux are released almost every week, often containing security fixes buried among the many other changes. These releases typically don't make explicit mention of the changes having security implications. As a result, many "stable" or "LTS" distributions don't know which commits should be backported to their old kernels, or even that something needs backporting at all. If the problem has a public CVE assigned to it, maybe your distro will pick it up. Maybe not. Even if a CVE exists, at least in the case of Ubuntu and Debian especially, users are often left with kernels full of known holes for months at a time. Arch doesn't play the backporting game, instead opting to provide the newest stable releases shortly after they come out.
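[A rough way to check that gap yourself on a Debian-family system; treat the output as a lower bound, since it only covers what the distro's security tracker knows about:]
  # The kernel you are actually running; compare against the newest stable release at kernel.org
  uname -r
  # debsecan lists CVEs the Debian security tracker still considers unfixed for installed packages
  sudo apt install debsecan
  debsecan --suite bookworm | grep linux-image    # kernel package name pattern may differ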



From Daniel Micay (lead dev of GrapheneOS, I think) on reddit 4/2019:
+/-
The Linux kernel is a security disaster, but so are the kernels in macOS / iOS and Windows, although they are moving towards changing. For example, iOS moved a lot of the network stack to userspace, among other things.

The userspace Linux desktop software stack is far worse relative to the others. Security and privacy are such low priorities. It's really a complete joke and it's hard to even choose where to start in terms of explaining how bad it is. There's almost a complete disregard for sandboxing / privilege separation / permission models, exploit mitigations, memory-safe languages (lots of cultural obsession with using memory-unsafe C everywhere), etc. and there isn't even much effort put into finding and fixing the bugs. Look at something like Debian where software versions are totally frozen and only a tiny subset of security fixes receiving CVEs are backported, the deployment of even the legacy exploit mitigations from 2 decades ago is terrible and work on systems-integration-level security features like verified boot, full system MAC policies, etc. is near non-existent. That's what passes as secure though when it's the opposite. When people tell you that Debian is secure, it's like someone trying to claim that Windows XP with partial security updates (via their extended support) would be secure. It's just not based in any kind of reality with any actual reasoning / thought behind it.

The traditional desktop OS approach to disk encryption is also awful since it's totally opposed to keeping data at rest. I recommend looking at the approach on iOS which Android has mostly adopted at this point. In addition to all the hardware support, the OS needs to go out of the way to support fine-grained encryption where lots of data can be kept at rest when locked. Android also provides per-profile encryption keys, but has catching-up to do in terms of making it easier to keep data at rest when locked. ... iOS makes it easier by letting you just mark files as being in one of 2 encryption classes that can become at rest when locked. It even has a way to use asymmetric encryption to append to files when locked, without being able to read them.

Really, people just like saying that their preferred software stack is secure, or that open-source software is secure, when in reality it's not the case. Desktop Linux is falling further and further behind in nearly all of these areas. The work to try catching-up such as Flatpak is extremely flawed and is a failure from day 1 by not actually aiming to achieve meaningful goals with a proper threat model. There's little attempt to learn from other platforms doing much better and to adopt their privacy and security features to catch up. It's a decade behind at this point, and falling further behind.

Also, all these things about desktop Linux completely apply to anything else using the software stack. It doesn't matter if it's FreeBSD or whatever. FreeBSD also has a less secure kernel, malloc, etc. but at least it doesn't have nonsense like systemd greatly expanding attack surface written with tons of poorly written C code.

...

There are literally hundreds of serious, game-over vulnerabilities being fixed every month in the Linux kernel. There are so many vulnerabilities that vulnerability tracking and patching doesn't scale to it at all. It has no internal security boundaries. It's equivalent to running the entirety of userspace in a single process running as full unconstrained root, written entirely in C and assembly code rather than preferring memory-safe / type-safe languages. Watch this talk as a starting point: Dmitry Vyukov's "Syzbot and the Tale of Thousand Kernel Bugs" (video)

...

> you've said Flatpak is flawed. is Snap any better as an app sandbox?

No, not really. They're both fundamentally flawed and poorly implemented. They're a lot worse than even the very early Android sandbox from a decade ago before all of the work on hardening it and improving the permission model. They're approaching it completely wrong and treating it as if they need to figure out how to do things properly themselves, by not learning from existing app sandboxes.

... It's a fundamentally broken approach to implementing a sandbox. It doesn't draw an actual security boundary and fully trusts the applications. The design choices are being made based on the path of least resistance rather than actually trying to build a proper security model. There's a big difference between opportunistic attack surface reduction like this and an application sandbox, which these are not implementing. They cannot even be used to properly sandbox an application no matter how the application chooses to configure the security policies, even if the app is fully trustworthy and trying to do it. The implementation is not that complete. It could certainly be done properly but it would require a huge amount of work across the OS as a whole treating it as a unified project, along with a massive overhaul of the application ecosystem. I can't see it happening. It requires throwing out the traditional distribution model and moving to a well-defined base OS with everything outside of that being contained in well-defined application sandboxes with a permission model supporting requesting more access dynamically, or having the user select data as needed without granting overly broad forms of persistent access.



From /u/longm0de on reddit 2/2020:
+/-
[In the context of "why do people go back to Windows"]

I feel security is a massive burden put upon Linux developers. Linux was not made to be "the most secure" system in the world or even secure at all. Linux was made with portability in mind and with portability there can be conflicts with security mechanisms.

Take an actual look at the Linux kernel, for a long time it lacked security that Windows NT had since its release. It's important to know that the NT lineage of Windows is not based off of or even similar to MS DOS or OS/2-like Windows 95 and etc. The NT lineage of Windows is initially based off of VAX/VMS (now known as OpenVMS) and still largely is based off of that architecture as developed by Dave Cutler and his team. Windows NT from the get-go had users, roles, and groups as well as proper access control. NT contains discretionary access control lists as well as system access control lists which can be used for auditing in comparison to Linux which relied on rudimentary RWX permissions with an owner-group-world philosophy. SELinux finally brought discretionary access control lists to Linux as well as mandatory access control. SELinux is a great thing and should be treated as such - it implements a form of MLS. Windows later on added a form of MLS known as mandatory integrity control. Nearly all objects in NT are securable with DACLs and auditing such as processes, threads, sockets, pipes, mutexes, etc., NT has an underlying unifying security principle. In later versions of Windows (such as Vista) UAC was implemented with the Administrator account so that even administrators didn't execute things as administrators, but had to explicitly grant permissions. It's stated that UAC is insecure because in normal implementations, it is just a "yes" or "no". This is largely untrue, UAC in its current default configuration is run in Secure Desktop Mode which prevents software input emulation as well as keylogging. In Linux, if I want to run a program elevated, I have to use the terminal and on X11, I can just intercept the key events and then log the user's password without any high privileges. Where is the security in that? Windows has exploit mitigation policies which are VERY similar to hardenedBSD and grsecurity/PaX. Many Linux distributions don't even want to use grsecurity/PaX and the kernel developers don't even want to support it because it may "break" some devices.

Again, Linux was made for portability, not security. It's not exactly "insecure", but it's not exactly secure either. Also, I don't run any anti-malware on Windows (for resource purposes I even disabled Windows Defender by setting -X on core files it requires), and my computer hasn't received any malware, and years back the only time my PC did receive malware was due to being socially engineered. There is nothing about Linux that magically prevents malware - nothing about its architecture as compared to Windows accomplishes this. When somebody can make an actual case about its architecture - I will change my mind. No, don't point out access control that Windows already has. Windows on the other hand has driver signature enforcement, kernel patch protection, AppContainers, etc. You can even configure Windows so that the only applications to run with administrative privileges have to be digitally signed. There is a lot you can do in terms of security on Windows systems.



Not going to happen:

RIIR: Rewrite Linux using Rust programming language

+/-
An important point made by some people: we really should stop using the C programming language (created in 1972-3). It is neither memory-safe nor type-safe, and it lacks built-in concepts such as exceptions (it just does something and keeps going), a managed heap, and real strings. Unfortunately, the Linux kernel and much of the user-space code is written in it. This leads to tens of thousands of bugs in Linux today, including security vulnerabilities. Maybe C is appropriate for very low-level system programming, as an alternative to assembly language. But not for apps, services, and modules.

What is better ? Probably Rust.

Dominus Carnufex's "Rewrite the Linux kernel in Rust?"
LWN thread
Quora thread
Serdar Yegulalp's "4 projects ripe for a Rust rewrite"
tsgates / rust.ko
Joel Spolsky's "Things You Should Never Do, Part I"
Gavin D. Howard's "Rust, Zig, and the Futility of 'Replacing' C"
Paul E. McKenney's "So You Want to Rust the Linux Kernel?"

This would not help/solve issues such as all of the kernel code operating in one memory address space at one processor privilege level (lack of compartmentalization). A bug in device driver X still could mangle something in iptables code Y, for example.

But it should help get rid of entire classes of errors such as buffer overflows and use-after-free.

People bringing up this idea have provoked a "so you go do it" reaction. "RIIR: You're telling existing devs to go do a ton of work." A fair point. Except that the work would be pointless if at the end the existing devs reject the new code. And indications are that devs all the way up to Linus Torvalds would reject it.

A rewrite would solve some classes of low-level problems, not fix bigger problems, be an ENORMOUS amount of work, and be resisted by the existing devs. Not going to happen.



Now Linux desktop users run the same browsers etc. as Windows users, so vulnerabilities seen on Windows are more likely to exist on Linux too. Same with PDF docs and Office macros. And with cross-platform apps such as those running on Electron or Docker. And libraries (such as the SSL library) used on many/all platforms. An exploit may work the same way regardless of the underlying OS type.



Your own actions can seriously compromise the security of your system:
From Easy Linux tips project's "Avoid 10 fatal mistakes in Linux Mint":
+/-
Software from third-party repositories (like PPA's) and external .deb installers, is untested and unverified. Therefore it may damage the stability, the reliability and even the security of your system. It might even contain malware ...

Furthermore, you make yourself dependent on the owner of the external repository, often only one person, who isn't being checked at all. By adding a PPA to your sources list, you give the owner of that PPA in principle full power over your system!
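[To make the "full power" point concrete, this is roughly what adding a PPA does on Ubuntu; the PPA name here is a made-up placeholder:]
  # Adds an apt source controlled by one person, and trusts their signing key
  sudo add-apt-repository ppa:someuser/someapp
  sudo apt update
  # From now on, anything that person publishes there installs as root during normal upgrades
  ls /etc/apt/sources.list.d/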





Some cautionary experiences with Linux



/u/sng_shivang's "Why I came back to Windows from Linux?"
Christopher Shaw's "My Adventure Migrating Back To Windows"
Kev Quirk's "A Sombre Goodbye To Linux"
Christer van der Meeren's "Linux - I'm giving up again"
Bozhidar Batsov's "Back to Linux"

Gamers may have a tougher time with Linux than with Windows. Vendors target the biggest market first and best.

Same for video-editors and such.

Open-source software may be great, or may have one guy working on it occasionally and be really hit-or-miss.



From someone on reddit 4/2018:
+/-
> I'm sorry it's a bit of a rant and I might sound like
> a noob to you all, I'm really disappointed and not in a
> good mood at the moment. I've been using Linux only for at
> least 6 months and I've been in love with it when I decided
> to make the switch for good ... and I'm beginning to think
> it sucks. Tonight I had to do a simple slide-show for a client
> and I used mostly Shotcut but I tried Openshot and Kdenlive
> and the three of them was horribly buggy and a nightmare ...
> I really didn't enjoyed my experience and it pissed me off.
> I do not understand, most of that software has been in
> development for years and they look like in beta phases or as
> if only one person worked on it, but there's a big community
> and I keep seeing donations for open source I don't think money
> is really an issue. As for bugs I didn't even try to break them,
> they struggled with tasks such as fade in and fade out, transitions,
> adding texts, very basic stuff, I had to restart Shotcut like
> 4 times because it couldn't add the pictures on the timeline,
> and it's a well-known bug that is from around 2016. I'm on
> Ubuntu Mate and everyone says Ubuntu is a stable distro for gaming
> and doing work so I installed it. The only softwares that are
> stable to me is Blender, Krita, Gimp, Inkscape and Godot. As for
> Gimp it really is very powerful but there are some tools that are
> missing that Photoshop has, and if I go for the latest version
> it's very slow and not usable. I do a lot of multimedia and I
> don't think I will survive ... There's Natron, Fusion 9 that I
> didn't used yet but they are compositing softwares, I don't think
> I can do a lot with them as for video editing. It's already hard
> to not be able to play recent video games, if it also removes tools
> for working and being creative there's just no point to stay or to
> suggest it to anyone.

I think you have a wrong picture here. Most open-source projects indeed have only one (or very few) developers working on them, and get very few (if any) donations.

Ubuntu is mostly stable in the sense of "let's not change it after the release". That's great to avoid introducing new bugs, but not so great to remove old bugs.



From /u/BlueGoliath on reddit 9/2017:
+/-
As someone who previously used Windows and now uses Linux for 90% of my time now: If you are going to switch to Linux, be ready to deal with bugs, piss-poor UI design, hardware incompatibilities, and other issues.

Despite what you hear on tech sites about how great the Linux community is, it really isn't. If you complain about Linux you are most likely going to be met with one of the following:
  • You just don't like it because it isn't Windows.
  • It isn't Linux's fault, it's your computers fault.
  • Distro X just sucks, Distro Y is what you should be using.
  • You shouldn't complain because it's free.
Yeah it sucks but that's the current mentality of the Linux community. Be ready for it.
[Re: "piss-poor UI design": probably not a problem if you spend most of your time in a browser, desktop, and a couple of major applications.]



From /u/OnlyScar on reddit 3/2018:
+/-
Around 6 months ago, I made the move to Linux. I am not a gamer, so it was easy for me. To make the experience more authentic, I installed linux on my main machine and didn't dual boot Windows. It was only linux for me. It has been an interesting journey, but sorry I can't take it anymore. Please note that I am strictly speaking as a non-developer, non-geek but a "power user". My reasons might not apply for developers and very technical users. Below are the reasons am going back:

1) Windows vs Package Manager Repo System : Repeatedly I was told that the software repository and package manager system of linux is much superior than the Windows system of downloading .exes from developer sites. This is such a lie that it's not even funny. The reason: age of software. Win32 .exe softwares get updates independently from the base OS. You can use Windows 7 and guess what, your favorite softwares will all run at the LATEST version. I repeat, you can use Windows 7 and your Blender and Krita will be at the latest version. What the version of Blender and Krita on Ubuntu 16.04 or 14.04? Is Ubuntu 14.04 even usable for normal desktop use anymore, consider its software repo age? And no, am not using any rolling distro or Fedora because their stability doesn't hold a candle in front of Ubuntu, mint, debian stable, win 10 or macOS. Also I shouldn't have to upgrade my OS just to get the next version of software. This is absolutely unacceptable and ridiculous. The fact that my softwares stays fully cutting edge, up to date on Windows while the base OS stays same is extremely important.

2) Security and BSODs etc : Contrary to FUD, Windows 10 is actually very secure unless you want to download softwares from crackedfreesoftwares.ru. You DO NOT need a separate antivirus, Windows Defender is now enough. It runs like a dream on most hardware. And Windows do NOT force upgrades in the middle of work. BSODs have long been a thing of distant past. Basically am saying that repeatedly using the boogeyman of security, bsods etc isn't working.

3) Atrocious Desktop Environments : My main reason of ditching linux. Linux DEs are such a sad joke compared to Windows (or Mac) DE that it is not even funny. Let's start, shall we:

i: GNOME: The DE suffers from MEMORY LEAK for god's sake. Performance is pathetic, much much worse than Windows 10 or mac DE. This is also the main default desktop of linux world, which actually says a lot about linux. It's absolutely unthinkable for us to even use a DE which suffers from extreme memory leak, and developers doesn't even shows any intention of fixing it. It is just unthinkable on Windows. GNOME is also unusable out of the box, and you have to use random 3rd party hack job extensions just to get a basic fully functional DE. You need to download a software to get simple minimise button. Simply Unbelievable. And you guys, like a bunch of callous users, continue to support it and use it while happily doing ALT+F2 -> r. Lame.

ii: KDE - So, so many small random but crucial bugs that it is really impossible to list them all. They try to emulate Windows, and does a pretty poor job. For example, just use the "hover option" on KDE task bar. See the quality of preview. Does KDE devs even know how important that single function is? Small random bugs like this simply makes it inferior to Windows DE.

iii: Xfce - Thanks, but no thanks. Its 2018, not 1998. No hover option btw. Too basic and limited.

iv: Cinnamon - Too strongly tied to Linux Mint, a distro indulging in many questionable practises. Bad aesthetics. What up with that huge square-like menu? And why does the menu size increases when I add favorites?? It's already too big anyway. It just looks like a cheap rip-off of Windows XP.

v: Mate - Still too basic compared to Windows.

vi: Tiling windows managers - Unusable and irrelevant for non-developers, non-geeks.

Anyway, for me default DE matters. Even if the perfect DE exists somewhere in the wild, if a distribution chooses a subpar DE, it says a lot about them and their focus on user-friendliness. And since most of the linux world has enthusiastically opted for GNOME 3, a pathetic subpar incomplete DE, it says a lot about you guys.

4) Sickening Hypocrisy of the Community : Let's start, shall we - i: Saw multiple caustic rants about how MS Windows 10 provides a poor inconsistent UI because of 2 settings menu (legacy and metro). And you guys say this while primarily using a piece of jewel like GNOME 3. /s ii: Linux is all about control. Just don't expect a fcking minimise button by default on popular DEs such as GNOME and Pantheon. OK got it. iii: The arrogance and know-it-all attitude of GNOME devs and Elementary OS devs will put the arrogance of MS and Apple to shame. But i guess that's okay cause they are your own. iv: Continuously compare Windows from 2002 to Linux from 2017 and try to prove your point about how linux desktop is superior. Continuously attack MS for telemetry and control and while happily using Google services and FB. Giving Apple a pass cause they are unix. The list goes on and on ...

5) Last but not the least, atrocious softwares - Yeah guys, accept it, LibreOffice and GIMP sucks balls compared to MS Office and Photoshop. Krita gives MS softwares a run for their money, but LibreOffice and GIMP are simply cringy embarrassments. You will get fired if you dare to make a presentation with LibreOffice Impress in a corporate environment. It is so bad. VLC Media Player is out right bad compared to Pot Player on Windows. Nothing on linux compared to MusicBee on Windows. I won't even embarrass you guys by talking about JRiver Media Center. Most linux desktop softwares simply lacks the features, polish and finesse compared to their Windows counterpart.

And no, it is not MS or Adobe's fault that those softwares are not available on Linux. You guys continuously rant about evil proprietary software. Upstream major distros like Debian and Fedora doesn't even include proprietary softwares in their main repo. Then why should proprietary software companies release their softwares on linux? What sort of a weird entitled demand is that? Why should proprietary software companies accept second-class treatment on linux and hear some caustic remarks from GNOME devs and Debian greybeards? It was up to you guys to provide a real 1:1 alternative to MS Office, Photoshop and various other proprietary softwares, and you guys failed.

And yes, hardware support and display quality is much better on Windows. The fault again lies with Linux. If you treat proprietary drivers and firmware as second-class citizens, don't expect hardware developers to go out of their way to support Linux. That's an unfair demand.

Bye. After experiencing Linux, my respect for Microsoft and Windows 10 has increased by a 1000 times.

IMPORTANT EDIT - REASON FOR WRITING THIS POST - This problems have bugged me since the beginning. But I came to linux at a tumultuous time, when Ubuntu has abandoned Unity (so Ubuntu Unity 16.04 is a dead horse), and Ubuntu 17.04 and 17.10 are only interim releases. So I cut linux desktop and Canonical some slack and waited for the next LTS. Today I tried Ubuntu 18.04 Beta and guess what? Lo and behold, the glorious memory leak is still present. And my head just exploded in rage. :/ So much effort, so much time spent tweaking, so much distro hopping, so much anticipation to permanently shift was all for naught. That's why I made this salty post.
From /u/UncleSneakyFingers on reddit 3/2018:
+/-
I have the same experience as you. This is my first comment on this sub, but a lot of users here are living in their own universe. I see so many posts on the various Linux subs describing issues that are simply unthinkable. Windows just works, Linux just breaks. I still try learning Linux though just to increase my skill set. But going from win10 to Linux is like going from a Mercedes to one of those old cars you have to hand-crank to start up. It's just ridiculous.

So many users here are willing to spend an entire weekend fixing an issues with their Linux setup, but give up on Windows the first time they f*ck up something basic and get an error message. This sub has really turned me off from Linux in general. When they talk about Windows, it's like one of those infomercials showing someone trying to crack an egg and having it explode all over the place. Just ridiculous exaggerations with no bearing of reality.
From /u/tonedeath on reddit 3/2018:
+/-
... The most important point that he made (in my opinion) is that if you install a distro like Ubuntu 16.04.x LTS (a distro that is supposedly designed for non-techies, non-geeks, non-developers, you know regular computer users), a lot of the software in the repos is not the latest versions of things. If you want to run the latest versions, you probably end up Google-ing and finding out how to add PPAs. This is not hard but, it takes more effort and learning than downloading installers on Windows or Mac and then getting update notifications. Why should a user of any current version of a desktop distro not at least be offered to be updated to the latest version of apps? It's a valid criticism and it should be listened to and addressed. ...
From /u/knvngy on reddit 3/2018:
+/-
The GNOME thing is embarrassing. It looks amateurish; I don't understand what's going on there at GNOME HQ.

But truth be told: Linux has never been really polished, optimized and focused for desktop. The focus on Linux has been: servers, IT networks and now embedded/mobile, where the money is. In the desktop department Linux is OKish, it can be used just fine, but I would agree that macOS and even Windows are better in that department.



From /u/ThePenultimateOne on reddit 3/2018:
+/-
Bluetooth audio is a pretty messy scene on Linux. For a long time I couldn't get any headset to work consistently on Kubuntu. You would have to go through this painful connect-disable-disconnect-connect-enable loop every single time.

Now I have things working on Fedora ... except for my laptop, which now consistently gets very out of sync. It didn't do this a month ago. It didn't do this on a previous version of Fedora. The whole thing sucks.



From /u/AlejandroAlameda on reddit 3/2018:
+/-
Once every few years, I try to give Desktop Linux another chance just for the kick. Here's my recent experience with Linux Mint 18.3. Enjoy :)
  • In a test VM with Linux Mint as a guest, VirtualBox guest additions can't be installed (some strange compilation error).
Installing Mint on real hardware then went quite smoothly, but:
  • USB Wi-Fi interface won't be found on boot, only after plugging out and back in. Need to manually add modules to init scripts. Not what granny expects from a desktop system.

  • Installing Chromium is quite flaky: Clicking on "Install" in the Software Manager doesn't seem to do anything (nothing happens) -- after multiple attempts, it somehow magically appears in the Start menu.

  • Software installation through the Software Manager is hit and miss in general.

  • Suddenly, I get a "Busy spinner" as a mouse cursor all the time, everywhere, forever.

  • Chromium: Switching themes gives huge graphical glitches, a mixture of all previously selected themes is used for various slices of widgets.

  • Chromium: All taskbar buttons show the default Chromium icon, not the one belonging to the Chrome app.

  • Chromium: Each taskbar button has a strange vertical line before the window's title.

  • File Manager: Situations can arise easily where the File Manager recursively tries to copy a folder into itself, yielding an infinite "Preparing to copy: 4298742398743298423789234789234 files (42723484329742389423 GB)" dialog.

  • VirtualBox installation (as host): Entire computer simply freezes (last seen outside of Linux in Windows 95) when launching a Virtual Machine.

  • Installing current NVIDIA graphics drivers is impossible except if you're at least 3 rocket scientists.

  • Even if you manage to install them, nvidia-settings forgets its settings on each reboot (yes yes, I know you can put them in a "Startup script" with special voodoo command line options, but Granny doesn't want to do that).

  • Mounted samba shares simply stop working after an update ("Input/Output error"). 2 hours of Googling and trial and error reveals that the default protocol version simply changed from one version to the next and there's no mention about that, no useful error message, and no fallback, anywhere.

  • Desktop compositing is much, much slower and laggier than on Windows with exactly the same machine, graphics card, and official NVIDIA drivers (verified to be working and in use). I mean, REALLY slow. Like 10 FPS. Dragged windows lag visibly behind mouse cursor.

  • OpenGL is extremely slow. 12 FPS on Linux, 20 FPS on Windows, exactly same machine and test (WebGL Aquarium, browser doesn't matter).

  • Lots of obscure character set problems when mounting network shares, too many details to mention.

  • Some apps don't "see" network shares mounted in certain ways. For example, FreeFileSync simply doesn't list SMB shares mounted via the "Files" app, which makes it unusable except if you have mount -t cifs and fstab voodoo (which aunt mary doesn't have).

From /u/MaxPayneNoir on reddit 3/2018:
+/-
And this is exactly why Linux desktop share is still ~2-3% (and not because it doesn't come preinstalled on laptops, as Torvalds instead assessed: ChromeOS is an already popular Linux only because it "just works").

Not that Linux doesn't work, it works perfectly (significantly less troublesome than Windows and macOS, efficient, lightweight, secure, performing, versatile, free, portable, privacy-keeping and well documented), but you need to learn how to use it. And relying on GUI stuff only is not the right way of using it. Linux is CLI. You may use Graphical apps all the day long, and that's perfectly fine, but system administration, configuration, maintenance, and troubleshooting requires you to type commands in a terminal or on a virtual console. And most people don't like the idea (or are too afraid) of getting their hands dirty on terminals.

Here lies the explanation for the fact that all the people I know who attempted Linux (even ~10 engineering, physics, IT, and computer science students forced to install it by their university), all but a single guy, dropped it after a while.

However if you bear it for the first 6 months you'll get accustomed to it, start appreciating it better and see reality for what it is, and probably never look back.



From /u/theth1rdchild on reddit 4/2018:
+/-
Hey everyone,

I've been using Windows since I was 4 in 1993. We had a Windows 3.1 box. I've worked in IT for a decade and I still do, but I have next to zero Linux experience.

How ... how does anyone do this? I tried to install Ubuntu server 16.04 raid 1 and every single step from partitioning on required googling and a restart of the entire process. I tried for eight hours just to get a bootable system on raid 1 and things just kept going wrong. Half the information I was looking up contradicted itself, documentation is incomplete and advice is anecdotal and missing important information. Screw it, I thought. I'll install desktop and get used to it before doing crazy stuff. Raid 1 was kind of a nice but not necessary thing. Surely a regular desktop install will allow me to learn and I can try again in a few months.

But holy sh*t, every single thing I want to do that would be as simple as "Google thing I want, Grab newest version from their website, Install or launch the exe" in Windows is a tedious stress-inducing headache in Linux.

As example: Google for a program to show sensor output like temperatures. Open hardware monitor looks cool. Oh it has dependencies. I don't know what mono is. Will it take up a lot of space or break anything else? Sh*t, I don't know. Oh, this forum post has another person trying to learn Linux and he wanted to use this program. Everyone is being rude to him. Oh, Linux can't interface with open hardware monitor very well. Why the f*ck was it the first answer on Google? There's no hardware sensor app like hwinfo for Linux? Okay, I'll search the Ubuntu apps for a temp sensor at least. There's only one. The only notes say that it needs something assigned in terminal to work. Why the f*ck doesn't the installer do that? Oh well, now I typed what it said to in terminal and it didn't take. I don't understand why. Oh, the official page on the app is misspelled for this command and I copied it directly. Okay, FINALLY I have a temperature sensor. And it doesn't display anything beyond the current core temp. Great.

As opposed to: "Google temp sensor. Find speedfan or hwinfo. Install. It runs."

Is the problem me? Is my windows brain just too stuck in a rut to understand why all this tedious BS is necessary?

I think at the least I need a decent explanation of why these are so different so I can maybe understand and work within my limitations better. Any guides I've followed are very straightforward "do ___ then do ___" so I haven't really learned anything about why Linux is the way it is, which seems necessary to functioning in it.

Thanks to anyone who read all that and can help.



From /u/zincpl on reddit 4/2018:
+/-
I just had to set up Linux on my new machine for work, took 4 different versions before it would actually install then started booting to a blank screen when I installed the software I needed, took me 2 days of non-stop frustration but now I can finally do something productive.

Basically IMO Linux shouldn't be compared with Windows or Mac, it's made by engineers for engineers, it's not designed to be user-friendly, rather it's designed to give power to the user and assumes the user knows what they're doing.

It really sucks that there isn't really anything between over-priced and underpowered macs with *nix power and free-but-held-together-with-duct-tape linux.



From someone on reddit 5/2018:
+/- So I stopped using [pirated] Windows a year ago since it was problematic. Buying is not an option. So I switched to Linux since it was free, open source, and I am a Science student so I thought it would be pretty useful. A year have passed and I am still a noob (was very busy with my exams already, learning Linux would have been a burden). I have a Dell Inspiron Laptop with Intel HD Graphics 5500, 4 GB RAM and 1TB Hard Disk. I have been switching distros and these are the experiences so far:
  1. Ubuntu 16.04 - Was good but it was a little slow. Plus it wouldn't detect my headphone half of the time.

  2. Elementary OS - Was extremely slow. Took 30 minutes just to boot to login screen.

  3. Return to Ubuntu 16.04

  4. Switching to Ubuntu 16.04 Budgie Remix - Was good. Better than the default Unity both in looks and performance.

  5. Ubuntu 16.04 Xubuntu - Thought this would be lightweight, so installed it. The performance was OK and the look was really bad.

  6. Ubuntu 17.10 - tried to install. My laptop crashed. Couldn't even get past booting screen.

  7. Switch to Ubuntu 16.04 - Performance became slower day by day.

  8. Ubuntu 16.04 Lubuntu - thought that my laptop is low spec, so why not switch to the lightest distro? Well, surprise, Lubuntu encountered issues. The screen flickered often, especially when coming out of suspend.

  9. Finally, now I am in Linux Mint 18.3 Sylvia - The performance is OKish, lags sometimes, hangs out of nowhere.

I will not talk about gaming experience, but in short it is awful.

So, those of you who are new to Linux, this is my message: be cautious before installing Linux and understand Linux very carefully. Linux, as an interface for personal use, is terrible.
Responses:
+/- Some advice: Slow down on switching distros, and find out where your performance bottleneck is by looking at your system usage. It could be the drivers you're using, or applications that aren't properly optimized to run on your OS. Dell offers some Linux driver support; look into that and see if you can replace some of the generic ones with Dell's suggestions.

...

Sounds like some poor configuration or hardware interaction (5400 rpm disk?)

...

The slowdowns and hangs are probably something to do with the disk. At a guess is it made by Seagate? They just love to stall for ages.

The other obvious hang is after doing a large disk write then flushing it to disk. There are a few tunables for this. I wish the distros would fix these by default, which is to limit the dirty cache relative to the performance of the disk. [A sketch of those tunables appears after these responses.]

...

Your problems are originating from "Intel HD Graphics 5500"
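[The "dirty cache" tunables mentioned above are the vm.dirty_* sysctls; a commonly-suggested starting point looks something like this, with illustrative values only - the right numbers depend on the disk:]
  # /etc/sysctl.d/99-dirty-cache.conf
  vm.dirty_background_bytes = 16777216
  vm.dirty_bytes = 50331648
  # Apply without rebooting:
  sudo sysctl -p /etc/sysctl.d/99-dirty-cache.conf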



From people on reddit 6/2018:
+/-
Re: Windows vs Linux:

Over the course of the past ten years, I have tried Ubuntu on three separate occasions, on three separate laptops. Each time, I ended up going back to Windows because I couldn't get Wi-Fi to work.

...

Linux is great if you're a dev. I've found that it hits hiccups any time you are trying to do something a bit more consumer oriented, and have to interact with the world of Windows and macOS systems, as well as proprietary software.

Linux was also so customizable, and you could set up some pretty impressive desktop environments, however if something went sideways it would be quite a bit of work to get it sorted. ...

...

Windows just works.

Linux is buggy and unstable, regardless of what people say (I'd rather use macOS over Linux everyday).

I've tried Linux multiple times and distros and never takes more than a day to find a major bug on the system or a problem with software.

...

It depends heavily on the hardware, just like Windows. It's also heavily distro and version specific. I haven't been able to get Fedora to boot from USB without failing in 10 years, but Ubuntu runs every time.

Laptops are another issue ... if you want a Linux laptop, you're best off buying one from System76, Pine64, or Dell/HP with Ubuntu pre-installed. Wireless support has always been iffy if you try to install it on a laptop that was designed for Windows.

...

As someone who uses Ubuntu quite often - The Non-LTS releases are effectively betas. The newer, bleeding-edge ones are there for those who want them, but you're a lot more likely to find bugs outside of the LTS release.

...

[Currently with LTS] the Ubuntu Store doesn't even work - a major bug that was reported on their channels.

...

I used to use Linux as my main OS. What happened is that I found I was needing to go into Windows more and more because of the lack of support of programs and hardware I needed for working which became a much more present issue in my life as I got older and spent less time casual computing. There were a lot of alternative software options for Linux but I found most of them to be unpolished and buggy. If you're okay and enjoy the whole troubleshooting aspect, then Linux might be right up your alley. I got to a point where I just wanted everything to work though and spend less time trying to make it work myself.



From someone on reddit 6/2018:
+/-
Linux can often break more frequently than Windows - no one likes to hear this and I'm sure people will say the problem is the user etc etc.

esp for rolling release distros, or point release when you do e.g. dist-upgrade, and other times with just regular updates, things can break.

With Linux it then becomes a cycle of 'hope you can find the answer on google, try it in terminal, see if it's fixed, try something else' unless you are an expert. This is because of package dependencies in Linux: if you break one, others break too. Often you need to compile from source, etc.

Windows has its own version of dll hell, but each program gets its own dependencies managed via WinSxs so you can't get global breakage due to a package. People will tell you that Windows Updates can cause problems but that's really rare - they can be slow though.

You get all the benefits of open source, choice, no ads etc but lets dispel a myth - Linux isn't any more performant or stabler than Windows 10. Windows is rock solid stable, supports every hw ever made and is very fast. It also has better battery life (I've tried both powertop and tlp).



From someone on reddit 11/2018:
+/-
I love Ubuntu, but have no more time to resolve the endless bugs it creates.

I adore Linux (Lubuntu is my current distro of choice) and have been using it for more than ten years. It has taught me a ton about how computers work and even created some professional opportunities for me writing about tech.

But as an increasingly busy small-business owner, I no longer have an hour a day to spare sifting through the endless amount of bugs that the OS throws up and am reluctantly about to switch to Windows. I love customization, but at this point in my life I also need something that just works and doesn't impair my productivity.

This week alone:
  • Oracle VirtualBox has become basically unusable for me. I launch it and just see a black box basically. Some weird theme-related bug that even the good folks over on AskUbuntu have been unable to help me resolve. I don't trust VMWare not to randomly break down again, as it has done in the past, so like keeping a VM on VirtualBox as a backup. Now I've zero backup and there's a good chance that I won't be able to run a Windows VM at all at some point in the near future.

  • Simple Screen Recorder no longer works. The Continue button is missing. I spent hours trying to install the very-latest version only to continuously run into problems compiling the package with Cmake.

  • Shutter has taken to not starting on system launch and occasionally crashing the system.

  • Pulseaudio has mysteriously decided to stop recognizing Chrome as an output stream, meaning that although I can connect my Bluetooth headset through Bluetooth Manager, I can't switch audio over to it - at least with this GUI.

  • Autokey has been great except when I try to add a new Unicode-based phrase, which crashes the whole system. I've wasted hours trying to come up with workarounds and attempting to debug with people on its users' Google Group.

I'm certain that there are a few more. And that if I knew more about Linux, or had more time to devote to resolving these issues, that I could fix some of the above. But I don't feel like I should have to.

Why do things have to be like this? It occurred to me yesterday that I would be more than happy to pay an annual subscription to a service that both guaranteed a level of customization that neither Windows nor MacOS offers, but also had some inherent stability so that bugs like this aren't par for the course. I'm not a poor student any more. But I still love Linux and the philosophy that underpins it.

Or perhaps asking for both stability and what we love about Ubuntu is chasing after the impossible.



From /u/deadbunny on reddit 11/2018:
+/-
... the Mint devs do many things badly.

Rather than type out a long reply here is a Debian dev explaining it:

"Linux Mint is generally very bad when it comes to security and quality.

First of all, they don't issue any Security Advisories, so their users cannot - unlike users of most other mainstream distributions - quickly lookup whether they are affected by a certain CVE.

Secondly, they are mixing their own binary packages with binary packages from Debian and Ubuntu without rebuilding the latter. This creates something that we in Debian call a "FrankenDebian" which results in system updates becoming unpredictable. With the result, that the Mint developers simply decided to blacklist certain packages from upgrades by default thus putting their users at risk because important security updates may not be installed.

Thirdly, while they import packages from Ubuntu or Debian, they hi-jack package and binary names by re-using existing names. For example, they called their fork of gdm2 "mdm" which supposedly means "Mint Display Manager". However, the problem is that there already is a package "mdm" in Debian which are "Utilities for single-host parallel shell scripting". Thus, on Mint, the original "mdm" package cannot be installed.

Another example of such a hi-jack are their new "X apps" which are supposed to deliver common apps for all desktops which are available on Linux Mint. Their first app of this collection is an editor which they forked off the Mate editor "pluma". And they called it "xedit", ignoring the fact that there already is an "xedit", making the old "xedit" unusable by hi-jacking its namespace.

Add to that, that they do not care about copyright and license issues and just ship their ISOs with pre-installed Oracle Java and Adobe Flash packages and several multimedia codec packages which infringe patents and may therefore not be distributed freely at all in countries like the US.

The Mint developers do not deliver professional work. Their distribution is more a crude hack of existing Debian-based distributions. They make fundamental mistakes and put their users at risk, both in the sense of data security as well as licensing issues.

I would therefore highly discourage anyone using Linux Mint until Mint developers have changed their fundamental philosophy and resolved these issues."

Source

Read the comments for more fun examples of how bad the Mint dev team are.

If you want to run a Debian-based system, run Debian or Ubuntu.

Edit: No they have not resolved any of these issues in the last few years since this was posted.

...

The main issue is that Mint doesn't care about security. To quote glaubitz again:

"On Debian, I open up Google and type "Debian CVE-2015-7547" and I am immediately presented with a website which shows me which versions of Debian are affected by the recent glibc vulnerability and which are not. You cannot do that on Linux Mint which therefore disqualifies itself for any professional use."

Due to the frankendebian issue mentioned in my previous post, and the fact that Mint uses Debian-compiled packages (they don't compile them themselves), they are reliant on Debian for any and all security fixes. If their frankendebian isn't compatible with the security patches made by Debian (due to dependency issues) then you have to wait for Clem et al. to actually patch it themselves. Given their history of rejecting patches and their general security stance, I don't have any faith in them to actually do things properly.

Mint also blacklists packages from updates; this means they won't get patched if there is a security update for them. While there is an option buried within Mint to allow these to update, this is not something a noob would be doing. This means your system could be vulnerable even when you think it's fully patched. That is unacceptable.

Mint's selling point is its ease of use; unfortunately that ease of use comes from the devs having a willful disregard of licencing issues. They ship their ISO files with pre-installed Adobe Flash and Oracle Java packages, as well as multimedia codecs (which people want), which violate intellectual copyrights and patents. Unless the maintainers of a distribution want to violate copyright laws intentionally and make themselves attractive targets for lawyers, there is nothing they can do to alleviate that. Debian and others don't omit those packages because they want to make life hard for their users; it's because they cannot, legally speaking.

(This is the reason Debian forked Firefox and Thunderbird and distributed them as Iceweasel/Icedove.)

In this respect Ubuntu actually has licencing agreements which allow them to distribute third-party software through their official third party repos without violating the license terms of the software.

Dedoimedo's "Linux Mint 19.1 Tessa - Adrift"



From /u/gordonmessmer on reddit 12/2018:
+/-
There's a class of reasons that I dislike Ubuntu specifically. Ubuntu has at least three completely different installers, all of which use different sets of preseed commands. Documentation for Canonical's own installers is pretty bad. Automating Ubuntu installs for a large environment can be difficult, as a result. I think Canonical is a bad community member, with a history of competing with the community rather than contributing. They repeatedly offer applications which aren't as well supported as an application developed by the broader community, and then after a few years, shut it down. (Examples: Mir, Unity, bzr, probably snaps). If I build something new on top of a solution from Canonical, I'm probably going to have to rebuild it from scratch in a few years' time. Partially as a result, if you look at contributions to almost any major software project for GNU/Linux, Canonical is either very small, or absent completely. They're more of a consumer of Free Software than they are a contributor.



Lots of people say that closing the lid of a laptop to make it sleep, and opening to revive it, doesn't work well on Linux. Seems to be a common problem.
Linux laptop sleep
Apparently there is a long-standing problem with Linux reacting VERY badly to "RAM is nearly full": reddit post
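A workaround many people settle on is a userspace out-of-memory killer such as earlyoom, which is packaged in most major distros; a minimal setup on a Debian/Ubuntu-family system looks roughly like:
  sudo apt install earlyoom
  sudo systemctl enable --now earlyoom    # start it now and on every boot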

Apparently there is a long-standing problem with Ubuntu and the ~/.Xauthority file that results in people unable to login.



From /u/TheChosenLAN on reddit 2/2020:
+/-
I was a full time Linux user for over a year (even bought a Dell Precision 5530 with Ubuntu preinstalled, to support the movement and have HW fully supported by Linux). But after all that time I had to go back to Windows. I just got really tired of tinkering around with my system. I just want something that is standardized and works fully out of the box.

On the effing Laptop, those were some gripes I had:
  • I used my machine as primary machine on work. That means I needed often go to meetings with the machine, internally and externally. Just take your Laptop, close it and throw it into a backpack, right? Noooo. Suspend doesn't work reliably, even after countless tweaks from the internet. Often times it initially appeared as if it was fully suspended, but after a couple of minutes it turned itself back on again. In my backpack. In a sleeve. It overheated so bad, I couldn't touch it for more that a fraction of a second without burning my fingers.

  • Wanna watch a movie over BT speakers? Well, sometimes BT worked, sometimes not. Oh and also it liked to turn the screen off after the normal timeout, despite me watching the movie in full screen on VLC.

  • Wanna have proper RAW image previews in the file manager? Download this package from the official repo. Oh, it's broken and crashes every time. Just manually install a newer version from a ppa and manually configure the thumbnailer service.

  • Oh you tried to use more RAM than the System has? Apparently the OOM manager in the kernel is buggy and without additional software like earlyoom, your system will just come to a screeching halt.

  • Oh hey, you wanna use more up-to-date packages than those from the standard repos? Either use a rolling-release distro, which might break more often (I tried to boot Manjaro and it crashed on hecking boot, even with failsafe graphics enabled). Or use a more recent version of (for instance) Ubuntu, which comes with GNOME, which doesn't work anywhere near as smoothly as Unity7 and also has random crashes for some reason. Oh, and the touchpad tapping doesn't register as reliably as on Ubuntu 16.04 on any other distro, even Ubuntu ones. Oh, and don't get me started on KDE. I don't like its default appearance and I don't want to spend a couple of hours tinkering with my system to not have it look like trash.

  • Use a standard Logitech MX Master on your laptop over BT? No problem works fine. For 5 months. And then it suddenly starts lagging extremely with no remedy besides reinstalling the entire OS (or maybe debugging it for a couple of days but I don't have time for that or any interest).

  • Let's try out ElementaryOS since aesthetics of my system is important to me and it looks promising. Boot the live CD. Whoopsie, when I click 'reboot' nothing happens and when I click 'shutdown' I get a kernel panic. The heck?

  • Copy lots of large files to a USB drive. The progress bar moves instantly to 99% and you have no clue how long it is going to take. Also, the progress bar may reach 100% and the file manager say "operation completed", but when I try to safely eject the USB drive it takes another 10 minutes (!) before I can unplug it, since only about 10% of the data has actually been copied. During that time, a lot of file operations will be painfully slow or just not begin at all, since the disk scheduler or whatever is pinned at 100%. Apparently it's an issue with the buffer sizes or something. So just copy some configuration options into your sysctl.conf and now it actually works [those settings are sketched after this quote]. But now some copy operations take waaay longer than before (even accounting for the 10 minutes of additional waiting time) and always more time than on Windows.

Don't get me wrong. I still have a soft spot for Linux and think it is promising. And I fully understand and support you if you are running it on your own systems. I love it as a software development platform. But I'm just tired of tinkering with my system and just want it to work, while having it not look like trash, have recent up-to-date software available, and be stable. Windows has its own slew of issues, but none of them are as nagging as the Linux ones. At least for me.

Disclaimer: Two months ago I had to swap my mainboard, because apparently the Intel GPU was defective (maybe an effect from the heat incident, who knows) and Windows (I was dual-booting at that stage) kept crashing because of it. So it may very well be possible that some of those issues appeared because I had a bad mainboard. But the thing is, I only discovered it because Windows clearly stated the crashing module in the BSOD - so I very quickly found the culprit. I have no idea if I would have found the source if I stayed purely with Linux.
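
[The "copy some configuration options into your sysctl.conf" fix mentioned above usually refers to shrinking the kernel's dirty write-back buffers, so the progress bar better reflects what has actually reached the drive. A sketch, with illustrative values only:
    sudo sysctl -w vm.dirty_background_bytes=16777216   # start background write-back at ~16 MB of dirty data
    sudo sysctl -w vm.dirty_bytes=50331648              # block writers once ~48 MB is dirty
    # to make it permanent, put those two settings in a file under /etc/sysctl.d/
]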





Summarized from Neocities' "What Linux struggles with", and then more from the author:
+/- Some flaws in Linux [I omitted some items which are outdated IMO]:
  • Lack of video game support.

  • No error feedback: "When you run a program through the panel or start menu in Linux and it fails for some reason, you are not notified at all. You have to run it through the terminal if you want to see the error messages ..."

  • Software installation: "packages - of which there are many variants, all incompatible with each other."

  • No actual firewall: "In Linux, any application can connect whenever and wherever it wants to, while you are none the wiser. ... Windows has had better firewalls such as ZoneAlarm for a very long time ..." [A partial, port-based workaround is sketched after this list.]

  • And of course, choosing a distro is a struggle in itself that Windows users don't have to deal with.
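
[A stock install has no per-application outbound firewall, but a partial, port-based workaround with ufw looks like this; true per-application control needs something like OpenSnitch instead:
    sudo ufw default deny outgoing    # block all outbound traffic by default
    sudo ufw allow out 53             # allow DNS
    sudo ufw allow out 80,443/tcp     # allow web traffic
    sudo ufw enable
]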

[In email 10/2019:

Linux security is pretty much an illusion. An application can do what it wants in the folders it has permissions in - which usually is your whole home folder. Many distros run sshd by default on startup, which allows any shmuck to try to crack your password. And some distros have really weak default passwords for root, which presents a real danger. I actually had it happen recently; I guess I didn't even realize the root user is enabled on Slackel. Why the f*ck would you have sshd on by default, though? It provides nothing but an entry point for hackers.

My article is old and since writing it I've had way more stuff to add in:
  • GTK3 apps don't look the same as the GTK2 ones. Add Qt on top of that and you've got three different looks. Terrible.

  • Editing the bootloader is a nightmare.

  • PS Vita [Sony PlayStation Vita] does not properly work with Linux.

  • Neither does Nintendo Switch ...

  • Or smartphones.

  • Certain applications use different save dialogs than the system-wide one. Which means your bookmarks will be ignored (I think it's the GTK3 vs GTK2 issue again, but not sure).

Many, many more. Windows is still worse though, so whatever. Not that it justifies this stuff, just shows how much of a swamp we're in.

]



From /u/DistroHopper101 on reddit 8/2020:
+/-
I went from macOS to Ubuntu last year, then came back to macOS. This year I gave a chance to Arch Linux and loved it! Stayed with it for about 4 or 5 months then ... I came back to macOS.

When it comes to specific desktop usage some things are really off on Linux. Some points that made me switch back:
  • The lack of good (imo) proprietary software. Apps like Devonthink, Banktivity, Omnifocus and Logic are really polished and unfortunately mac-only. Office apps and Adobe Suite are a huge deal too. This won't affect you if you don't care for non-FOSS apps.

  • Xorg is a hot mess. Remapping keyboards in Linux is HELL! The following is based on personal experience: I've spent a whole week and a few more days learning about setxkbmap, xkb, xcape and xmodmap. Set up everything? Good! Want to switch from X to Wayland? Goodbye to all your cool keyboard hacks that you spent hours (maybe days?) programming. Nothing I know of even comes close to Karabiner-Elements. It actually got to the point that my keyboard workflow on macOS is way more productive than using i3/awesomewm. Example: my Caps Lock is a 3-mode modifier: Esc when I tap and release it; Command+Control when I hold it (this in combination with any other key becomes any function you want); and when I double-tap and hold it, it becomes Control (very useful in vim, since this gives me Esc and Control instantly). Regular keys can act as modifiers without disrupting their normal function. E.g. if I press the "S" key it outputs the letter "S", but if I hold it and press h,j,k,l it controls the arrow keys. The "D" key in combination with h,j,k,l moves my mouse cursor. A sane way of making custom dead keys for accessing common characters I use when programming, and the list goes on. [The X11 half of the Caps Lock trick is sketched after this list.]

  • GUI fragmentation. I use the terminal a lot, but Apple set really good Human Interface Guidelines for graphical applications. It makes the experience of using macOS more polished than the constant battle of GTK vs Qt on Linux.

  • macOS has a great integration with iOS. I hate this phrase but things just work.
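
[For what it's worth, the X11 half of that tap-for-Esc / hold-for-Ctrl Caps Lock trick is usually just two commands (under X only, not Wayland), something like:
    setxkbmap -option caps:ctrl_modifier   # Caps Lock acts as an extra Ctrl while held
    xcape -e 'Caps_Lock=Escape'            # ...and sends Escape when tapped on its own
]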


Carlos Fenollosa's "Fed up with the Mac, I spent six months with a Linux laptop. The grass is not greener on the other side"



From someone on reddit 11/2020:
+/-
So after seeing posts and comments along the lines of "I installed Linux on my grandma's PC and she's never been happier", I decided to try out Linux on the desktop, and boy did it make me appreciate Windows more.

So off I went to get Ubuntu. Right off the bat my wifi radios were dead. No issue, let me just troubleshoot it in Settings. But no, Linux has to use the fancy-pants terminal. Honestly, why in 2020 am I expected to edit a text file with a bunch of commands just to get my wifi to work? If Linux is really ready to go mainstream, why does support use the terminal so much?

They tell you that Linux runs on everything, but that certainly isn't the case; there are compatibility gaps, and telling people that it runs on everything is dangerous, since people will only realize it after they've installed it. It's not just the community; even Canonical, the company that runs Ubuntu, said the same thing in their post.

Then there's the issue with peripherals: most things should work, and they do, but if you have anything a little bit specialized it doesn't. In my case the printer worked but the scanner did not, and fixing all of these issues requires, you guessed it, going into the terminal.

Now this is fine if Linux is being pushed as an enthusiast OS, but it's not. All over the internet you see videos along the lines of "I switched to Linux and you should too", or comments along the lines of "I installed Linux on my grandma's computer and she's been so happy". A cursory glance across YouTube and blogs makes you think that the rest of the world is full of idiots for sticking with Windows.

But what they don't tell you is that Linux breaks (all operating systems do), and when it does you're gonna be up the creek without a paddle, because there's no customer support line that you can call. I don't understand how people can push Linux as a mainstream OS when there's no customer support. The best option you have is forum posts, but the people there have seen the same thing a million times and they ask you to read the effing manual (there are some great people on the forums), and forum posts are not a substitute for customer support. In the case of Windows or Mac you can at least call up a relative that uses the OS and get their help. But with Ubuntu you might be out of luck.

Let's go back to that grandma example for a sec. Zoom is pretty common right now, and gam gam wants to install Zoom on the Ubuntu system you so graciously installed. She goes to the (Linux) download page, and it needs you to put in the name of your distro, your architecture (64- or 32-bit), and the specific version; on other OSes it's as simple as hitting download and double-clicking the installer. I mean, you can walk your grandma through that process over the phone ("hey grandma, hit that blue download button, hit that icon at the bottom, and just click yes for everything else"), but on Linux it's more convoluted.

I mean, Linux has had decades to become a fully matured desktop OS, and at this point it just hasn't. And most of the benefits of Linux don't even apply to everyday users:

Security: IMO Windows is secure; if you don't do dumb sh*t on Windows you'll be fine, Windows Defender is gonna take care of you.

You can get Windows for free now from the Microsoft web site; sure, you can't see the source code, but most people don't care.

Privacy: sure, I'd love it if people were more privacy-conscious, but looking at how popular Facebook is, most people don't care about privacy (they should). But for me, I'd sacrifice a bit of my privacy to Apple or MS to have a good, reliable OS that does everything; if that's the price, then so be it.

Wrapping up: Windows and Mac cater to the user, they work for the user, whereas Linux expects the user to do all the work. If you have old hardware that you'd love to keep using, then by all means try out a Linux distro. But if you have modern hardware, remember the grass is greener on the other side.
Response:
+/-
This is what happens when you allow engineers to act as designers. Everything "works", but nothing is easy to use or configure.
Response:
+/-
And when you do have an issue, a swarm of angry, frothing at the mouth hobbyist programmers will emerge from the walls and take three paragraphs to call you stupid but not at any point engage with whatever problem it was that you were having.

I wanted to make a 'Linux Flaired User Bingo' card with spaces like:
  • Ultra specific use case that's only applicable to 17 people on the entire planet
  • Linked to a 15 year old github project with only 2 contributors and 34,000 pull requests
  • Jargon used that has no google search results
  • User laughs about Windows not being able to perform some vague, unclear task
  • Actual help!
  • Stallman quote
  • Post history is 10% linux, 10% windows, 80% /r/conspiracy
  • Unbelievably knowledgeable 3 day old account that posts three times in one thread and never again
Response:
+/-
Sending people to the terminal to edit config text files in Vi/Vim/Nano/whatever just to get things to work is insanity, and I just want to make this point: Anyone who says they never had to google how to use a terminal text editor is lying, and if you have to google how to use a text editor, your text editor is sh*t.

No one ever googled "How do I write text in notepad".





From someone on reddit 12/2020:
+/-
Re: Biggest problems / frustrations with Linux ?

Linux has been seriously going on my t*ts lately, so...
  • Localisation is a f*cking mess. Some apps seem to use the system language, some seem to use the formatting locale, even though it's not set as a LANGUAGE but only as a format. And some just use whatever f*cking language you have installed, regardless of priority. If you're 100% English, it's fine, but if you mix locales or want to use additional languages, you'd better be ready to jump through some hoops.

  • Japanese input (and other Asian scripts, apparently) is an unmitigated disaster.

  • If I see one more app being described as "lightweight", I'm going to scream. It's just code for "awful UI". I have 32 GB of RAM and a 2080 Super. I don't need my PDF reader to be "lightweight", I need it to have enough buttons to be usable and not look like it was made in 1992.

  • Windows is like a pile of building blocks. It's kinda messy and inconvenient but if one block is bad, you just pick it out. Linux is like a tower of building blocks. Much nicer to look at and more stable but God help you if one of the bottom blocks fails. If Windows fails, you just ignore the error message and keep playing games. If Linux fails, you better don't have anything else planned this weekend.

  • Sometimes, sh*t just goes wrong without rhyme or reason. I wasn't even changing the volume, why did the sound stop working all of a sudden?

  • The community is obsessed with the command line. Rearrange PDFs with a GUI-less command-line tool? Have you been smoking crack? [An example of exactly that follows this quote.]

On the other hand, I have to say that gaming on Linux works much better than I thought. I just wish there was a Linux version of Oculus Link.
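
[For the record, the kind of command-line PDF rearranging being complained about looks something like this, using pdftk as an example (file names are hypothetical):
    pdftk in.pdf cat 3 1 2 4-end output rearranged.pdf   # reorder pages: 3, 1, 2, then the rest
]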





Reporting Bugs



I'm struggling with bug-reporting in Linux.



Different parts have different procedures

+/-
  • Kernel: mailing lists (multiple) only; you have to figure out who the component maintainer is, and what the mailing list is (the kernel's get_maintainer.pl script, sketched after this list, helps with that).

  • Some projects/distros (e.g. Mint): unified bug-tracking and feature-requests and source-control (e.g. on GitHub), but with dozens or hundreds of components, and you have to figure out the right component to file against. Many component areas are stale or inactive or placeholders.

  • Some projects/distros (e.g. Ubuntu): separate strategies for bug-reporting (Launchpad, using an Ubuntu One account) and feature-requests (mailing list).
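
For the kernel, at least there is a script in the source tree that tells you whom to contact; for example (the file path is just an example):
    cd linux                                                # a kernel source checkout
    ./scripts/get_maintainer.pl -f drivers/usb/core/hub.c   # prints the maintainers and mailing lists to CC for that file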




A given part may have a huge "stack" and you may have to figure out exactly where to report

+/-
Example: Pix app in Linux Mint 19 Cinnamon:
+/-
Not really in linear order, there are forks in here.
  • Pix app.
  • Part of Mint Cinnamon distro.
  • Part of Mint family.
  • Part of Ubuntu family.
  • Part of Debian family.
  • Part of XApps project.
  • Pix app is forked from gThumb app.
  • gThumb is part of GNOME project ?
  • Built on top of GTK ?
  • Built on top of X windowing and glibc ?
  • Built on top of Linux kernel.

Example: GNOME desktop in Linux Ubuntu desktop 20.04:
+/-
Not really in linear order, there are forks in here.
  • Desktop icon.
  • GNOME "Icons" extension.
  • GNOME desktop.
  • In Ubuntu 20.04 distro.
  • Part of Ubuntu family.
  • Part of Debian family.
  • Built on top of GTK ?
  • Built on top of X windowing and glibc ?
  • Built on top of Linux kernel.




From someone on reddit:
+/-
freedesktop.org is a project which aims to reduce the fragmentation of the Linux desktop. They work on interoperability and "host" software such as systemd and Wayland. It used to be called the X Desktop Group (XDG), but now they are killing off X11 ("death of Xorg" will be beneficial for the Linux desktop as a whole), so they "rebranded" themselves. GNOME and KDE work with them.

You don't send bug reports about anything to them. You can discuss "standards" stuff, e.g. new wayland protocols on their mailing lists.
...
GTK and Clutter are GUI libraries developed by the GNOME team. Qt is a GUI library developed by the Qt Company (KDE uses it). These libraries are used by various GUIs. Usually, the programmers using them are the ones who file bug reports about them.





My experience with Linux



I'm pretty happy with Linux, and Linux Mint and Ubuntu family. But there are issues ...

My issues with Linux in general

  • Install process is much too difficult for users moving from Windows or Mac. See "Linux Installer idea" section of my "Installing Linux" page.

  • Updating is done N different ways by apps, services, OS. Also too many package managers. See "Updating" section of this page.

  • Security mechanisms should have a unified GUI. See "firewalls / app control / security is a bit of a mess" section of my "Linux Network and Security Controls" page, and maybe a bit of the "Secure because Linux" section of this page.

  • Fragmentation: too many distros, mainly. See "Fragmentation" section of this page.

  • GUI inconsistency. Sometimes it seems like every app has its own style of file-open dialog. Maybe there are styles KDE/Qt, GNOME/GTK, Electron, Java, then roll-your-own as in Thunderbird and Firefox ? Is there also a "portal" dialog style (set GTK_USE_PORTAL=1; a one-line example follows this list) ? Not sure. [Apparently GTK and Electron (version 12 and above) apps have the option to use system-native dialogs.]

  • Reliability. In Linux Mint 19.0 I had system freezes (locked up solid) until I removed Synaptics touchpad driver. Ubuntu GNOME 20.04 and MATE 20.04 were okay, but then in Kubuntu 20.10 I have freezes again [turned out: likely hardware problem]. Fedora 34 KDE (on new hardware) is good.

  • Some operations just cannot be done in Linux (lack of software). Setting my HP printer to use Wi-Fi. Handling PDF documents that have XFA forms in them. I always have to have a Windows machine available.
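
For the dialog issue: asking a GTK app to use the XDG desktop portal file chooser is supposedly as simple as setting an environment variable, assuming xdg-desktop-portal and a backend are installed ("gedit" here is just an example app):
    GTK_USE_PORTAL=1 gedit    # GTK then asks the portal for the system/native file chooser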


My issues with Linux Mint 19 Cinnamon in particular

  • Apps in the repo sometimes are ancient versions, I guess because the Ubuntu LTS repo is being used. The custom default Mint apps such as Pix are current, I think.

  • Removing a USB drive is much more sensitive than in Windows; it's easy to cause a FAT* filesystem to become "dirty" (a quick repair sketch follows this list). And then the Nemo file explorer doesn't report an issue when you mount a dirty filesystem, which is VERY bad behavior.

  • In Linux Mint 19.0 Cinnamon, I had "UI freezes" (underlying OS still running) or "complete freezes" (all dead) until I stopped using the Synaptics touchpad driver.

  • In Linux Mint 19.3 Cinnamon with 5.3 kernel, I'm getting occasional freezes again. Sometimes under high load using VeraCrypt (not sure if relevant) and an external disk, Nemo will crash or the system will freeze.

  • In several apps, including the standard apps xed and pix, printing to a European A4 printer does not work properly. If the document has content starting at the left edge, that edge will be cut off by the edge of the paper when printing. The apps involved have no "margin" settings in their print dialogs.
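
A manual check/repair of a dirty FAT filesystem is roughly this (the device name is an example; check with lsblk first):
    lsblk                         # find the USB partition, e.g. /dev/sdb1
    sudo umount /dev/sdb1
    sudo fsck.vfat -a /dev/sdb1   # check the FAT filesystem, repair errors, and clear the dirty flag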


My experience 4/2019 after using Linux Mint 19 and 19.1 Cinnamon for about 8 months

+/-
My opinion: installing / updating / package managers is a mess:
+/-
I'm not happy about the variety of package managers and installers you have to use. I would like to deal with only Mint's Software Manager and Update Manager apps, but I also have to deal with Flatpak, Docker, GitHub, apt, pip (Python), bundler (Ruby), tar, npm (Node), yarn, and more things I don't know the names of. Some of these operate at a different level than the others; I don't really know how they all relate.

Some apps (such as Atom) have different builds (of same release, I think) that work differently.

Updating is done in many different ways:
+/-
  • Through Update Manager.

  • Most apps that use plug-ins (e.g. Firefox, VS Code, Burp Suite, OWASP ZAP) update them inside the app, using some custom mechanism.

  • XnviewMP and Master PDF Editor check for updates internally and then you have to download and install them separately (not through Update Manager).

  • GNOME shell checks for extension updates and then you have to download and install them from the extensions site through the GNOME shell browser extension.

  • "Oh My Zsh" and npm check and update themselves at the CLI.

  • Foxit Reader and Thunderbird seem to check for and apply updates in a custom way.

  • Snap checks for Snap Store package updates four times each day and applies them automatically ? And the Ubuntu updater doesn't tell you what snaps are being updated or any details about the updates. (Some snap commands that expose this are sketched after this list.)

  • The anti-virus packages all install cron jobs to update signatures, some (Sophos) also update the AV app that way.

  • Some apps (Atom, KeepassXC, OWASP-ZAP, more ?) notify you of the existence of updates, but then you have to download the update or go to the home web site and download the update or do "apt update" etc to get them.

  • Some apps (Windscribe, more ?) notify you of the existence of an update and then stop working, until you update them through Update Manager or elsewhere.
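
Snapd itself can be asked what it has been doing; some commands that expose the automatic refreshes:
    snap refresh --time    # when the last and next automatic refreshes happen
    snap refresh --list    # which snaps have updates pending
    snap changes           # recent snap operations, including auto-refreshes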


I had hoped Linux would have a more rational install/update situation than Windows does, but it doesn't.

Causes of this:
+/-
  • Cross-platform apps find it easier to roll their own internal update mechanism rather than use the different mechanisms on Linux, Windows, macOS, wherever else they run.

  • Cross-distro apps that need to update their database (e.g. security apps) find it easier to roll their own internal update mechanism rather than build and submit update packages for each different distro family and repo.

  • Simple one-dev apps find it too burdensome to build and submit packages for each different package manager type and distro family and repo.

  • Apps with internal add-ons and add-on stores roll their own internal update mechanisms for add-ons.

  • Older apps and services built before there were updaters/stores just used cron, and continue to use it.



Flatpak - a security nightmare

On Mint Cinnamon 19:

I reported a series of Nemo crashes, and within days a dev had fixed it and put out the new version. Not going to see that on Windows.

Scrollbars too thin, and I had to try a series of hacks to get them wider.

Often it's unclear where to report a bug. Is it a Mint thing, or an Ubuntu thing, or a Debian thing ? A Cinnamon thing, a GNOME thing, a freedesktop.org thing ?

Often it's unclear where to tweak something. Is it a theme thing, or a Cinnamon thing, or a GNOME thing, or a Mint thing ? Some apps use GTK 2.0, others GTK 3.0, and the config files are separate, with different naming.

My little MP3 player devices don't work well with Linux Mint 19; they worked fine on Windows. Connect via USB cable and delete a file, Linux says it's gone, MP3 player says it's still there. Might be related to Linux not supporting formatting in FAT16 ? But I think it happened even before I resorted to reformatting my MP3 players to get rid of "ghost" files.

The upgrade from Mint 19 to 19.1 was done through Update Manager, but the update didn't appear in the normal window, instead somehow you were supposed to notice that a new item had appeared in the Edit menu of Update Manager ! But the update went smoothly.

My issues with Ubuntu GNOME 20.04 desktop

  • Installer still as much of a mess as Mint 19.0 Cinnamon installer was, when it comes to partitioning, encryption, and swap. I was doing the simplest case (wipe Windows, use whole disk for Ubuntu) and still had confusion, errors, no idea what swap settings I was getting.

  • Ubuntu GNOME desktop is primitive and limited compared to Mint Cinnamon desktop.


My experience with Ubuntu MATE 20.04 desktop

Pretty smooth, I don't remember many problems.

My experience with Kubuntu 20.10 desktop

  • Frequent system freezes. Sometimes a couple per day, sometimes none for a couple of days. Likely a hardware problem.

  • Dolphin not as smooth as Nemo/Nautilus, mainly when it comes to handling USB devices such as smartphones and encrypted disks.


My experience with Fedora 34 KDE desktop

  • Still some Dolphin glitches when handling USB devices such as smartphones and encrypted disks.




My 1/2019 response to "will Linux ever reach 10% share of the installed desktop OS market ?"

+/-
To me, a big barrier to people moving to desktop Linux is the bewildering number of variations. Hundreds of distros, a dozen ways of packaging applications (package managers, then Docker, Flatpak, Snap, Appimage, etc).

I would love to see some consolidation inside each of the major distros. For example, some way that all the Ubuntu flavors (including Mint) could become one Ubuntu, and then at install time you pick DE and theme and list of installed apps. Same among the other major variants (Red Hat, Arch, Slackware, Gentoo ?). That way someone moving from Windows or Mac really would be given 6 or 8 major choices, not 50 or 200.

And app developers and hardware developers and bug-fixers would have more focus, and less duplication of effort. Linux would get better and better.

...

Also, installation (partitioning and dual-booting) is a big barrier. Even with installers that try to make it easy, it's confusing. Certain options make things happen automatically, others require that the user specify the partitioning. I installed Mint, and it wasn't clear how to get a swap file instead of a swap partition; if I chose encrypted /home then I had to do partitioning manually; etc. And the user has to know whether they have BIOS or UEFI.





Miscellaneous



Artem S. Tashkinov's "Major Linux Problems on the Desktop, 2021 edition"
Artem S. Tashkinov's "Why Linux/GNU might never succeed on a large scale"
Stefan Orvar Sigmundsson's "Improvements to the Linux ecosystem"
Nick's "7 Unpopular Opinions about Linux"



Some Linux people seem to think it's enough that desktop Linux keeps improving. Somehow that will make it grab more market share. But Windows and Mac and ChromeOS and Steam/Proton and Fuchsia/Zircon keep improving too; they're not static targets. For example, on Windows, with WSL2 and VS Code, Microsoft is working hard to let developers never have to run desktop Linux. Apple's M1 and AirPods and Watch are very successful, showing that Apple is not stagnant.



My ideas for standardization

+/-
Are there standards for:
  • How a system UI theme is defined and where the settings are stored ? Today it seems the UI varies from app to app depending on whether it is built on Qt, GTK, Java, Electron, etc, and often does not follow the system theme.

  • Names of standard system apps: Settings, Update, Software Sources, Network Manager, Software Store, etc ? And within Settings, standard names for the top categories at least ?

  • Names of standard UI pieces: pane, panel, widget, workspace, desktop, launcher, dock, etc ?

    Standard naming would make it a lot easier to document/discuss/teach how to do things, in a somewhat distro-agnostic way.

  • Some things (secret agent, D-Bus) seem to have a well-defined low-level API but no high-level standard. For example, I think secret agent has no standard for the names of fields in a password manager entry. I think D-Bus has no notion of "default app for doing X" [I'm told "org.freedesktop.Secret.Collection.CreateItem" is how you tell a password manager to save a password; a secret-tool example follows this list]. I could be wrong about both of these things; I've only dabbled in each area. Also, somehow password managers (e.g. KeePassXC) should be equivalent to the "wallet" (e.g. KWallet, Seahorse) built into the DE. Again, I don't know much about this area, maybe I'm wrong.

  • Somehow it seems onion and other protocols or TLDs are not first-class citizens in the OS. Password managers know how to send HTTP or HTTPS links to default browser, but not how to send .onion URL to Tor Browser or Brave or wherever the user wishes. Similar for other protocols such as IPFS or I2P.

  • Some standard for major modules that could be shared across apps: spell-checker, syntax-highlighter, web-extensions (such as uBlock Origin). I have several browsers and several IDEs installed in my system. Why can't they all share a common set of modules that I install once and tweak once ?
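
As an example of the low-level API that does exist: libsecret's secret-tool talks to whatever Secret Service implementation is running (typically GNOME Keyring), but the attribute names ("service" and "user" below) are whatever the caller makes up, which is exactly the missing-high-level-standard problem:
    # store a secret (secret-tool prompts for it); the attribute names/values here are arbitrary examples
    secret-tool store --label='example entry' service example.com user alice
    # look it up again later by the same attributes
    secret-tool lookup service example.com user alice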

All of these interfaces and names could be standardized without losing "choice" and "freedom" in the Linux world. Standardizing them would reduce friction, reduce duplicate effort, make things easier for users.



John Paul's "What is the Difference Between the macOS and Linux Kernels"
Sohail's "Linux Kernel Vs. Mac Kernel"

Julio Merino's "Windows NT vs. Unix: A design comparison"



StatCounter's "Desktop Operating System Market Share Worldwide"



Windows has plenty of problems:
Den Delimarsky's "Windows Needs a Change in Priorities"



Benno Rice's "What UNIX Cost Us"
TL;DR: Many features of Linux (everything-is-a-file, C, do-one-thing-and-do-it-well) came from the past, and seem unquestionable. But things (hardware, internet, languages) have changed, and maybe there are better ways now. At least we should understand the history.

Timothy Roscoe's "It's Time for Operating Systems to Rediscover Hardware" (video)
TL;DR: The traditional model of a computer (PC or Vax, single uniform physical address space, as appears in textbooks) no longer applies; we don't have a single CPU and an OS that controls every detail of everything. We have SoCs and GPUs and BMCs and motherboards with many, many processors (and not just CPU cores) running loads of proprietary firmware (some of which qualify as OS's themselves), using many internal caches and internal buses and different physical address spaces. Much of this (even in the main CPU, the SoC) is completely hidden from the main OS. This has huge implications for security, for one thing. And features such as power management may be done in a low-level subsystem that isn't visible to the OS, and each has to guess about the other, each has its own set of policies. All kinds of bugs and unexpected behaviors can hide in such complex systems, and be hard to fix.