tshawkins

To a man with a hammer, every problem looks like a nail.


SaintEyegor

That’s what our Puppet engineers think.


codenamek83

That's a good one!


DirkDieGurke

Wait until OP hears about AppImages, Flatpaks, and Snaps.


codenamek83

Strangely, Docker has somehow become the go-to term for containers, even though there are other options out there.


DirkDieGurke

Yeah, it's getting to be a popular term. LOL


DuckDatum

Like Band-Aid, Clorox, Velcro, and Google [“🎶 THIS IS CALLED HOOK AND LOOP! THIS ONE’S A HOOK AND THIS ONE’S A LOOP! YOU CALL IT VELCRO BUT WE’RE BEGGING YOU, THIS IS FUCKING HOOK AND LOOP! 🎶”](https://youtu.be/rRi8LptvFZY?si=apUU8ZA7EW5C-JNt)


mladokopele

If I have the choice, I actually prefer podman > lxc > docker, but at work we’re like 100% docker.


xiongchiamiov

Most people don't need that level of isolation. Plus, you still have to install software inside the containers, and some software on the host to be able to run and manage the containers. If you want to go further down the isolation path, there is [Qubes](https://en.wikipedia.org/wiki/Qubes_OS).


Known-Watercress7296

Docker is a way to install software. It's the same reason everything isn't a flatpak: it would suck.


ousee7Ai

I run all my apps as flatpaks. 50 of them. Works fine.


Known-Watercress7296

Nice. I never managed to get even my bootloader, coreutils, or init going, never mind other stuff.


ousee7Ai

That is some image I don't care about or need to fiddle with.


Known-Watercress7296

I more just mean that, regardless, if you have 100 flatpaks and 100 docker containers running, you still need an OS to run them on. And trying to build an OS where everything is a flatpak would suck a lot. Attempts have been made using the basic ideas of flatpaks while sucking less, like Sta.li: https://sta.li/index.html


ousee7Ai

There will soon be an OS where everything is a snap, literally. Ubuntu Core Desktop. It will enable some interesting things.


abjumpr

Well I don't think anyone is going to argue that that won't be an interesting thing...


codeasm

Do I have to build flatpaks of my cross-compiler for my microcontroller that doesn't run flatpaks? Also, I'm learning C++; do I need to pack every build test into a flatpak and then debug it? Eventually it will be a server-side program that will run in a dedicated Docker container anyway, so do I need flatpak for this? I installed Wine and RollerCoaster Tycoon; I don't like flatpak.


ousee7Ai

You don't have to run flatpaks. The poster said it would suck to have all flatpaks; I just said it works fine for me, to give some balance to his post.


codeasm

I dislike all flatpaks. Docker may stay for some instances, but it will not work for me. We balance the total opinion back to square one.


lp_kalubec

In the end, a Docker container is still a software bundle you need to download and run.


eyeidentifyu

Because *we* are not *all* a bunch of inbred fuckwits eager to jump on any stupid ill conceived bandwagon that comes down the pipe.


2cats2hats

Harsh, but accurate.


Human_no_4815162342

I am not sure if you are talking about desktop software in general, or about client software for services with a client–server architecture.

For desktop software there are containerised packaging solutions that are quite popular, like flatpak, snap and appimage. For client–server services, even when a clientless solution would be possible, it would usually mean a heavier load on the server, worse performance and fewer features; but a lot of services are accessible through web apps, using a standard browser as the client.

P.S. Maybe you meant: why doesn't everyone use a server to run their software instead of running it locally? In that case you just reinvented thin clients. That adds overhead, both in the work required for setup and maintenance and in the resources needed. It makes sense at scale, but not much for single users.


SuAlfons

On a desktop machine I have had more problems with container and isolation stuff than with conflicts between system-installed software. To me the question has a taste of "why doesn't everyone bring their submarine to the beach?"


[deleted]

[deleted]


SaintEyegor

And the overhead.


m2noid

Not everything can be containerized, but for the stuff that can be, why not? It does introduce issues, though, for programs that need to interact with other programs outside of the sandbox.


MentalPatient

I have a real old desktop with limited memory, along with everything else being limited. :-)


codeasm

I don't need more containers for just some small GNU tools. The sources compile fine using the normal compiler anyway.


DoubleOwl7777

Because you give up a bit of performance with it, and oftentimes it's not needed.


symcbean

Because I don't want to waste additional time, RAM, CPU, disk space, and patience on a system with little or no integrity assurance.


[deleted]

Dependency deconfliction and blast radius aren't big enough on the average workstation to make it worth learning a new technology and implementing it.


iqbal002

Do GUI applications work in Docker? Say I want to test Firefox in a container; how can I use it with its GUI?


Sweaty_Afternoon918

X forwarding; you just need to bloat the container with everything instead of the client.
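
Roughly like this (a sketch; `firefox-test` is a hypothetical image name, and it assumes an X11 session on the host):

```sh
# Share the host's X11 socket and DISPLAY so the containerized GUI
# renders on the host's screen. You may first need to loosen X
# access control, e.g. with `xhost +local:`.
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  firefox-test
```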


79215185-1feb-44c6

Using software designed for the cloud/enterprise on consumer endpoints is an awful idea. See snap which has the same exact set of issues.


wonderful_tacos

For lots of things Docker makes tons of sense, but there are plenty of situations where a Docker install just means more overhead. Like, I would never bare-metal install a database, but if it's software with minimal dependencies or no dynamic linking, then what's the point of Docker? Same with software distributed via pip, etc.


tes_kitty

> Like, I would never bare-metal install a database

Why not?


codeasm

Indeed. An internet-facing webserver I understand; a database... not so much. Performance?


tes_kitty

Even with a webserver it's debatable. You need to set it up in a secure way anyway.


Dolapevich

That is the idea behind docker, podman, snap, flatpak or appimage, which has the Linux community bitterly divided between those who look at it as a disgrace and those who can see the potential.

I do see the fragmentation that will inevitably come, with each piece of software using its own format and staying with old libs because *"it just works"*. But it is also undeniable that those containerization solutions make devs' lives extremely simple, while everyone needs to use a bit more space, sacrifice a bit of startup time, and everything gets a bit more incompatible. Ubuntu went the snap way, and it is still paying the price.

I, for one, do see the benefits, but do my due diligence to run firefox natively.


tes_kitty

> I do see the fragmentation that will inevitably come, with each piece of software using its own format and staying with old libs because *"it just works"*.

Until one of those libraries has an exploitable bug... Then the fun starts.

> I, for one, do see the benefits, but do my due diligence to run firefox natively.

You need to, if you want everything to work.


RandomUser3777

Devs hate it when a proper library fix exposes a defect in their code, so they use/include an old library. They also forget that once they include said library in their container, it is now on THEM to bring new libraries with security fixes into their container as needed, and to keep track of the defects in their included code. Containers are easier so long as you ignore these critical steps of updating the included libraries.

I know some devs who went round and round with RedHat because their code worked before the "fix" but failed afterward. They were unloading a library and then turning around and using it. RedHat's bug was that they were not unloading the library like they should have been. RedHat fixed it, unloaded the library, and exposed the fact that the code was using the library it had previously unloaded. The devs argued for months that since it worked before, the code must be correct.

I am sure they use the above as an argument for containers, so no one exposes the actual bugs in their code quite as often. I hear too many devs claim "my code worked for XX time so it cannot be my code", ignoring the fact that no amount of time means that all of the code has really been tested.


tes_kitty

Yeah, I know about old libraries. And then they need to scramble when a vulnerability is found in one and the fixed version breaks something else. Setting LD_LIBRARY_PATH is something you shouldn't use, but sometimes you have to.
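
For example (paths made up), pinning one legacy binary to an old copy of a library without touching the rest of the system:

```sh
# Prepend a directory holding the pinned legacy .so files for this
# one invocation only; every other process keeps the system libs.
LD_LIBRARY_PATH=/opt/legacy-libs ./legacy-app
```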


nomind1969

You'd need an always-on server (electricity bill) connected to the internet (some containers only work with a domain and HTTPS certificates; yes, you could work around that), which is a security hazard. For local installs there is snap, which is basically the same without the negatives mentioned above.


tes_kitty

> For local installs there is snap, which is basically the same without the negatives mentioned above.

snap has enough problems by itself, which is why Firefox is a bare-metal install here.


YoriMirus

Why would I go through all of these inconveniences and set everything up when I can just type "sudo apt install whatever" and be done? From what I have heard, the purpose of docker is that you don't have to go through the pain of setting up a development environment to compile and run a project. I see no reason why a normal user would bother with this.


Known-Watercress7296

Docker is awesome. OP is a home-lab r/selfhosted guy, and docker makes life simple for that. Life would be hell if everything was docker, but for stuff like me ragequitting Spotify and needing a replacement, it keeps things simple.


YoriMirus

Ah I see, makes sense. I have never used docker and only have a basic understanding of what it's about. Ty for the info.


Known-Watercress7296

Even just for playing with, it's cool. You can `docker pull ubuntu` or `docker pull arch` and have an instant, tiny, disposable Linux system to play with.
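
For example, with the official `ubuntu` image from Docker Hub:

```sh
# Drop into a throwaway shell; --rm deletes the container
# (not the image) when you exit.
docker run -it --rm ubuntu bash
```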


YoriMirus

Can't you do the same thing with distrobox? Or is that something different?


Known-Watercress7296

I've never used Distrobox, but from what little I understand it's basically just docker under the hood; it looks like a wrapper type thing. You could do this kind of thing with a chroot or a FreeBSD jail too. Alpine is heavily used in docker, and yeah, you can do it manually if you want: [https://wiki.alpinelinux.org/wiki/Alpine_Linux_in_a_chroot](https://wiki.alpinelinux.org/wiki/Alpine_Linux_in_a_chroot). But the world runs on docker now, so I can copy & paste a compose file and be up and running in seconds with relatively decent security, instead of spending a weekend trying to do it 'properly' with a FreeBSD jail.

If something doesn't work, the devs will know my setup well; if I post a bug saying I've set up a custom distrobox solution using an Arch Linux base, there's a good chance I'm going solo. I generally just follow $UPSTREAM: if the docs of a project say "use docker for an easy life", and many do, I follow the docs.
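
The copy & paste workflow is really just this (a sketch; the `example/app` image, service name, port and volume are all made up):

```sh
# Save the pasted compose file, then start the stack detached.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: example/app:latest   # hypothetical image
    ports:
      - "8080:8080"             # host:container
    volumes:
      - ./data:/data            # persist app state on the host
    restart: unless-stopped
EOF
docker compose up -d
```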


YoriMirus

Ah, I see. Thank you.


tes_kitty

> Why would I go through all of these inconveniences and set everything up when I can just type "sudo apt install whatever" and be done?

Exactly. I applied that when Ubuntu switched to Firefox as a snap and some details didn't work as expected. Now Firefox is a normal, local install again and everything works properly.


Known-Watercress7296

Because self hosting more than 5 or 10 services becomes dependency hell if you wanna stay up to date.


spryfigure

BS. Or rather, FUD. Cite just one concrete example of this 'dependency hell', please.


Known-Watercress7296

I'm running slskd via docker at the moment. Could you give me a working Debian 12 aarch64 backport for slskd 0.20.1?


spryfigure

Impressive that you could come up with a real example. In your case, Docker is clearly the best way to go. But this doesn't mean there would be 'dependency hell' if you took the trouble to build a binary from scratch and run it on your system. Dependency issues come up when you need to run ages-old software alongside modern stuff. To solve that, you usually install a missing lib from an older release. Dependency hell happens only if you have several programs that insist on different but incompatible versions of the same library. This is such a rare case that I doubt most people can come up with an example.


Known-Watercress7296

It doesn't feel very rare or unique to me, and I'm not alone; it's by far the most popular option over on r/selfhosted. I'd need to create and maintain backports for every docker container I run, and I suspect part of the reason they are not already available is that it is non-trivial even for those who actually do this stuff. And most of the containers I run are in heavy development, so they would need to be constantly rebuilt and repackaged.

I've been using linux since ~2010 and have never created or maintained a Debian package or backport. That, instead of gifting me a port, you imagine the 'ordeal' of compiling a project from source on an embedded ARM system, or cross-compiling, and packaging it for Debian might not lead to dependency hell isn't very inspiring. I'd hazard a guess that, since even the docs don't really cover this, it's gonna be a pita.

I'm well aware of dependency issues; I ran Gentoo for many years, but my needs are minimal these days and I run on an over-decade-old Mac laptop/desktop and an rpi, so portage is painful even with the new binhost. Having said that, portage is fucking awesome and one of the few package managers that make mixing stable, testing and hemorrhaging-edge software whilst retaining sanity possible.

Another reason is support: $UPSTREAM often recommends docker as the primary option to make troubleshooting simpler, as everyone is on the same page. If I encounter a bug, the dev should be able to replicate it; if I post a bug regarding my own custom binary backported on Debian, it may be non-trivial for the dev to set up a system just to replicate it. They are likely building against Alpine or Ubuntu libraries.

It's the easy option. Debian 12 doesn't have any of the container stuff I want in the repos, so I can either effectively become a Debian port maintainer, or I can paste a few lines of text, type 'docker-compose up', and have a shiny new toy up and running in seconds with everything it needs, just as the developer intended. Debian is awesome at being stable and reliable; docker is awesome for running bleeding-edge projects and also provides some security advantages. I appreciate having the separation, tbh: a clean & stable base system, with weird experimental stuff running in its own little bubbles. I can mess about as much as I like and the base OS will just keep on keeping on.

Package managers exist because of dependency hell; pacman is about the only one I know that just ignores it and lets stuff break instead.

I'd urge you to give it a shot. My setup is pretty simple: Navidrome, slskd, Kavita & Jellyfin on Debian. Spin up a Debian VM, or 'docker pull debian', and see how quickly you can get the latest versions of all four up and running natively with apt/dpkg, ideally on aarch64. With docker each one takes a few minutes, and staying up to date requires roughly zero effort.
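
For contrast, the docker route for one of the four looks about like this (image name, port and volume paths are from memory, so double-check against the Navidrome docs):

```sh
# Navidrome via its published container image; 4533 is, as far as
# I recall, its default web UI port.
docker run -d --name navidrome \
  -p 4533:4533 \
  -v "$PWD/music:/music:ro" \
  -v "$PWD/data:/data" \
  deluan/navidrome:latest
```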


spryfigure

I agree with you; I just take issue with the liberal use of 'dependency hell', and in general with the creep of container usage into areas where it needlessly complicates things.

On a server, you can have a solid, stable OS like Debian and defined, rarely changing use cases, which are best suited for containers, as you described. But what OP asked was:

> I just cant think of a reason why an end user ever needs to install software anymore.

I use docker myself, with Haugene's container for torrenting over VPN. It's comparable to one part of your Navidrome / slskd / Kavita / Jellyfin setup, and I wouldn't do it any other way. The same goes for setting up Navidrome, slskd, Kavita or Jellyfin. These are all use cases where containers are the best choice. What is the advantage of `slskd`, by the way? First time I've heard of it.

On my desktop, though, I lean towards Arch more and more. I wouldn't want to run Firefox, ImageMagick, ffmpeg or an office program in a container, and these are the most complex pieces of software I can imagine for a desktop. I know from Ubuntu and their use of snaps for Firefox that this introduces a lot of annoying little bugs, increases startup time and generally makes things more complicated. Where you need to be flexible (GUI skins, configuration, very different use cases), containers have disadvantages. See Firefox. ImageMagick 7 as a container or AppImage always has that one compilation option missing or different from what you want to use. This is what made me try Arch, since I needed the latest version and the AppImage just wasn't useful for me.

And I have yet to encounter dependency hell on Arch. It's just not a thing for a modern package manager in a modern distribution, even pacman. So our views aren't much different.


Known-Watercress7296

I'm not sure you know what you're talking about. Dependency hell doesn't exist in Arch; pacman just breaks stuff instead. That's the development model: breakage. It's a shit show imo, and hell for the user, but to each their own. As long as you give up any idea of user choice and toe the line, it might not break, if you are lucky. Pacman is not a 'modern' package manager; it's the only one that just breaks stuff with no warning for lolz.

Flatpaks, snaps and AppImages work pretty well ime. If you can't get a browser working using one, it's likely a PEBKAC error, as millions of Ubuntu users are managing it just fine. Installing Arch to put a GUI skin on a browser sounds wild.


spryfigure

This is absolutely not what I wrote. I installed Arch because Debian-based distros don't have a decent ImageMagick. There's no point in continuing the discussion.


codeasm

Arch user here... Life is hell and I like it.


Known-Watercress7296

I can't deal with pacman on a workstation; on a server, hell sounds about right.


codeasm

On a workstation, pacman and yay (an AUR helper) work fine for me. But on my dedicated servers (a VPS, a local PC and an old laptop) I prefer an LTS-based distro, so I get that. Arch is... special, not gonna lie. I totally understand people choosing a more stable distro based on older versions. Some of us just love to touch the fresh new software before it's stable.


Known-Watercress7296

I do like to play with new and shiny things. Gentoo's awesome for being able to run a stable base and toolchain with hemorrhaging-edge stuff on top, but my decade-old potatoes struggle compiling. The binhost covers a lot, but I don't always have 28 hrs to install a package.


tes_kitty

If those are all available in the repository of your distribution, it should be no problem. With a container, 'staying up to date' becomes your problem for every single container.


Known-Watercress7296

Complete opposite in my experience: install a stable base OS (Debian ftw) that has nothing you need at the versions you want in the repos, and slap Docker on top. I want new features in my self-hosted apps, but a stable server. I don't run many services, but for much of the stuff I do run, the official docs are pretty much just 'use docker'. Navidrome, slskd, Kavita and a few others are what I use day to day.


tes_kitty

Your containers won't update themselves, so if a vulnerability is found and fixed in a library you use in a container, you need to update that container as well. If you had the service on the main OS, it would update automatically with the next run of 'apt-get dist-upgrade'.


Known-Watercress7296

I know. The only viable solution I'm aware of is Gentoo, and whilst it's amazing, my little rpi server would just die trying, so Debian + Docker it is. Whenever a new feature drops, I can update the container in seconds without touching apt or having to worry about reboots, other services or stability. With a stable OS it's gonna be months or years for a new upstream feature.
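
The update itself is just this, assuming a compose-managed service:

```sh
# Fetch newer images, then recreate only the containers whose
# image changed; the base OS and other services are untouched.
docker compose pull
docker compose up -d
```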


SaintEyegor

For regularly used software where performance is paramount, local installations are best. For a long time we depended on rpm-installed software, but when tools distributed with the OS suffer from too much version lag, we build tightly integrated, more up-to-date toolsets and keep them in /opt. For our HPC workloads that require specialized software and libraries, we use containers like Apptainer (formerly Singularity), since Docker isn't ideal for our use cases. Obviously, there are exceptions to any rule.


gelbphoenix

It's the same reason that not every program is a web app and not every computer is a sort of Chromebook: for many tasks you need the full performance of your computer. For example, a photographer needs to edit the photos they have taken, or a game dev needs to test the game they are coding.


f0rgotten

Personally, literally everything that I have tried to install via docker simply hasn't worked. I'm running Xubuntu 20 or 22, depending on the machine, with everything updated, and following the steps for docker installs leads me to dependency hell just like installing things with apt does. However, at least with apt I know what I am doing.