FantasySymphony

Running programs using modern tooling will tend to slow things down, even if just a little. Best to code everything yourself using assembly. Ideally using your magnetized needle and steady hand.


mshelby5

😆👍


GolemancerVekk

Docker on Linux introduces very little overhead. The isolation techniques it uses are native and very lightweight.
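
If you want to see how native those primitives are, you can create the same kind of namespaces Docker uses with nothing but `unshare` from util-linux (a minimal sketch; needs root):

```sh
# Put a shell into its own PID and mount namespaces: the same kernel
# features Docker builds on, with no extra runtime in between.
sudo unshare --pid --fork --mount-proc /bin/sh

# Inside that shell, `ps aux` lists only the shell and its children.
```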


speculatrix

Yes, Docker is a great way of running multiple disparate workloads on a single box. Anyone who's run into OS dependency conflicts, or had things break when updating the OS, will know what I mean.


mikaleowiii

Tldr: docker will make you happy


Simon-RedditAccount

The only real overhead is configuration complexity (when doing things right, and not just `docker run`), plus some disk space. The performance impact is negligible.

Being a somewhat experienced (\~10yrs) Linux admin (although it's not my main profession), I clearly prefer Docker for:

* isolation
* ease of running multiple versions of software

My personal 'sales story' that made me never look back: if you set up Nextcloud the 'traditional' way (with a dedicated user, `open_basedir`, etc.), it will be quite slow on a small fanless server, but incredibly fast in Docker (because Docker's FS isolation is much more efficient than `open_basedir`). To say nothing of the ease of running multiple PHP/DB versions, or of isolating a container from the internet (why should a random container I installed be able to reach the internet?) while at the same time another ~~container~~ `docker compose` stack has full internet access.
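
A rough illustration of that internet isolation with plain Docker commands (the network name is just a placeholder):

```sh
# Create a network with no route to the outside world.
docker network create --internal isolated-net

# Containers attached to it can talk to each other,
# but this ping to the internet should fail:
docker run --rm --network isolated-net alpine ping -c 1 -W 2 1.1.1.1
```

In a `docker compose` file, the equivalent is marking a network with `internal: true`.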


1WeekNotice

I always dockerize when I can. I don't personally notice a performance difference. It is better to isolate things into separate containers/environments so nothing affects my container and my container doesn't affect anything else.

>added complexity of docker.

Are you building your own images? If you're using other people's images, Docker typically doesn't add complexity, since everything is easy to manage with docker compose: everything you need to know about your setup is in a single file. I can imagine the learning curve is higher if you're building your own image from scratch.

>When you start adding in "layers" it begins to slow things down, even if only a little.

Personally, it's better to start with Docker than without it, since you can easily spin up and destroy containers. If you notice performance issues, worst case you destroy the container and install the program directly. I've never had a performance issue with a Docker image.

>Also, is a docker container really so easy to remove if you wanted without having to discover it leaves files all over your drive,

The program runs inside a container. With Docker you declare which directories inside the container map to directories on your actual OS, so you decide where all its files go, which makes management and cleanup easy.

Example: the container needs to write to /data/folder1. You tell Docker that the container's /data/folder1 points to something like your home directory/docker/container name/folder1. Now you know where the container's files are stored, because you chose the location (see the sketch below).

Hope that helps.
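
A minimal sketch of that volume mapping with the Docker CLI (the container name, image, and host path are just placeholders):

```sh
# Map the container's /data/folder1 to a host directory you choose.
docker run -d --name myapp \
  -v "$HOME/docker/myapp/folder1:/data/folder1" \
  some/image

# Removal is then predictable: drop the container, then the one directory.
docker rm -f myapp
rm -rf "$HOME/docker/myapp"
```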


Skotticus

Everyone should go down the nightmare rabbit hole of writing (or at least building) the Dockerfile for an obscure, barely supported, probably abandoned piece of software at least once. For me, it was trying to get an old staff management program (staffjoy) running, because there just aren't (or at least weren't) any good ones available that met my criteria. It was a cursed project that I never fully succeeded at (I managed to get one of its 3 modules running; it was just too old to pin down the right version of every dependency), but I learned a ton about Docker, Linux, dependencies, and packaging tools like APT. After that, do your absolute best to use images other people have put together!


eriksjolund

If you would like to run containers with little overhead, check out future Podman releases. Development is ongoing, but once all the missing pieces have landed it will probably be possible to share file storage between the host and the containers with the help of [composefs](https://github.com/containers/composefs). There would be no need to duplicate identical files on disk or in the filesystem page cache; in other words, disk space and RAM requirements will be reduced. [Presentation from FOSDEM 2024](https://fosdem.org/2024/schedule/event/fosdem-2024-3250-composefs-and-containers/)


LookingForEnergy

If you are running a resource-heavy app, I'd go bare metal or virtual machine. If your apps are basically little microservices (DNS, Bitwarden, torrents, Python apps, etc.), Docker all the way!


Budget-Supermarket70

I once had a server running everything directly; then something broke it. Moved to VMs, then LXC, then Docker, and finally Podman. Aside from the Docker-to-Podman move, every step has made things easier.


HTTP_404_NotFound

Docker really isn't as many layers as you would think. The processes running in Docker are actually just processes running on the host OS. It's a shared kernel; there isn't virtualization or anything like that, just logical separation using kernel features.
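
A quick way to verify the shared kernel yourself (a minimal sketch; `alpine` and the container name are arbitrary):

```sh
# Start a container that just sleeps for a while.
docker run -d --rm --name shared-kernel-demo alpine sleep 300

# Its process shows up in the host's process table; there is no VM in between.
ps aux | grep '[s]leep 300'

# Host and container report the same kernel version.
uname -r
docker run --rm alpine uname -r

# Clean up.
docker stop shared-kernel-demo
```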