this post was submitted on 28 Apr 2025
What is Docker? (lemmy.world)
submitted 11 months ago* (last edited 11 months ago) by Jofus@lemmy.world to c/selfhosted@lemmy.world
 

Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious to host other things in the future like Immich or other services. I see a lot of mention of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do and why would I need it?

Also, what are other services that might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

[–] grue@lemmy.world 20 points 11 months ago (12 children)

A program isn't just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.

[–] Scrollone@feddit.it 7 points 11 months ago (14 children)

Isn't all of this a complete waste of computer resources?

I've never used Docker but I want to set up an Immich server, and Docker is the only official way to install it. And I'm a bit afraid.

[–] EncryptKeeper@lemmy.world 10 points 11 months ago* (last edited 11 months ago)

If it were actual VMs, it would be a huge waste of resources. That’s really the purpose of containers. It’s functionally similar to running a separate VM specific to every application, except you’re not actually virtualizing an entire system like you are with a VM. Containers are actually very lightweight. So much so, that if you have 10 apps that all require database backends, it’s common practice to just run 10 separate database containers.

[–] sugar_in_your_tea@sh.itjust.works 5 points 11 months ago

The main "wasted" resources here is storage space and maybe a bit of RAM, actual runtime overhead is very limited. It turns out, storage and RAM are some of the cheapest resources on a machine, and you probably won't notice the extra storage or RAM usage.

VMs are heavy, Docker containers are very light. You get most of the benefits of a VM with containers, without paying as high of a resource cost.

[–] possiblylinux127@lemmy.zip 4 points 11 months ago

Docker has very little overhead

[–] PM_Your_Nudes_Please@lemmy.world 4 points 11 months ago

It can be, yes. One of the largest complaints with Docker is that you often end up running the same dependencies a dozen times, because each of your dozen containers uses them. But the trade-off is that you can run a dozen different versions of those dependencies, because each image shipped with the specific version they needed.

Of course, the big issue with running a dozen different versions of dependencies is that it makes security a nightmare. You’re not just tracking exploits for the most recent version of what you have installed. Many images end up shipping with out-of-date dependencies, which can absolutely be a security risk under certain circumstances. In most cases the risk is mitigated by the fact that the services are isolated and don’t really interact with the rest of the computer. But it’s at least something to keep in mind.

[–] Nibodhika@lemmy.world 3 points 11 months ago

It's not. Imagine Immich required library X to be at Y version, but another service on the server requires it to be at Z version. That will be a PitA to maintain, not to mention that getting a service to run at all can be difficult due to a multitude of reasons in which your system is different from the one where it was developed so it might just not work because it makes certain assumptions about where certain stuff will be or what APIs are available.

Docker eliminates all of those issues because it's a reproducible environment, so if it runs on one system it runs on another. There's a lot of value in that. I'm not sure which resource you think is being wasted, but Docker is almost seamless, with so little overhead that you won't feel it even on a Raspberry Pi Zero.

[–] dustyData@lemmy.world 3 points 11 months ago

On the contrary. Containers rely on the premise of segregating binaries, config, and data, and since each one runs only a single app, it ships a bare-minimum system. Most container systems also include elements that deduplicate common required binaries, so containers are usually very small and efficient. While a traditional system's libraries can balloon to dozens of gigabytes, only pieces of which are used at a time by different software, containers can easily be made headless and barebones, cutting the fat and leaving only the most essential libraries, and fitting into very tiny and underpowered hardware without losing functionality or performance.

Don't be afraid of it, it's like Lego but for software.

[–] CodeBlooded@programming.dev 11 points 11 months ago* (last edited 11 months ago)

Docker enables you to create instances of an operating system running within a “container” which doesn’t access the host computer unless it is explicitly requested. This is done using a Dockerfile, which is a file that describes in detail all of the settings and parameters for said instance of the operating system. This might be packages to install ahead of time, or commands to create users, compile code, execute code, and more.

This instance of an operating system, usually a “server,” is great because you can throw the server away at any time and rebuild it with practically zero effort. It will be just like new. There are many reasons to want to do that; who doesn’t love a fresh install with the bare necessities?

On the surface (and the rabbit hole is deep!), Docker enables you to create an easily repeated formula for building a server so that you don’t get emotionally attached to a server.
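A minimal Dockerfile in the spirit of the description above might look like this (the base image, packages, and paths are illustrative, not taken from any particular project):

```dockerfile
# Start from a small base image (illustrative choice)
FROM debian:bookworm-slim

# Install packages ahead of time
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*

# Create an unprivileged user for the service
RUN useradd --create-home appuser
USER appuser

# Copy in the application and define how it starts
COPY --chown=appuser:appuser app/ /home/appuser/app/
CMD ["/home/appuser/app/run.sh"]
```

Rebuilding from a file like this always produces the same "fresh install," which is what makes throwing the container away and recreating it so cheap.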

[–] Black616Angel@discuss.tchncs.de 10 points 11 months ago (1 children)

Please don't call yourself stupid. The common internet slang for that is ELI5 or "explain [it] like I'm 5 [years old]".

I'll also try to explain it:

Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
It's called containerization and the developer can make a package (or container) with an operating system and all the software they need and ship that directly to you.

You then need the software docker (or podman, etc.) to run this container.

Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
This way the software can't destroy your system and you can't accidentally destroy the software inside the container.
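For example, in a hypothetical compose snippet (image name and paths are placeholders), only the host directory listed under `volumes:` persists outside the container:

```yaml
services:
  myapp:
    image: example/myapp:1.2   # hypothetical image
    volumes:
      # host path : container path — only changes here survive on your machine
      - ./data:/var/lib/myapp
```

Everything else the software writes stays inside the container and disappears when the container is removed.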

[–] entwine413@lemm.ee 3 points 11 months ago (1 children)

It's basically like a tiny virtual machine running locally.

[–] folekaule@lemmy.world 7 points 11 months ago (1 children)

I know it's ELI5, but this is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.

For example, containers are disposable cattle. You don't backup containers. You backup volumes and configuration, but not containers.

Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).

For self hosting maybe the difference doesn't matter much, but there is a difference.

[–] fishpen0@lemmy.world 4 points 11 months ago (2 children)

A million times this. A major difference between the way most vms are run and most containers are run is:

VMs write to their own internal disk, containers should be immutable and not be able to write to their internal filesystem

You can have 100 identical containers running and, if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running, and then only a small amount of the second container image (another layer) is taking up extra disk space.

Similarly containers and VMs use memory and cpu allocations differently and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self hosting unless you are trying to learn this to eventually get a job in it.

[–] edifier@feddit.it 9 points 11 months ago

..baby don't hurt me.. No more..

[–] Ozymandias88@feddit.uk 5 points 11 months ago

I don't think I really understood docker until I watched this video which takes you through building up a docker-like container system from scratch. It's very understandable and easy to follow if you have a basic understanding of Linux operating systems. I recommend it to anyone I know working with docker:

https://youtu.be/8fi7uSYlOdc

Alternative Invidious link: https://yewtu.be/watch?v=8fi7uSYlOdc

[–] Wytch@lemmy.zip 3 points 11 months ago

Thanks for asking this question. These replies are so much more helpful in understanding the basic premise than anything I've come across.

[–] xavier666@lemm.ee 2 points 11 months ago

Learn Docker even if you have a single app. I do the same with a Minecraft server.

  • No dependency issues
  • All configuration (storage/network/application management) can be done via a single file (compose file)
  • Easy roll-backs possible
  • Maintain multiple versions of the app while keeping them separate
  • Recreate the server on a different server/machine using only the single configuration file
  • Config is standardized so easy to read

You will save a huge amount of time managing your app.

PS: I would like to give a shout out to podman as the rootless version of Docker
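Since OP already runs Jellyfin, the single configuration file mentioned above could look roughly like this (paths are placeholders; check the Jellyfin documentation for the recommended setup):

```yaml
# docker-compose.yml — `docker compose up -d` starts it, `docker compose down` removes it
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    ports:
      - "8096:8096"          # web UI
    volumes:
      - ./config:/config     # server settings survive container recreation
      - ./cache:/cache
      - /mnt/media:/media:ro # your library, mounted read-only
    restart: unless-stopped
```

Recreating the same server on another machine is then just copying this one file (and the volume directories) over and running `docker compose up -d` again.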

[–] LittleBobbyTables@lemmy.sdf.org 2 points 11 months ago

I'm not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it's looking to solve. I'll try and keep it simple.

Imagine you have a computer program. It could be any program; the details aren't important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend's computer.

Reproducibility is really important in computing, especially if you're the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.

Docker helps massively simplify this dilemma by running the program inside a 'container', which is basically a way to run the same exact program, with the same exact operating system and 'system components' installed (if you're more tech savvy, this would be packages, libraries, dependencies, etc.), so that your program will be able to run on (best-case scenario) as many different computers as possible. You wouldn't have to worry about if your friend forgot to install some specific system component to get the program running, because Docker handles it for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this 'reproducibility' problem.

Docker is also nice when it comes to simply compiling the software in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means that the program won't compile. And then you'd run into the same exact problem where it compiles on your machine, but not your friend's. Docker can also help solve this problem. Not only can it dumb down a 30-step process into 1 or 2 commands for your friend to run, but it makes compiling the code much less prone to failure. This is usually what the Dockerfile accomplishes, if you ever happen to see those out in the wild in all sorts of software.

Also, since Docker puts things in 'containers', it also limits what resources that program can access on your machine (but this can be very useful). You can set it so that all the files it creates are saved inside the container and don't affect your 'host' computer. Or maybe you only want to give permission to a few very specific files. Maybe you want to do something like share your computer's timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.
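As an illustration of that permission model (the image name and values are just examples), sharing a timezone, exposing a port, and granting file access are all explicit, opt-in lines in the configuration:

```yaml
services:
  myapp:
    image: example/myapp        # hypothetical image
    environment:
      - TZ=Europe/Amsterdam     # share a timezone with the container explicitly
    ports:
      - "127.0.0.1:8080:8080"   # reachable only from the host, not the internet
    volumes:
      - ./app-data:/data        # the only host files the container may touch
```

Anything not listed here is simply invisible to the container.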

There's plenty of other things that make Docker useful, but I'd say those are the most important ones--reproducibility, ease of setup, containerization, and configurable permissions.

One last thing--Docker is comparable to something like a virtual machine, but the reason why you'd want to use Docker over a virtual machine is much less resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, but Docker is designed to be much more lightweight in comparison.

[–] PhilipTheBucket@ponder.cat 2 points 11 months ago (3 children)

Okay, so way back when, Google needed a way to install and administer 500 new instances of whatever web service they had going on without it being a nightmare. So they made a little tool to make it easier to spin up random new stuff easily and scriptably.

So then the whole rest of the world said "Hey Google's doing that and they're super smart, we should do that too." So they did. They made Docker, and for some reason that involved Y Combinator giving someone millions of dollars for reasons I don't really understand.

So anyway, once Docker existed, nobody except Google and maybe like 50 other tech companies actually needed to do anything that it was useful for (and 48 out of those 50 are too addled by layoffs and nepotism to actually use Borg / K8s / Docker (don't worry, they're all the same thing) for its intended purpose.) They just use it so their tech leads can have conversations at conferences and lunches where they make it out like anyone who's not using Docker must be an idiot, which is the primary purpose for technology as far as they're concerned.

But anyway in the meantime a bunch of FOSS software authors said "Hey this is pretty convenient, if I put a setup script inside a Dockerfile I can literally put whatever crazy bullshit I want into it, like 20 times more than even the most certifiably insane person would ever put up with in a list of setup instructions, and also I can pull in 50 gigs of dependencies if I want to of which 2,421 have critical security vulnerabilities and no one will see because they'll just hit the button and make it go."

And so now everyone uses Docker and it's a pain in the ass to make any edits to the configuration or setup and it's all in this weird virtualized box, and the "from scratch" instructions are usually out of date.

The end

[–] i_am_not_a_robot@discuss.tchncs.de 2 points 11 months ago (1 children)

Borg / k8s / Docker are not the same thing. Borg is the predecessor of k8s, a serious tool for running production software. Docker is the predecessor of Podman. They all use containers, but Borg / k8s manage complete software deployments (usually featuring processes running in containers) while Docker / Podman only run containers. Docker / Podman are better for development or small temporary deployments. Docker is a company that has moved features from their free software into paid software. Podman is run by RedHat.

There are a lot of publicly available container images out there, and most of them are poorly constructed, obsolete, unreproducible, unverifiable, vulnerable software, uploaded by some random stranger who at one point wanted to host something.

[–] PhilipTheBucket@ponder.cat 2 points 11 months ago

Are you saying I was being silly?

You might be onto something

[–] tuckerm@feddit.online 2 points 11 months ago

I'm an advocate of running all of your self-hosted services in a Docker container and even I can admit that this is completely accurate.

[–] Cenzorrll@lemmy.world 2 points 11 months ago (2 children)

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

This is pretty much what I've started doing. Containers have the wonderful benefit that if you don't like it, you just delete it. If you install on bare metal (at least in Linux) you can end up with a lot of extra packages getting installed and configured that could affect your system in the future. With containers, all those specific extras are bundled together and removed at the same time without having any effect on your base system, so you're always at your clean OS install.

I will also add an irritation with Docker containers: anything you create inside a container that isn't kept in a shared volume gets destroyed when the container is recreated. The container keeps the maintainer's setup. For instance, I do occasional encoding of videos in a HandBrake container, but I can't save any profiles I make within it, because they get wiped the next time I recreate the container, since they're part of the container and not on any shared volume.
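The usual workaround for that irritation is to map the directory where the app stores its settings onto a host volume. For a HandBrake container that could look something like this (the image name and exact config path depend on which image you use, so treat this as a sketch):

```yaml
services:
  handbrake:
    image: jlesage/handbrake          # one commonly used image; yours may differ
    volumes:
      - ./handbrake-config:/config    # profiles saved here survive recreation
      - /mnt/videos:/watch            # input videos from the host
```

If the image documents where it keeps its state, pointing a volume at that path makes the state outlive the container.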

[–] Professorozone@lemmy.world 2 points 11 months ago (7 children)

I've never posted on Lemmy before. I tried to ask this question of the greater community but I had to pick a community and didn't know which one. This shows up as lemmy.world but that wasn't an option.

Anyway, what I wanted to know is why do people self host? What is the advantage/cost. Sorry if I'm hijacking. Maybe someone could just post a link or something.


Docker is a set of tools that make it easier to work with certain features of the Linux kernel. These kernel features allow several degrees of separation between processes. For example, by default each Docker container you run will see its own file system, unable to interact (read: mess) with the original file system on the host or with other Docker containers. Each Docker container is, in the end, a single executable with all its dependencies bundled in an archive file, plus some Docker-related metadata.

[–] InvertedParallax@lemm.ee 2 points 11 months ago* (last edited 11 months ago) (1 children)

This thread:

Jails make docker look like windows 11 with copilot.

load more comments (1 replies)
[–] DieserTypMatthias@lemmy.ml 2 points 11 months ago

It's the platform that runs all of your services in containers. This means they are separated from your system.

Also what are other services that might be interesting to self host in The future?

Nextcloud, the Arr stack, your future app, etc etc.

[–] zer0squar3d@lemmy.dbzer0.com 2 points 11 months ago

Now compare Docker vs LXC vs Chroot vs Jails and the performance and security differences. I feel a lot of people here are biased without knowing the differences (pros and cons).
