Curious to hear about the experiences of those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?
Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources (quick demo below).
Yes, I’ll die on this hill.
But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!
In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.
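To see the “containers are just processes” point from the top of this comment in practice, here’s a minimal sketch, assuming Docker is installed and using the stock nginx image (names are placeholders):

    docker run -d --name demo nginx
    ps -ef | grep '[n]ginx'               # the container's nginx processes show up in the host's process list
    cat /proc/$(pgrep -o nginx)/cgroup    # ...they're just parked in their own cgroup
    docker rm -f demo

(If you already run nginx on the host, pgrep will match that one too - it’s only an illustration.)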
…oh shit, the RAM is on fire.
The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.
Burn mothercucker, burn.
(Thanks phone for the spelling mistakes that I’m leaving).
Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.
The main benefit of Docker for home use is Docker Compose, IMO. It makes it so easy to reuse your configuration.
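For example, a minimal sketch - the image tag, host port and data path here are just placeholders, not a recommendation. One compose file is the whole deployment, so rebuilding or moving a box is copying the file and re-running one command:

    # ~/stacks/vaultwarden/docker-compose.yml
    services:
      vaultwarden:
        image: vaultwarden/server:latest
        ports:
          - "8080:80"
        volumes:
          - ./data:/data
        restart: unless-stopped

    # bring it up; on a new machine, copy the file over and run the same command
    docker compose up -d

Back up the stack folder (and the ./data volume) and you can reproduce the service anywhere Docker runs.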
I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn’t it be simpler if I could just run
sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as they do in Docker (if not better, given the lack of overhead)!

Both your examples actually include their own bloat to accomplish the same thing that Docker would: they both bundle the libraries they depend on as part of the build.
It’s not just libraries in a Docker image, either.
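Pull a typical image apart and you’ll find a whole miniature userland - a distro base, CA certificates, timezone data, the runtime, and then the app. A quick way to look, assuming the Debian-based vaultwarden/server image from the examples above:

    docker pull vaultwarden/server:latest
    docker image inspect vaultwarden/server:latest --format '{{.Size}} bytes in {{len .RootFS.Layers}} layers'
    # poke around inside the image's own filesystem
    docker run --rm --entrypoint /bin/sh vaultwarden/server:latest -c 'cat /etc/os-release && ls /'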
pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.
and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.
until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.
/uj not really but that’d be sick as hell.
Erm. I’d just say there’s no benefit in adding layers just for the sake of it.
It’s just different needs. Say I have a machine that I run a dedicated database on; I’d install it directly, because, as I said, there’s no advantage in making it more complicated.
Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs, right down to the bootloader.
The only constant is change.
“What is stopping you from” <- this is a loaded question.
We’ve been hosting stuff long before docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.
I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.
tl;dr docker is not an absolute necessity and your phrasing makes it seem like it’s the only way of self‐hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.
The question is deliberately vague, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!
What is stopping you from running HP-UX for all your workloads? The question is deliberately vague so that you’ll fill in what it means to you.
Pure bare metal is crazy to me. I run proxmox and mount my storage there, and from there it is shared to the machines that need it. It would be convenient to do passthrough to TrueNAS for some of the functions it provides, but I don’t trust my skills enough for that. I’d have kept TrueNAS on bare metal, but I need so little horsepower for my services that it would be a waste. I don’t think the trade-offs of having TrueNAS run my virtualisation environment were really worth it.
My router is bare metal. It’s much simpler to handle the networking with a single physical device like that. Again, it would be convenient to set up OPNsense in a VM for failover, but it introduces a bunch of complexity I don’t want or really need. The router typically goes down only for maintenance, not because it crashed or something. I don’t have redundant power or ISPs either.
To me, docker is an abstraction layer I don’t need. VMs are good enough, and proxmox does a good job with LXCs so far.
Why would I spin up a VM, and a virtual network within that VM, and then a container, when I can just spin up a VM?
I’ve not spent time learning Docker or k8s; they seem very much like tools designed for a scale that most companies don’t operate at, let alone my home lab.
KISS
The more complicated the machine, the more chances for failure.
Remote management plus bare metal just works: it’s very simple, and you get the maximum out of the hardware.
Depending on your use case, that could be very important.
It’s so simple that it takes so much less time. One day I may move to Podman, but I need to find the time to learn it. I host Jellyfin.
My NAS will stay on bare metal forever. Complications there are something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it (rough sketch below). The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I will probably migrate to small per-service VMs once I get new hardware up and running.
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe, and then just keeping away from the OCI ecosystem in general - the grass is far greener inside the normal package repositories.
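For anyone curious, the nspawn idea roughly sketched with plain Debian tooling rather than the NixOS setup described above (the “svc” machine name is a placeholder; install whatever your distro packages for the service, plus unattended-upgrades, inside it):

    sudo apt install systemd-container debootstrap
    sudo debootstrap bookworm /var/lib/machines/svc http://deb.debian.org/debian
    sudo systemd-nspawn -D /var/lib/machines/svc   # shell inside the new rootfs: install the service and unattended-upgrades, set a root password, exit
    sudo machinectl start svc                      # boots it under systemd-nspawn@svc.service
    sudo machinectl enable svc                     # and again on every host boot

The point being: everything inside that container updates straight from the distro archive, the same way the host does.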
Containerisation is all the rage, but in reality it’s simply not needed for all but a tiny number of self-hosters. If a native program option exists, it’s generally just easier and more performant to use that.
Docker and the like shine when you’re frequently deploying and destroying. If you’re doing that with your home server, you’re doing it very wrong.
I like docker, I use it on my server, but I am more and more switching back to native apps. There’s just zero advantage to running most things in docker.
Containers are as performant as a native program because they are native programs.
Nope. If you use Docker containers on Windows or Mac, they’re running inside an abstraction layer (a Linux VM). Docker is the native app, but what’s running inside the containers isn’t. At best they’re nearly identical to native, with a negligible performance hit, but as soon as you use things like port forwarding the performance takes a hit.
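The port-forwarding part is visible on Linux too: published ports go through iptables NAT (and possibly the userland docker-proxy), while host networking skips that layer. A minimal sketch, assuming the stock nginx image and free ports 80/8080 on the host (on Windows/macOS both variants additionally sit inside the Linux VM):

    docker run -d --name web-nat  -p 8080:80 nginx       # traffic is NATed on its way in
    docker run -d --name web-host --network host nginx   # binds port 80 directly on the host stack
    # crude comparison; absolute numbers vary with kernel, NIC and load
    curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8080/
    curl -s -o /dev/null -w '%{time_total}\n' http://localhost/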
I’m a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux and installing from the apt package manager.
As I’m getting more serious, I’m starting to take another look at Docker. Unfortunately, my OS’s package manager only has old, outdated versions of Docker. I may need to reinstall with something like Ubuntu/Debian LTS server - something with more cutting-edge software in the repo. I don’t care much for building from scratch and navigating dependency roulette.
What OS are you using?
Linux Mint 22
I guess it isn’t the most user-friendly process, but you can add the official Docker repo and get an up-to-date version without compiling or anything. You just want to make sure to uninstall any Docker packages that you installed previously before you start.
https://linuxiac.com/how-to-install-docker-on-linux-mint-22/
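For reference, the gist of it (assuming Mint 22, which is built on Ubuntu 24.04 “noble” - Mint reports its own codename, so the Ubuntu base has to be named explicitly in the repo line):

    # remove distro-packaged Docker bits first, if any are installed
    sudo apt remove docker.io docker-compose docker-doc podman-docker
    # add Docker's signing key and repository
    sudo apt update && sudo apt install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable" | sudo tee /etc/apt/sources.list.d/docker.list
    # install the current packages
    sudo apt update && sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

After that, docker --version should show a current release.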
They can, but - if their current setup meets their needs - why? There ain’t nothing wrong with having a few simple spare laptops, each an isolated environment for a few simple home server tasks.
Don’t get me wrong - I too advocate for docker, particularly on new builds, or as a relatively turnkey solution to get started for novice friends, but the best setup is the one that works, and they sound like they got theirs where they want it.
…because that isn’t what they said. They said that they are getting more serious and now looking at Docker, but the outdated version in the Mint repo is preventing them from exploring that any further. So I offered a method that I know works without any of the “dependency roulette” that they were concerned about, while also giving a disclaimer that it isn’t exactly noob-friendly. 🤷♂️
The fact that I bought all my machines used (and mostly on sale), and that not one of them is general-purpose; id est, I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, the machine acting as my file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs on one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.
In my own experience, certain things should always be on their own dedicated machines.
My primary router/firewall is on bare metal for this very reason.
I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.
I could quite easily run OpnSense in a VM, and I do that too. I run proxmox and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work.)
And tbh, that only exists because I did have a router die, and installed OpnSense into my proxmox server temporarily while awaiting new-to-me equipment.
I didn’t see a point in removing it. So it’s there, just not automatically started.
Same here. In particular, I like small, cheap hardware to act as appliances, and I have several Raspberry Pis.
My example is Home Assistant. Deploying it on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers under the hood, but I don’t have to deal with that. It also needs to be always available, so I use efficient, right-sized hardware, and it keeps working regardless of whether I’m futzing with my “lab”.
My example is Home Assistant. Deploying it on its own hardware means an officially supported management layer, which makes my life easier.
If you’re talking about backups and updates for add-ons and core, that works on VMs as well.
For my use case, I’m continually fiddling with my VM config. That’s my playground, not just the services hosted there. I want Home Assistant to always be available, so it can’t live there.
I suppose I could have a “production” VM server that I keep stable, separate from my “dev” VM server, but that would be more effort. Maybe it’s simply that I don’t have many services I want to treat as production, so the physical hardware is the cheapest and easiest option.