Curious to hear about the experiences of those who are sticking to bare metal. I’d like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

  • fubarx@lemmy.world · 30 days ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean reinstalls, all the way down to the bootloader.

    The only constant is change.

  • OnfireNFS@lemmy.world · 28 days ago

    This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

    It kinda stuck with me, and since then I’ve reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It’s also really convenient to have a web interface to manage the machine.
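
    For anyone curious, the day-to-day win is a couple of commands on the Proxmox host (a sketch; the VM ID 100 and the snapshot/storage names are placeholders):

        # Snapshot before a risky change; roll back if it goes wrong
        qm snapshot 100 pre-upgrade
        qm rollback 100 pre-upgrade

        # Full backup of the VM to a configured storage
        vzdump 100 --storage local --mode snapshot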

    Probably doesn’t work for everyone but it works for me

  • erock@lemmy.ml · 27 days ago

    Here’s my homelab journey: https://bower.sh/homelab

    Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support being split between guests. At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back on Arch, running everything with systemd and Quadlet.
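
    For reference, a Quadlet service is just a small unit file that systemd (via Podman’s generator) turns into a container service. A minimal rootless sketch, with the image, name, and port as placeholders:

        # ~/.config/containers/systemd/myapp.container
        [Unit]
        Description=My app via Podman Quadlet

        [Container]
        Image=docker.io/library/nginx:stable
        PublishPort=8080:80

        [Install]
        WantedBy=default.target

    Quadlet generates myapp.service from that file; systemctl --user daemon-reload && systemctl --user start myapp.service brings it up.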

  • Magiilaro@feddit.org · 28 days ago

    My servers and NAS were created long before Docker was a thing, and since I run them on a rolling-release distribution there has never been a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine for the next 10+ years too.

    Well, I am planning, when I find the time to research a good successor, to replace the aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS. Maybe I will then set everything up clean and migrate the services to Docker/Podman/whatever is fancy by then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short…

  • enumerator4829@sh.itjust.works · 30 days ago

    My NAS will stay on bare metal forever. Added complication there is something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.

    As for services, I hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure. I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, though I do have one or two OCI images running. I will probably migrate to small per-service VMs once I get new hardware up and running.
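
    For the curious, the no-OCI route is short. A rough sketch on a Debian-style host (my setup is NixOS, so this is illustrative), with the machine name “web” as a placeholder:

        # Bootstrap a minimal Debian tree and run it as a container
        sudo apt install systemd-container debootstrap
        sudo debootstrap stable /var/lib/machines/web https://deb.debian.org/debian
        sudo systemd-nspawn -D /var/lib/machines/web   # interactive shell inside
        sudo machinectl start web                      # boot it as a managed service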

    Additionally, I never found a source of container images I feel I can trust long term. When I grab a package from Debian or RHEL, I know it will keep working without any major changes to functionality or config until I upgrade to the next major release. A container? How long will it get updates? How frequently? Will the config format, environment variables, or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

    So, what keeps me on bare metal? Keeping my ZFS pools safe, and keeping away from the OCI ecosystem in general; the grass is far greener inside the normal package repositories.

  • sj_zero@lotide.fbxl.net · 28 days ago

    I’m using Proxmox now with lots of LXC containers. Prior to that, I used bare metal.

    VMs were never really an option for me because the overhead is too high for the low-power machines I use. My entire empire of dirt doesn’t have any fans; it’s all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.

    Stuff like Docker I didn’t like because it never really felt like I was in control of my own system. I was downloading a thing someone else made, and it really wasn’t intended for tinkering. You aren’t supposed to build from source in Docker, as far as I can tell.

    The nice thing about Proxmox’s LXC implementation is that I can hop in and change or fix things as I desire. It’s all very intuitive, and I can still separate things out, run them where I want to, and not have to worry about keeping 15 different services on the same version of whatever common dependencies they require.
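
    That “hop in and fix things” flow is mostly the pct tool on the host (container ID 101 is a placeholder):

        pct enter 101                              # root shell inside the container
        pct exec 101 -- systemctl status nginx     # run a one-off command from the host
        pct snapshot 101 before-tinkering          # snapshot it much like a VM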

    • boonhet@sopuli.xyz · 28 days ago

      Actually, Docker is excellent for building from source. Some projects only ship instructions for building in Docker, because it’s the easiest way to make sure you have tested versions of the tools.
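
      A minimal multi-stage sketch of that pattern (the repo URL and binary name are placeholders, assuming a Go project):

          # Dockerfile: compile from source with a pinned toolchain, ship a slim image
          FROM golang:1.22 AS build
          RUN git clone https://example.com/myapp.git /src
          WORKDIR /src
          RUN go build -o /myapp .

          FROM debian:stable-slim
          COPY --from=build /myapp /usr/local/bin/myapp
          ENTRYPOINT ["/usr/local/bin/myapp"]

      Then docker build -t myapp . is the whole build environment; no toolchain on the host.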

  • Evotech@lemmy.world · 29 days ago

    It’s just another system to maintain, another link in the chain that can fail.

    I run all my services on my personal gaming PC.

  • Billegh@lemmy.world · 29 days ago

    It depends on the service and the desired level of the stack.

    I generally run services directly on things like a Raspberry Pi, because VMs and containers add complexity that isn’t really justified for the task.

    At work, I run services in Docker inside VMs, because the benefits far outweigh the complexity.

  • jet@hackertalks.com · 24 days ago

    KISS

    The more complicated the machine, the more chances for failure.

    Remote management plus bare metal just works: it’s very simple, and you get the maximum out of the hardware.

    Depending on your use case, that can be very important.

  • yessikg@fedia.io · 29 days ago

    It’s so simple that it takes much less time. One day I may move to Podman, but I need to find the time to learn it. I host Jellyfin.
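
    When I get there, rootless Podman can run Jellyfin with a single command (a sketch; the host paths are placeholders):

        podman run -d --name jellyfin \
          -p 8096:8096 \
          -v ~/jellyfin/config:/config:Z \
          -v ~/media:/media:Z \
          docker.io/jellyfin/jellyfin:latest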

  • missfrizzle@discuss.tchncs.de · 30 days ago

    pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

    and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

    until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

    /uj not really but that’d be sick as hell.

  • FreedomAdvocate@lemmy.net.au · 29 days ago

    Containerisation is all the rage, but in reality it’s unnecessary for all but a tiny number of self-hosters. If a native program option exists, it’s generally easier and more performant to use that.

    Docker and the like shine when you’re frequently deploying and destroying. If you’re doing that with your home server, you’re doing it very wrong.

    I like Docker and use it on my server, but I’m switching more and more back to native apps. There’s just zero advantage to running most things in Docker.

  • ZiemekZ@lemmy.world · 28 days ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can run sudo apt install qbittorrent-nox today? I don’t think anything prohibits them from running on the same bare metal; in fact, I think they’d both run as well as they do in Docker (if not better, given the lack of overhead)!

  • SmokeyDope@lemmy.world · 29 days ago

    I’m a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with: a fresh install of Linux, installing everything through the apt package manager.

    As I’m getting more serious, I’m starting to take another look at Docker. Unfortunately, my OS’s package manager only carries old, outdated versions of Docker, so I may need to reinstall with something like an Ubuntu/Debian LTS server release, something with more current software in the repos. I don’t care much for building from scratch and navigating dependency roulette.
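
    Though from what I’ve read, Docker publishes its own apt repository, so a current version is installable without reinstalling the OS. Roughly, per Docker’s documented Debian setup (swap debian for ubuntu as appropriate):

        sudo apt-get update && sudo apt-get install -y ca-certificates curl
        sudo install -m 0755 -d /etc/apt/keyrings
        sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
        echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list
        sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io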

  • medem@lemmy.wtf · 29 days ago

    The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose; that is, I bought each piece of hardware with a (more or less) concrete idea of what its use case would be. For example, the machine acting as my file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for the bigger servers. I develop programs on one machine and surf the internet and watch videos on another. I have no use case for VMs besides the Logical Domains I set up on one of my SPARC hosts.