home.social

#drbd — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #drbd, aggregated by home.social.

  1. Next level in my homelab: a storage cluster with LINBIT DRBD. It should run on the same nodes as Proxmox PVE, and of course be defined in some Ansible scripts 🤣
    Currently this is still running on a test cluster that I set up using OpenTofu on top of the current PVE cluster.
    But there are still some "hiccups".

    #homelab #linbit #drbd #proxmox #pve #ansible #OpenTofu #TerraForm
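
    A two-node DRBD resource for a homelab setup like this might look roughly as follows. This is a minimal sketch only: the resource name, hostnames, backing devices, and addresses are hypothetical placeholders, not taken from the post.

    ```
    # /etc/drbd.d/r0.res -- hypothetical two-node resource definition
    resource r0 {
      device    /dev/drbd0;     # the replicated block device exposed to the host
      disk      /dev/sdb1;      # local backing disk on each node
      meta-disk internal;       # store DRBD metadata on the backing disk itself

      on pve-node1 {
        address 10.0.0.1:7789;  # replication endpoint of node 1
      }
      on pve-node2 {
        address 10.0.0.2:7789;  # replication endpoint of node 2
      }
    }
    ```

    With a definition like this in place, `drbdadm create-md r0` followed by `drbdadm up r0` on both nodes brings the resource online.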

  2. Suppose I have a host with half a dozen #Docker containers. I'd like the host (or rather, the containers) to be highly available.
    What simple solutions are there?
    Kubernetes seems like overkill.
    #Rancher? #K3s? #Portainer? Or the classic route with #Linux-HA and #DRBD? Or something completely different?
  3. @ij well, look at that. Haven't heard "Open-e" in a long time. Used it at university (2011?) for #proxmox. We had our share of problems with it, especially when the config of the internal #DRBD got corrupted. But it ran for quite a long time, until my successor sensibly replaced it with #ceph.

  4. Just remembered the time when 2.6.18 was the #Linux kernel we used for a long time with #Xen and #DRBD. Back then we had both 32-bit and 64-bit in production, and it was rock stable for quite some time.

  5. For a new e-mail cluster that will eventually consist of a number of #IMAP servers, I need some shared storage that all servers can read from and write to at the same time. I have some experience with #DRBD, but I was told that DRBD isn't going to be the solution for what I want.

    It will start small, with only a few hundred mailboxes, but it should be able to scale to many thousands or even hundreds of thousands in the distant future. What I want is a variable number of #Dovecot servers with an HA Director in front of them, so that I can upgrade and reboot individual nodes without users noticing.

    #NFS would, on paper, be ideal, but it gets very slow when working with lots and lots of small files. #ZFS is pretty cool, but it wasn't designed as a cluster filesystem and I'm not sure it can be made to do that reliably. I see #GlusterFS and #GFS2 mentioned in many articles, but I don't have any experience with those. I have a bit of experience with #Ceph, just enough to know that I don't want that.

    What do you think: which system should I go for? And, of course, why? Did I overlook any systems that are worthwhile?
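
    For the Director layer the post describes, a Dovecot 2.x director configuration might look roughly like this. A sketch under stated assumptions only: all IP addresses are hypothetical, and the director ring shown is specific to Dovecot 2.x.

    ```
    # conf.d/10-director.conf -- hypothetical Dovecot 2.x director setup

    # Ring of director processes that keep the user-to-backend mapping consistent
    director_servers = 10.0.0.10 10.0.0.11

    # Backend Dovecot IMAP servers the directors proxy to
    director_mail_servers = 10.0.1.1 10.0.1.2 10.0.1.3

    service director {
      unix_listener login/director {
        mode = 0666
      }
      fifo_listener login/proxy-notify {
        mode = 0666
      }
      inet_listener {
        port = 9090   # director ring communication
      }
    }

    # Login services must be told to route through the director
    service imap-login {
      executable = imap-login director
    }
    ```

    The point of the director is that a given user consistently lands on the same backend, which keeps index files local and avoids exactly the concurrent-write problem the shared-storage question is about.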
  6. heise announcement: storage2day 2021: open-source storage in the data center

    A full day of training on open-source storage running on commodity hardware: that is how the online conference storage2day opens its 2021 program.