home.social

#metallb — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #metallb, aggregated by home.social.

  1. I've been working on the infrastructure for Launchpad023. Everything declaratively configured, because servers should be cattle, not pets.

    I've put it up on Codeberg if anyone wants to take a look.
    codeberg.org/launchpad023/laun

    @launchpad023

    #kluctl #kustomize #metallb #talosLinux #selfhosting #kubernetes #envoy_gateway #externalsecrets #stalwart #JMAIL #haarlem

  6. Had an incredible session with #ClaudeCode yesterday debugging why my #homelab #k8s ingress was having trouble after every restart.
    It went through my #argo/#helm config, used CLI tools like arp, ran #netshoot on specific nodes, and finally diagnosed it as a #metallb stale ARP cache problem.

    Just watching it work through the diagnostics was so educational!
    Using Claude Code (or any LLM agent, for that matter) for infra troubleshooting is underrated!
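    The stale-ARP diagnosis above boils down to checking whether a client's neighbor table still maps the service IP to an old MAC. A minimal sketch of that check, assuming Linux `ip neigh show` output (the addresses here are made up):

```python
import subprocess

def stale_neighbors(neigh_output: str, service_ip: str) -> list[str]:
    """Return neighbor-table lines for service_ip whose state looks suspect.

    neigh_output is the text of `ip neigh show`, whose lines look like:
    192.168.1.240 dev eth0 lladdr aa:bb:cc:dd:ee:ff STALE
    """
    flagged = []
    for line in neigh_output.splitlines():
        fields = line.split()
        if not fields or fields[0] != service_ip:
            continue
        # The neighbor state is the last field on the line.
        if fields[-1] in ("STALE", "FAILED", "INCOMPLETE"):
            flagged.append(line)
    return flagged

def check_host(service_ip: str) -> list[str]:
    """Run `ip neigh show` on this host and flag suspect entries."""
    out = subprocess.run(["ip", "neigh", "show"],
                         capture_output=True, text=True, check=True).stdout
    return stale_neighbors(out, service_ip)
```

    If an entry is stale, `ip neigh flush to <service-ip>` forces re-resolution. MetalLB in L2 mode does send gratuitous ARP when leadership moves, but clients and intermediate devices can still hold old entries.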

  7. Updated #Orked, my collection of scripts to help set up a production-ready #RKE2 #Kubernetes cluster in your #homelab. This update brings general improvements to the scripts, improved documentation, #HAProxy load balancer support for load balancing multiple Master nodes, and upgraded all components including RKE2, #Longhorn, #Nginx Ingress, #Cert-manager, #MetalLB, #Rancher, etc. to their latest versions.

    I still hope someday to support more Kubernetes distributions like #k3s, but haven't gotten around to it. I've also been planning to support more #Linux distros as the base too, instead of only #RockyLinux/#RHEL, but that'll have to wait as well for now. Regardless, I am quite happy with how mature and stable these scripts have turned out to be. If you'd like to set up a cluster of your own, maybe check it out!

    🔗 https://github.com/irfanhakim-as/orked

    🔗 https://github.com/irfanhakim-as/orked/pull/41
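    As a rough illustration of what load balancing multiple RKE2 Master nodes with #HAProxy involves (not taken from the Orked scripts; the addresses are placeholders): RKE2 serves the Kubernetes API on 6443 and its supervisor/registration endpoint on 9345, and both are typically balanced in TCP mode:

```
frontend kube_api
    bind *:6443
    mode tcp
    default_backend kube_api_servers

backend kube_api_servers
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check

frontend rke2_supervisor
    bind *:9345
    mode tcp
    default_backend rke2_supervisor_servers

backend rke2_supervisor_servers
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:9345 check
    server master2 10.0.0.12:9345 check
    server master3 10.0.0.13:9345 check
```

    Agent nodes then register against the HAProxy address instead of a single Master, so a Master can go down without breaking joins or API access.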

  8. Kubernetes: шестимесячный марафон по прокачке dBrain.cloud

    За последние полгода наши инженеры проделали значительную работу по развитию и оптимизации платформы dBrain.cloud , ключевым инструментом которой является Kubernetes. В этой статье мы подробно расскажем об изменениях, произошедших в инфраструктуре, о внедрении новых функций и фишках, которые, мы надеемся, будут интересны DevOps-сообществу.

    habr.com/ru/companies/dbraincl

    #gpu #slicing #s3 #cilium #metallb #ipam #cgroup_v2 #kubernetes #dbrain #devops

  9. #Talos #kubernetes single node cluster up and running with #Calico, #MetalLB, #Traefik and a test #whoami deployment

    In no way scientific, but it feels much more responsive than #microk8s

    Next step: rebuild a clean node and then migrate some services
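    For anyone reproducing the test deployment, a minimal #whoami manifest along these lines works on such a setup (names are illustrative; MetalLB hands the Service its external IP):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami  # tiny HTTP server that echoes request details
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: LoadBalancer  # MetalLB assigns an IP from its address pool
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
```

    `kubectl get svc whoami` should then show the MetalLB-assigned EXTERNAL-IP, and curling it returns the pod's request details.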

  14. Those who've been reading my toots might have picked up on the fact that I'm building a #kubernetes cluster from scratch (yes, I like pain). After figuring out #cri_o #calico #certmanager #metallb #traefik and #cloudnativepg I finally deployed my first actual application: #nextcloud ! Wueeh! Extremely stoked! Now I need to figure out how I rope in my ZFS box for persistence, and then I'm ready for a deployment in testing! #k8s #selfhosting
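    One common way to rope an external ZFS box into a cluster is to export a dataset over NFS and point a static PersistentVolume at it. A sketch under that assumption (the server address and dataset path are made up):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zfs-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.20           # the ZFS box, exporting the dataset via NFS
    path: /tank/k8s/nextcloud   # hypothetical dataset mount point
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # bind to the static PV above, not a dynamic class
  volumeName: zfs-nfs-pv
  resources:
    requests:
      storage: 100Gi
```

    Dedicated CSI drivers for ZFS exist as well, but a static NFS PV is the simplest way to get data flowing to an existing box.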

  15. I've just merged a huge PR to my #Orked (O-tomated RKE Distribution - GREAT NAME I KNOW) that makes it easier than ever for anyone to set up a production-ready #RKE2 #Kubernetes cluster in their #homelab.

    With this collection of scripts, all you need to do is provision the required nodes, including a login/management node, and run the scripts right from the login node to configure all of the other nodes that make up the cluster. This setup includes:

    - Configuring the Login node with any required or essential dependencies (such as #Helm, #Docker, #k9s, #kubens, #kubectx, etc.)

    - Setting up passwordless #SSH access from the Login node to the rest of the Kubernetes nodes

    - Updating the hosts file for strictly necessary name resolution on the Login node and between the Kubernetes nodes

    - Necessary, best-practice configuration for all of the Kubernetes nodes, including networking configuration, disabling unnecessary services, disabling swap, loading required modules, etc.

    - Installation and configuration of RKE2 on all the Kubernetes nodes and joining them together as a cluster

    - Installation and configuration of #Longhorn storage, including formatting/configuring the virtual disks on the Worker nodes

    - Deployment and configuration of #MetalLB as the cluster's load balancer

    - Deployment and configuration of #Ingress #NGINX as the ingress controller and reverse proxy for the cluster - this helps manage external access to the services in the cluster

    - Setup and configuration of #cert-manager to obtain and renew #LetsEncrypt certs automatically - supports both #DNS and HTTP validation with #Cloudflare

    - Installation and configuration of #csi-driver-smb, which adds support for integrating your external SMB storage with the Kubernetes cluster

    Besides these, there are also some other helper scripts to make certain related tasks easy, such as a script to set a unique static IP address and hostname, and another to toggle #SELinux enforcement on or off - should you need to turn it off (temporarily).

    If you already have an existing RKE2 cluster, there's a step-by-step guide on how you can use Orked to easily configure and join additional nodes to your cluster if you're planning on expanding.

    Orked currently expects and supports #RockyLinux 8+ (it should also support other #RHEL-family distros such as #AlmaLinux), but I am planning to improve the project over time by adding more #Linux distros, #IPv6 support, and possibly even #K3s for a more lightweight #RaspberryPi cluster, for example.

    I've used this exact setup to deploy and manage vital services for hundreds of unique clients/organisations, so I've become obsessed with sharing it with everyone and making it easier to get started. If this is something that interests you, feel free to check it out!

    If you're wondering what to deploy on a Kubernetes cluster - feel free to also check out my #mika helm chart repo 🥳

    🔗 https://github.com/irfanhakim-as/orked

    🔗 https://github.com/irfanhakim-as/charts
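    As an idea of what the cert-manager step produces, a ClusterIssuer for Let's Encrypt with Cloudflare DNS-01 validation typically looks like this (the email and Secret names are placeholders; this is not taken from the Orked scripts):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com  # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # ACME account key storage
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token  # Secret holding a scoped CF token
              key: api-token
```

    Certificates then reference `letsencrypt-prod` via an Ingress annotation or a Certificate resource, and cert-manager handles issuance and renewal.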

  16. There, now the services are on #MetalLB IPs, and the #ReverseProxy forwards rukii.net to those. Works perfectly, #ZeroDowntime.

    Now everything else is #HighAvailability, except the persistent volumes. For #OpenEBS I'll need a third cluster node, which is in the mail... And of course the reverse proxy and the internet connection aren't redundant yet; in principle I could set up a second internet connection, e.g. over 4G, but for now the fiber and the proxy are reliable enough.
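    For reference, the reverse-proxy half of such a setup can be as small as one nginx server block forwarding the public hostname to the MetalLB-assigned IP (the IP and certificate paths below are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name rukii.net;

    ssl_certificate     /etc/ssl/rukii.net/fullchain.pem;
    ssl_certificate_key /etc/ssl/rukii.net/privkey.pem;

    location / {
        # MetalLB-assigned service IP inside the cluster network (placeholder)
        proxy_pass http://192.168.1.240;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

    Because MetalLB keeps the service IP stable across pod restarts and node failovers, the proxy configuration never has to change, which is what makes the zero-downtime cutover possible.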