home.social

#kubeadm — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #kubeadm, aggregated by home.social.

  1. Kubernetes 1.29 → 1.33 in 30 minutes: a real cluster upgrade with AI under engineer supervision

    We tested whether an AI could take part in a real, high-risk infrastructure operation: upgrading a Kubernetes cluster across several minor versions at once. This isn't about "generating some YAML" or "writing a Helm chart" — it's a full-fledged operation:

    habr.com/ru/articles/981874/

    #kubernetes #devops #artificial_intelligence #ai #cluster_upgrade #kubeadm #infrastructure #sre #automation

  2. Kubernetes clusters, "simply" explained

    Why you should know about Kubernetes clusters: Imagine you are in the middle of your sysadmin training and have Linux basics like commands, file systems, and processes down, but containers and orchestration still sound foreign. Kubernetes clusters are the next step: they help you start, monitor, and repair many applications automatically across multiple Linux servers, without having to do everything manually over SSH. That saves time and prevents outages, […]

    andreas-moor.de/kubernetes-clu

  3. We know #TalosLinux is 🤏 but is it really the smallest?

    We ran the tests. We’ve got the data. Check it out if you like numbers.

    Watch → youtu.be/atPvnJMGdfs
    Read → siderolabs.com/blog/which-kube

    #Kubeadm #K3s #K0s #Kairos #RKE2 #Kubernetes #K8s

  4. so… 6h of "free time" after an 8h shift was spent on a #kubeadm deploy + #calico + #vxlan. did it work? yes. would i put it in production? hell no.

    no idea how to handle this fight over iptables between kube-proxy (which should be disabled, but the cluster and calico are just dead without it) and the calico components :/
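
    One way out of that iptables tug-of-war: with Calico's default iptables dataplane, kube-proxy genuinely has to stay enabled, because Calico does not program service NAT itself. Dropping kube-proxy means switching to Calico's eBPF dataplane. A rough sketch with the Tigera operator (the API server host/port are placeholders for your control-plane endpoint):

    ```shell
    # Tell Calico how to reach the API server directly, since kube-proxy
    # will no longer be there to proxy the "kubernetes" service.
    kubectl apply -f - <<EOF
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kubernetes-services-endpoint
      namespace: tigera-operator
    data:
      KUBERNETES_SERVICE_HOST: "192.168.1.10"   # placeholder
      KUBERNETES_SERVICE_PORT: "6443"
    EOF

    # Switch the operator-managed installation to the eBPF dataplane.
    kubectl patch installation.operator.tigera.io default --type merge \
      -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'

    # Only then park kube-proxy, e.g. with a node selector no node matches.
    kubectl patch ds -n kube-system kube-proxy -p \
      '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico":"true"}}}}}'
    ```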

  5. Seems like my Proxmox kubeadm cloud-init Kubernetes setup, with k3s's kine as an etcd replacement, is working quite nicely. Just a few more cleanups and this will be a nice 2.0 release. A mirror of my local working version is available as a dev branch here: github.com/deB4SH/terraform-pr

    tags: #proxmox #kubeadm #kubernetes
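
    kine fronts an SQL database with the etcd v3 API, so kubeadm can treat it as an "external" etcd. A minimal sketch of that wiring, assuming kine is already listening locally (the endpoint below is a placeholder, not taken from the repo above):

    ```shell
    # Point kubeadm at the kine endpoint as if it were an external etcd.
    cat > kubeadm-config.yaml <<EOF
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    etcd:
      external:
        endpoints:
          - http://127.0.0.1:2379   # placeholder: wherever kine listens
    EOF

    sudo kubeadm init --config kubeadm-config.yaml
    ```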

  6. @gyptazy I’m thinking to abandon #k3s and wanted to try creating a cluster with #kubeadm… now I’m thinking maybe I could try #talos… 🤔

  7. What's the recommended blood alcohol level for deploying #K3s, #Kubeadm, or similar?

    #Kubernetes

  8. Just released a #terraform module to create #kubernetes clusters with #kubeadm. It's the first iteration. No highly available control planes. One control plane - multiple worker nodes - automatic joining of nodes. 😶‍🌫️

    repository for the module: github.com/deB4SH/terraform-pr
    small blog post: deb4sh.github.io/2024/07/19/20
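
    Under the hood, the "one control plane, auto-joining workers" flow such a module automates boils down to standard kubeadm commands, roughly:

    ```shell
    # On the control-plane node:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Print a join command workers can run (creates a fresh bootstrap token):
    kubeadm token create --print-join-command

    # On each worker node, run the printed command, e.g.:
    # sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    #   --discovery-token-ca-cert-hash sha256:<hash>
    ```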

  9. @heatsink I think they* also welcome PRs to improve all that. If you do Slack, there is a moderately active #kubeadm channel on the K8S Slack.

    No pressure, but if you're comfortable authoring a PR I'm sure it'll be super helpful!

    *"They" being SIG-Node and the kubeadm maintainers :)

  10. Spent 20 minutes hunting down a Kubernetes issue. #MetalLB wasn't advertising any L2 services, causing my Unifi devices to disconnect from the panel. Turns out it's because of a node label added by #kubeadm by default, which MetalLB now takes note of: github.com/metallb/metallb/iss

    For people who run master-only setups, you'll need to take note of this. I should really move away from MetalLB now that I have a better understanding of HAProxy; I really don't need it.
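
    For reference, the label in question is node.kubernetes.io/exclude-from-external-load-balancers, which kubeadm applies to control-plane nodes and which MetalLB's L2 mode now honors. On a control-plane-only cluster it can simply be removed (a sketch; a trailing dash deletes a label):

    ```shell
    # Show nodes carrying the exclusion label:
    kubectl get nodes -l node.kubernetes.io/exclude-from-external-load-balancers

    # Remove it from all nodes; the trailing '-' deletes the label:
    kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-
    ```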

  11. @Pacool @nerdeiro

    ...but less great for us, users, since we are facing choices that we don't understand when we're just trying to get started 🫤

    Just in case: don't hesitate to ask for help in the Kubernetes slack (#kubeadm channel) or even here. Many folks have suffered similar pains and will be glad to help!

  12. If you just felt a bump in the road, that was the @midgaard ingress-nginx being upgraded. Normally you wouldn't have noticed, but the ingress labels had unfortunately fallen off the ingress nodes, meaning #kubernetes had no idea where to run nginx. How did that happen? Somehow a #kubeadm upgrade last week stripped off the labels, and I forgot to stick them back on. Sorry about that. I think total downtime was a couple of minutes - five at most.
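
    If labels go missing after an upgrade like this, re-pinning them is a one-liner; "role=ingress" and "worker-1" below are stand-ins for whatever selector and node names the ingress deployment actually uses:

    ```shell
    # Check which nodes still carry the ingress label:
    kubectl get nodes -l role=ingress

    # Re-apply it to a node that lost it:
    kubectl label node worker-1 role=ingress --overwrite
    ```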

  13. A new "Adventures in Kubeland" episode just landed on Youtube.

    I would love some feedback on the format and how to improve it.

    #kubernetes #kubeadm #devops

    youtu.be/4-teFTKcp2A

  14. Ok, fourth server setup and adopted into my #proxmox cluster. I used #kubeadm to install #kubernetes on four VMs and I have a k8s cluster running. Now to get #nginx ingress working and install #helm. It's all coming together. Is anyone else running their #mastodon instance on k8s via a helm chart?
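
    For the ingress step, a common route is the official ingress-nginx Helm chart; a sketch of the usual install:

    ```shell
    # Add the ingress-nginx chart repo and install into its own namespace:
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace
    ```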