home.social

#rancher — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #rancher, aggregated by home.social.

  1. Let's build our own #cloud #onprem with #suse #harvester and #rancher and #longhorn ... Much to learn I still have ... This is just the beginning!

  6. #Debezium chores today. This logical replication tool pumps data, driven by events, from two databases into a third. All the databases live inside #Rancher as 1 master and 2 slaves in #pods. #Kubernetes takes them down and brings them back up at its own discretion. The replication slot Debezium uses to remember where it left off gets shuffled around like a shell game, and Debezium keeps looking under the wrong cup.
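    The usual way to stop that shell game is to pin the connector to a fixed logical replication slot and point it at a stable Service name that always resolves to the primary, rather than at an individual pod. A minimal sketch of a Debezium PostgreSQL connector registration, with the connector name, hostnames, credentials, and database names all being hypothetical:

    ```json
    {
      "name": "inventory-connector",
      "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres-primary.db.svc.cluster.local",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "change-me",
        "database.dbname": "inventory",
        "topic.prefix": "homelab",
        "slot.name": "debezium_inventory",
        "publication.name": "debezium_pub",
        "slot.drop.on.stop": "false"
      }
    }
    ```

    With `slot.name` fixed and `slot.drop.on.stop` disabled, restarts resume from the same slot instead of hunting under a different cup.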

  10. Very excited to have gotten through a pretty big lift + refactor in the #homelab over the last few weeks. For a while, I’ve had one foot in my old, disorganized, scattered Docker services, and one foot in a new, fully automated, #gitops pseudo-prod for home.

    Well, no more. I’ve cleaned up all of the old shit. Centralized config, linted, backed up, refactored, and smoke tested the core platform. It completely spins up and down, with one Playbook, data fully backed up, etc.

    Hardware:
    - 3x #amd mini PCs w/ 32GB RAM
    - #synology DS1813+
    - 1x Intel mini PC for #homeassistant OS
    - TP-Link #Omada gateway, PoE switch, and APs

    My core stack is, going from hardware to workload lifecycle and access:
    - #proxmox
    - #fedora #fedora_coreos VMs
    - #k3s #Rancher #kubernetes
    - #nfs subdir provisioner
    - #ansible
    - #argocd
    - #Tailscale
    - #1password Kubernetes Operator
    - #velero

    I just did a DR drill with my kids’ #minecraft servers, and it worked great, and I’m super pumped with #selfhosting #paperlessngx!

    Those are the first family-facing apps that I’ve deployed, and I’ve got a laundry list of more services on the way.

    It was far, far more work than it probably needed to be…but I also can fully trust it with our most important personal docs, which was priority one.
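    A GitOps setup like the one described above typically comes down to Argo CD Application manifests pointing at a Git repo; a minimal sketch, with the app name, repo URL, and paths being hypothetical:

    ```yaml
    # Argo CD Application: syncs one app's manifests from Git to the cluster
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: paperless-ngx            # hypothetical app name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://example.com/homelab.git   # hypothetical repo
        targetRevision: main
        path: apps/paperless-ngx
      destination:
        server: https://kubernetes.default.svc
        namespace: paperless
      syncPolicy:
        automated:
          prune: true        # delete resources removed from Git
          selfHeal: true     # revert manual drift
    ```

    One such Application per service is what makes the "spins up and down with one Playbook" workflow reproducible.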

  11. I finished my Hack Week 25 project "SUSE Virtualization (Harvester): VM Import UI flow"!

    Currently, migrating VMs requires writing Kubernetes manifests.
    So I thought, wouldn't it be cool to do this through the Harvester UI directly?
    Who knows, maybe you'll see this feature in a future release ;)

    Read more on my blog: dominik.wombacher.cc/posts/hac

    #SUSE #HackWeek #Harvester #VMImport #UI #OpenSource #VueJS #Rancher #RancherExtensions #Frontend

  13. Updated #Orked, my collection of scripts to help set up a production-ready #RKE2 #Kubernetes cluster in your #homelab. This update brings general improvements to the scripts, improved documentation, #HAProxy load balancer support for load balancing multiple Master nodes, and upgraded all components including RKE2, #Longhorn, #Nginx Ingress, #Cert-manager, #MetalLB, #Rancher, etc. to their latest versions.

    I still hope someday to support more Kubernetes distributions like #k3s, but haven't gotten around to it. I've also been planning to support more #Linux distros as the base too, instead of only #RockyLinux/#RHEL, but that'll have to wait as well for now. Regardless, I am quite happy with how mature and stable these scripts have turned out to be. If you'd like to set up a cluster of your own, maybe check it out!

    🔗 https://github.com/irfanhakim-as/orked

    🔗 https://github.com/irfanhakim-as/orked/pull/41
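    For reference, load balancing multiple RKE2 Master nodes with #HAProxy usually means TCP-proxying two ports: the Kubernetes API server (6443) and the RKE2 supervisor/registration port (9345). A minimal sketch, with the node IPs being hypothetical:

    ```
    # haproxy.cfg — TCP load balancing for RKE2 server nodes (IPs are hypothetical)
    defaults
        mode tcp
        timeout connect 5s
        timeout client  1h
        timeout server  1h

    frontend kube_api
        bind *:6443
        default_backend rke2_api

    backend rke2_api
        balance roundrobin
        option tcp-check
        server master1 192.168.1.11:6443 check
        server master2 192.168.1.12:6443 check
        server master3 192.168.1.13:6443 check

    frontend rke2_supervisor
        bind *:9345
        default_backend rke2_supervisor_be

    backend rke2_supervisor_be
        balance roundrobin
        server master1 192.168.1.11:9345 check
        server master2 192.168.1.12:9345 check
        server master3 192.168.1.13:9345 check
    ```

    Agent nodes then register against the load balancer address instead of a single Master.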

  14. Hmm my services are running fine as far as I can tell, but my #Rancher/#RKE2 #Kubernetes cluster is acting up - possibly #etcd related?

    The biggest tell is that the control plane/API server isn't the most responsive, and some essential pods are failing/restarting, including
    #cert-manager, cloud-controller-manager, csi-smb-controller, kube-apiserver, kube-scheduler, rke2-snapshot-controller, csi-provisioner + -resizer, -snapshotter, yadda yadda.

    Not sure what could be causing it just yet.

  15. Suppose I have a host with half a dozen #Docker containers. I'd like the host (well, really the containers) to be highly available.
    What simple solutions are there?
    Kubernetes would seemingly be overkill.
    #Rancher? #K3s? #Portainer? Or the classic route with #Linux-HA and #DRBD? Or something else entirely?

  16. #Rancher/#RKE2 #Kubernetes cluster question - I don't need Rancher, but in the past, with my RKE2 clusters, I'd normally deploy Rancher on a single VM using #Docker just for the sake of having some sort of UI for my cluster(s) if need be - with this setup, I'm relying on importing the downstream (RKE2) cluster(s) into said Rancher deployment. That worked well.

    This time round though, I tried deploying Rancher on the cluster itself, instead of on an external VM, using #Helm. Rancher's pretty beefy and heavy to deploy even with a single replica, and from my limited testing I found that it's easier to deploy when your cluster is pretty new and doesn't have many resources running just yet.

    What I'm curious about, though, are these errors - my cluster's fine, and I'm not seeing anything wrong with it, but ever since deploying it a few days ago, I've constantly been seeing these Liveness/Readiness probe failed errors on all 3 of my Master nodes (periodically most of the time, not all at once) - the same error also seems to include etcd failed: reason withheld. What does it mean, and how do I "address" it?
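    For context on the in-cluster Helm deployment mentioned above: a single-replica Rancher install comes down to a small values override against the rancher chart. A minimal sketch; the hostname and password are hypothetical placeholders:

    ```yaml
    # values.yaml for the rancher Helm chart (hostname/password are hypothetical)
    hostname: rancher.homelab.example
    replicas: 1                  # default is 3; one replica is lighter on a small cluster
    bootstrapPassword: change-me
    ```

    As for reason withheld: the API server's health endpoints deliberately hide failure details from unauthenticated callers, so an authorized request (e.g. via kubectl against /healthz or /readyz with verbose output) is typically needed to see which etcd check is actually failing.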

  17. How do you update #Longhorn's Node Drain Policy on a #Kubernetes/#RKE2 cluster? I think you could do it on the UI, but in this test cluster I'm experimenting with, I did not install #Rancher or "attach" this cluster to one so I don't have access to the UI.

    I'm trying to update said policy to allow-if-replica-is-stopped, and see if that would solve the errors I'm getting when draining nodes in my cluster: Cannot evict pod as it would violate the pod's disruption budget.

    Update: nvm got it
    https://longhorn.io/docs/1.7.2/advanced-resources/deploy/customizing-default-settings/#using-kubectl

    Didn't solve my error though.
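    For the record, the linked kubectl approach amounts to overriding the node-drain-policy setting. A hedged sketch using Longhorn's Setting resource, assuming a Longhorn v1.7-era API version and the default longhorn-system namespace:

    ```yaml
    # Override Longhorn's node drain policy via its Setting resource
    apiVersion: longhorn.io/v1beta2
    kind: Setting
    metadata:
      name: node-drain-policy
      namespace: longhorn-system
    value: allow-if-replica-is-stopped
    ```

    Note that, as the post says, relaxing the drain policy doesn't help when the eviction is being blocked by a PodDisruptionBudget rather than by Longhorn itself.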

  18. Performance testing Docker clients for Mac

    I recently published the article "OrbStack: Why I forgot about Docker Desktop", which sparked a lively discussion in the comments. Most of the questions revolved around the performance of the various Docker-like solutions. My arguments, based primarily on personal experience, turned out not to be convincing enough. To get an objective picture and give the community real data, I decided to build a comprehensive benchmark comparing the different solutions. While I was developing the tests, commenters suggested several interesting ideas that helped expand the list of engines under test.

    habr.com/ru/articles/854832/

    #docker #docker_desktop #orbstack #podman #rancher #colima

  19. Orchestrators in Russian DevOps in 2024: growing use of Managed Kubernetes and domestic distributions

    In late August 2024, a detailed report on the state of DevOps in Russia was released. In this article we look at how orchestration technologies are faring in Russian DevOps: which of them lead, which Kubernetes distributions are currently in use, and which criteria drive the choice of orchestrator.

    habr.com/ru/companies/flant/ar

    #devops #kubernetes #k8s #оркестрация #контейнеризация #deckhouse #исследование_рынка #openshift #rancher

  20. #SUSE wants a piece of the #AI cake, too
    The company's recent fork of #CentOS has attracted a significant number of users, and its portfolio, including #Kubernetes service #Rancher and security service #Neuvector, positions SUSE well in a market where enterprises are looking to consolidate platforms. Despite ownership changes over the years, SUSE remains committed to expanding its presence in the #US market, having long been a household name in IT circles in #Europe.
    techcrunch.com/2024/06/18/suse

  21. #SUSECON24 took place in Berlin this week, so today there's a two-part special on #FocusOnLinux.

    In the first part, a large panel discusses the numerous announcements, including #SLES 15 SP6 and 16, SUSE Linux Micro 6.0, and SUSE Manager 5.0. There's also news from the #Harvester, #Rancher, and #NeuVector space that wants to be discussed.

    🎧 focusonlinux.podigee.io/107-su

    #Podcast #SUSE #Linux #SUSECON

  26. Okay, I think I found a decent solution for my future 2-node #RaspberryPi #HomeLab cluster:

    - install #K3s using @technotim playbooks: technotim.live/posts/k3s-etcd-
    - monitor the cluster using @carlosedp stack: github.com/carlosedp/cluster-m
    - webUI using the standard #Kubernetes dashboard: kubernetes.io/docs/tasks/acces
    - CLI using #K9s: k9scli.io/

    Net result: 20% memory usage vs 50% with #Rancher.

    Yep, I think that should make it a nice start. Let's see how it goes. 😉

  27. @delfuego @TomSellers @electronjs After using the above command and pulling down any available updates, I still have the following vulnerable #Electron-based apps on my #Mac:

    #AdGuard for #Safari (Electron v18.3.15)
    #HTTPie (21.2.0)
    #Jabra Direct (16.2.2)
    #Keybase (22.1.0)
    #Logitech Logi Options+ (23.1.1)
    #Logseq (24.6.3)
    #Postman (18.3.5) @getpostman
    #Rancher Desktop (20.3.8)
    #Microsoft Teams (19.1.8)
    #WhatsApp (13.6.9)

  28. I think I've finally tamed this #OpenSearch setup on this #Rancher #RKE2 cluster. Today's adventure was schema conflicts: some pods are labeled with "app" while others are labeled with app.kubernetes.io, which causes a problem for inputs because, to OpenSearch, it looks like there's a string where an object should be. Flattening hashes on the #fluentd output wasn't quite enough to cut it, but the dedot filter brought it in the rest of the way there.
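    The dedot fix mentioned above is typically the fluent-plugin-dedot_filter plugin, which rewrites dots in field names so app.kubernetes.io/name no longer expands into a nested object that collides with the plain app label. A minimal sketch; the kubernetes.** match pattern is a hypothetical tag for container logs:

    ```
    # fluentd filter: replace dots in record keys so "app.kubernetes.io/..."
    # labels no longer map to a nested "app" object in OpenSearch
    <filter kubernetes.**>
      @type dedot
      de_dot true
      de_dot_separator _
    </filter>
    ```

    After this, both label styles land as flat string fields and the mapping conflict goes away.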

  29. Just closed 176 tabs in #Firefox. The upside is I have #Rancher and #OpenSearch playing well together at the moment via some #fluentd magic. 🥳 :apartyblobcat: 🎉