#xcp-ng — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #xcp-ng, aggregated by home.social.
-
From Korben: "Vates VMS - The French open-source alternative to VMware that's taking off"
https://korben.info/vates-alternative-vmware-open-source.html
Yes, it's a sponsored article, but as the person who has been in charge of XCP-ng, the hypervisor in this stack, for 7 years, I can vouch for the content. And it's "real" free software.
If it helps spread the word, and spares me that embarrassing moment when someone close to me asks "do you know #proxmox" without knowing that I work on XCP-ng, or that it even exists...
-
At @bawuenet we're in the process of upgrading our #Xen #xcpng hypervisors to #uefi boot. Of course, we automate things and want to be able to reinstall our hypervisors from network boot. Our EFI netboot system is based on #ipxe which means it can only load EFI binaries. But xcp-ng ships the Xen kernel as a multiboot2 based image.
As probably one of the weirdest ways to solve this problem, I've decided to chainload #grub2 from iPXE and learned a bit! 🧵
-
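The chainloading described in the iPXE post above can be sketched in two fragments. Everything here is illustrative: the server name, file paths, and kernel arguments are assumptions, not taken from the post. iPXE fetches a standalone GRUB EFI binary, and the config embedded in that binary uses GRUB's multiboot2/module2 commands to load Xen and the dom0 kernel:

```
#!ipxe
# iPXE can only execute EFI binaries, so fetch a standalone GRUB image
# (URL and path are hypothetical)
chain http://boot.example.org/xcpng/grubx64.efi
```

```
# grub.cfg embedded in the standalone GRUB image (all values illustrative)
menuentry "XCP-ng network install" {
    # GRUB understands multiboot2, which iPXE does not
    multiboot2 /xcpng/xen.gz dom0_mem=2048M,max:2048M console=vga
    module2    /xcpng/vmlinuz console=tty0
    module2    /xcpng/install.img
}
```

A standalone image like this can be produced with grub-mkstandalone, so the config travels inside the binary and iPXE only ever has to load a plain EFI executable.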
Tomorrow I'll have to put on my XCP-NG shirt to come back down from #AWS ESC and the #BSI thread, and make a statement for a great European virtualization solution from @vates. 😌
Because we in Europe can do IT too, and all that …
#vates #xcpng #virtualization #opensource #awsesc #digitalsovereignty #xen
-
@dzwiedziu You haven't mentioned #xcpng with #xenorchestra. It's pretty good. It's what I use.
-
Today was the last move. It is all in one rack. It’s all halfway decent. I had a major downtime though because I had a stupid #xcpng goof that required upgrading a server that had live workloads on it. I couldn’t migrate them off. That sucked.
But it looks ok now. And it is so much more compact. I am using some air filter fabric over the front door to filter the air. And I have the front facing inside and the back blowing out toward the outside. I figure the air is cleaner on the house side of the garage.
-
I'm going to admit that I am doing something immature in my #homelab and I'm looking for opinions. I've got multiple #XCPng hosts, all using local storage. I have no NFS or iSCSI storage. That's kinda silly. Shared storage is super useful and I'm literally not using it.
Unless I go to some serious effort to make a high-performance SAN, I expect network storage performance to be so-so for VM storage, but maybe I'm too pessimistic. I currently only have copper gigabit in the rack. No fiber, no 2.5G copper or anything like that. I'm not sure if that's going to be viable for NFS or iSCSI.
I could dedicate a host to running TrueNAS Core with a bunch of storage. But what has always bugged me about this is that my storage host becomes a single point of failure for all the compute nodes. #TrueNAS is super reliable but everything has to reboot once in a while, and these stupid enterprise-grade servers take anywhere from 4-8 minutes to boot. If I had a single storage node, and I needed to reboot it for an OS upgrade, everything would hang for a while. That's no good. Not updating the OS on the storage system is also not good.
So what am I supposed to be doing for shared storage on a #Xen cluster? How do I avoid a storage host becoming a single point of failure? How do you update and reboot a storage node, without disrupting everything that depends on it?
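For reference, if a storage host does get deployed, attaching its NFS export to an XCP-ng pool as a shared SR is a single `xe` invocation. This is a sketch: the server name and export path below are hypothetical placeholders, not details from the post.

```
# Attach an NFS export as a shared storage repository on an XCP-ng pool
# (nas.example.lan and /export/vms are placeholders)
xe sr-create \
  name-label="NFS VM storage" \
  shared=true \
  type=nfs \
  content-type=user \
  device-config:server=nas.example.lan \
  device-config:serverpath=/export/vms
```

With a shared SR in place, VMs can be live-migrated between hosts without copying disks, which is exactly what local-only storage rules out.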
-
I upgraded #xcpng and I just noticed that it has some IPMI integration. It's able to tell a few things about power and temperature and display it at the bottom. It just discovered all these attributes. I didn't do anything to turn that on.
-
Finally organized my thoughts into a blog post around the state of hypervisors in my home lab, with some new hardware. I also called out a few nice features of XCP-NG and Proxmox that I discovered while I was evaluating both of them.
https://medium.com/@a.j.longchamps/home-lab-hardware-refresh-september-2025-edition-3e50767b63e1
#homelab #hardware #hypervisors #ansible #automation #debian #kubernetes #learning #tinkering #proxmox #xcpng #minisforum
-
I've been on a little bit of a writing spree. I decided to update my guide on deploying K8s on XCP-NG/XenOrchestra with Terraform and Ansible.
I wrote one a little while back, but I did it quite quickly and didn't follow a lot of best practices. I have since re-written a bunch of it and you can find it here: https://godfrey.online/posts/xen_k8s_ansible_terraform/
-