home.social

#esxi — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #esxi, aggregated by home.social.

  1. Storage systems, ESXi and LUN IDs: theory, practice and a bit of pain

    Hi, Habr! This is the VMware competence centre at "Jet Infosystems". In this article we'll cover the finer points of presenting LUNs (logical unit numbers) from a storage array (SAN) to ESXi hosts managed through vCenter Server. Over the past year we repeatedly ran into problems at customer sites that could have been avoided with correct configuration. We'll explain exactly how below, without skipping the theory.

    habr.com/ru/companies/jetinfos

    #vmware #esxi #lun #хост #логи #настройки

  2. I really love Immich, bringing back old memories day by day

    This is the first 'server' from circa 2015, at first an ESXi host for various tinkering, nowadays in a better looking box running OpenMediaVault and the data backbone of the home infrastructure

    What a journey

    #homelab #selfhosted #immich #memories #vmware #esxi #omv #openmediavault #linux #server #nas #hardwaregore

  3. For running a small home lab, there is almost always an old PC that can handle some version of VMware ESXi, especially the 6.x releases. Hardware compatibility is limited, but many inexpensive network cards can be made to work by adding the right VIB packages.

    A useful reference for community-created and legacy drivers is here:
    vibsdepot.v-front.de/wiki/inde

    • To list installed VIBs:

      • esxcli software vib list
    • To install a VIB:

      • If the VIB file is in .zip (offline depot) format, use the following command:
      • esxcli software vib install -d <full path to the zip file>
      • If the VIB file is in .vib format, use the following command:
      • esxcli software vib install -v <full path to the vib file>
    • To remove a VIB:

      • esxcli software vib remove --vibname=<insert_vib_name>
    • To check the network adapter:

      • ~ # lspci | grep net
        00:07:00.0 Network controller: D-Link System Inc DGE-528T Gigabit Ethernet Adapter
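    Since the `esxcli software vib list` output is plain fixed-column text, it is easy to filter for the driver you just installed. A minimal, runnable sketch - the sample listing below is invented, not from a real host; on an actual ESXi box you would pipe the live command's output instead:

```shell
#!/bin/sh
# Sketch: check whether a given VIB is installed, and at what acceptance level,
# by filtering `esxcli software vib list`-style output. The sample is invented.
vib_list='Name         Version                           Vendor  Acceptance-Level    Install-Date
-----------  --------------------------------  ------  ------------------  ------------
net-r8168    8.013.00-3vmw.510.0.0.799733      VMware  VMwareCertified     2023-01-10
net-tulip    1.1.15-1                          DLink   CommunitySupported  2023-02-02'

# On a live host this would be: esxcli software vib list | awk ...
echo "$vib_list" | awk '$1 == "net-tulip" { print $1, $4 }'
```

    The same filter works against the real command's output, which is handy when verifying a community driver took effect after a reboot.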

    #homelab #VMware #ESXi

  4. LockBit 5.0 as a new ransomware-as-a-service

    The new LockBit variant, version 5, now also works cross-platform across Windows, Linux and VMware ESXi environments. Trend Micro has analysed the new attack techniques and concludes:
    "The Windows, Linux and ESXi variants of LockBit 5.0 underscore that no operating system or platform can be considered safe from modern ransomware campaigns."
    The new cross-platform strategy thus enables attackers to strike the entire corporate network at once.
    The Windows variant of LockBit 5.0 reportedly uses heavy obfuscation and compression, loading its payload via DLL reflection while implementing anti-analysis techniques. The Linux variant has similar capabilities, with command-line options for targeting specific directories and file types. The ESXi variant specifically targets VMware virtualization infrastructure and was built to encrypt virtual machines.
    The new variants use random 16-character file extensions, include a bypass for Russian-language systems, and delete the event logs after encryption.
    "The heavy obfuscation of these new variants significantly delays the development of detection signatures, while technical improvements such as removed infection markers, faster encryption and better evasion make LockBit 5.0 considerably more dangerous than its predecessors."

    trendmicro.com/en_gb/research/

    #infosec #ransomware #raas #windows #linux #esxi #vmware #lockbit #BeDiS

  5. I may be wrong, but I think that #ESXi is the only whole product name-checked by #Mitre #Att&ck in the top-level techniques list. T1675

    They don't say "Hypervisor administration command" or anything generic like that. They call out #VMware "ESXi administration command" specifically. And that says a lot of bad things about the #security of that product.

  6. 🕷️ Scattered Spider is targeting VMware ESXi systems through social engineering alone. They impersonate help‑desk users to gain AD account resets, then elevate privileges to control vCenter and ESXi hosts. From there it’s disk‑swap, NTDS.dit exfiltration, backup sabotage, and ransomware, all in just hours. 🚨

    TL;DR
    ⚠️ ESXi hosts manage entire virtual environments and are now attack vectors
    🧠 Social‑engineering to bypass MFA and reset AD credentials
    🔐 Disk‑swap to steal NTDS.dit and cripple backups
    📈 Rapid ransomware across all VMs using root access

    bleepingcomputer.com/news/secu
    #ScatteredSpider #ESXi #VMware #IncidentResponse #security #privacy #cloud #infosec #cybersecurity #sysadmin

  7. I've finally completed most of the guides I was planning on adding to my #Homelab Wiki - now it's got guides on setting up #Portainer, #Immich, #Jellyfin, #ErsatzTV, #OpenMediaVault (#OMV), and even #HomeAssistant - all of these (besides Jellyfin and ErsatzTV, those are on #Proxmox) are hosted on my #RaspberryPi in my homelab.

    Most importantly though, I've organised the wiki a little better, into different courses. The first course details the type of hardware you're going to want to assemble - a beefy server (with only consumer parts) or a mini server (i.e. an #SBC), or whether you'd like to deploy a #NAS - followed by a course on setting up and managing a hypervisor (including #ESXi, but really, use Proxmox - which is #FOSS and plainly better).

    There's also a whole course on all sorts of 'host deployment environments' (i.e. what your application is hosted on, like a #VM, #Docker, #Kubernetes, or #LXC) you could have in your homelab. Most importantly, there's a course on networking, which covers valuable topics like setting up a domain, free or paid, and setting up a reverse proxy for serving your hosted applications publicly and securely.

    There's still some stuff I've got to add, like a complete guide on setting up #TrueNAS (which I've had set up for many years at this point, without much documentation on how I did it - so I've got to find an opportunity to replicate it, when I have extra hardware maybe), but I'm pretty happy with it at this point. If you're planning to get into homelabbing, or even if you're already in it - maybe check it out ​:blobfoxcat:​

    🔗 https://github.com/irfanhakim-as/homelab-wiki

    RE: https://sakurajima.social/notes/a9so79m6ze

  8. 🚨 The notorious hacker group #ScatteredSpider is aggressively targeting #VMware #ESXi hypervisors in US retail, airline & insurance sectors using advanced social engineering 🎭 to breach IT help desks & deploy ransomware from the hypervisor layer in just hours! ⏱️🔐 Read more: bleepingcomputer.com/news/secu #CyberSecurity #Ransomware #InfoSec
    #newz

  9. 🚨 New ransomware alert: BERT is shutting down VMware ESXi VMs before encrypting — making recovery a nightmare.

    RELIANOID helps stop these threats:
    ✅ mTLS protection on ESXi interfaces
    ✅ Layer 7 threat detection
    ✅ Hot restarts for faster response
    ✅ Real-time logs & anomaly alerts

    Stay secure. Stay proactive.

    relianoid.com/blog/bert-ransom

  10. A Question:

    So I saw the #SANS #ransomware video ("Shaking up the Ransomware Game: Introducing Scattered Spider") where they mentioned that visibility into hypervisors sucks and that actors exploit #ESXi like there's no tomorrow. So I thought I'd give my own defensive stuff a go.

    My question is basically this:

    • Does anyone know how to get their own binaries to run on ESXi? I tried my endpoint agent on #ProxMox and it ran just fine (attached pic), but ESXi implicitly prevents (unsigned) binaries from running.

    I got as far as finding esxcli software acceptance set --level=CommunitySupported, which would allow this, but then I had to create my own .vib package to install, and that's where everything ended, as I have no clue how to get something signed. So I'm asking whether any of you have had experience with this.

    It's kind of hard to develop stuff for ESXi if you have to sign every release you want to test.

    A tool for triage and analysis of ESXi logs, based on the timeline CSVs from QELP.
    It produces:
    - A combined timeline of Bash activity, logons and user activity
    - A timeline of logon events by type, along with a user/IP logon timeline
    - A summary of Bash history, network-tool usage and newly created users
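    The extraction behind that kind of logon timeline can be sketched in a few lines of shell. The log lines below are invented samples in the usual sshd "Accepted ... for USER from IP" shape, standing in for the auth log on an ESXi host:

```shell
#!/bin/sh
# Sketch: build a timestamp,user,ip logon timeline from sshd-style log lines.
# The two log lines are invented samples.
log='2024-05-01T10:02:11Z sshd[2001]: Accepted keyboard-interactive/pam for root from 192.168.1.50 port 51234 ssh2
2024-05-01T10:05:42Z sshd[2014]: Accepted keyboard-interactive/pam for admin from 192.168.1.77 port 51290 ssh2'

# Scan each accepted-logon line for the words "for" and "from" and
# emit CSV rows: timestamp,user,source-ip
echo "$log" | awk '/Accepted/ {
  u=""; ip=""
  for (i = 1; i <= NF; i++) {
    if ($i == "for")  u  = $(i+1)
    if ($i == "from") ip = $(i+1)
  }
  print $1 "," u "," ip
}'
```

    Sorting such rows by the first column is already most of a user/IP logon timeline.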

    github.com/cudeso/tools/tree/m #DFIR #Logs #esxi

  12. Pulled the plug on #ESXi & back to running #Debian on my home lab "VM Host".

    I have an old NUC 11 that I think is failing. I run ESXi on it - the free version. It has 32 GB of RAM, 1x 1 TB SSD and 1x 2 TB SSD. Can anyone recommend a good replacement?

    I am sure a lot of people will also recommend Proxmox over ESXi, but I have to run some images that are ESXi-only.

    Any suggestions would be most welcome

    #ESXi #IntelNUC #NUC #Proxmox

    I've successfully migrated my #ESXi #homelab server over to #Proxmox after a surprising bit of (unexpected) trouble - I haven't really even moved all of my old services or my #Kubernetes cluster back into it, but I'd say the part I was expecting to be the most challenging, #TrueNAS, has not only been migrated but also upgraded from TrueNAS Core 12 to TrueNAS Scale 24.10 (a HUGE jump, I know).

    Now then - I'm thinking about the best way to move forward, now that I have 2 separate nodes running Proxmox. There are multiple things to consider. I suppose I could cluster 'em up so I can manage both of them under one roof, but from what I can tell, clustering on Proxmox works the same way as Kubernetes clusters like #RKE2 or #K3s, whereby you'd want at least 3 nodes (or otherwise just 1, never 2). I could build another server - I have the hardware parts for it - but I don't think I'd want to take up more space than I already do and have 3 PCs running 24/7.

    I'm also thinking of possibly joining my 2 RKE2 clusters (1 on each node) into 1... but I'm not sure how I'd go about it having only 2 physical nodes. At the moment, each cluster has 1 master node and 3 worker nodes (VMs, of course). With only 2 physical nodes, I'm not sure how I'd spread the master/worker nodes across the 2. Maintaining only 1 (joined) cluster would be helpful though, since it'd solve my current issue of not being able to publish services from one of them "effectively" using #Ingress, as I can only port forward the standard HTTP/S ports to a single endpoint (which means the secondary cluster has to use a non-standard port instead, i.e. 8443).

    This turned out pretty long - but yea... any ideas what'd be the "best" way of moving forward if I only plan to retain 2 Proxmox nodes - Proxmox wise, and perhaps even Kubernetes wise?

  15. A new version of check_esxi_hardware, an #opensource #monitoring plugin to monitor the hardware of VMware ESXi servers, is available.

    The newest release fixes a #Python deprecation warning. More importantly, the new version removes support for legacy, meanwhile-EOL versions of Python 2 and pywbem 0.7.0.

    As the purpose of check_esxi_hardware is to query the #CIM server on an #ESXi host - and the CIM server is slated for removal from ESXi - this may be the last and final release of the plugin. 😿

    claudiokuenzler.com/blog/1455/

    For some reason, I feel like the #RKE2 cluster on my #Proxmox node is more fragile than the cluster on my #ESXi node. On the latter, I can simply shut down and boot the nodes however I want, and everything seems to get into a working state on its own. On the former, things seem to boot into a non-running state with various statuses like Unknown, CrashLoopBackOff, etc. Some of it gets solved by deleting/restarting the pods; some of it, though, requires me to run the killall script and reboot the entire node. Pretty weird, when both clusters were deployed/configured the exact same way and run the exact same version.

  17. I think it's impressive that my #homelab server is still alive and well after 3 years, having built it using cheap/used consumer (non-server) parts.

    The CPU especially is SUPER solid - I bought a used #AMD Ryzen 7 1700 back in 2021 for 589.99 MYR (~126 USD). It had a 1-year warranty, but the seller definitely did not entertain my request to have it replaced when I received it with bent pins. I was ready to just give up on it, but decided, what the heck, I'll just try to bend the pins back - and it's worked perfectly since then.

    I still have a Ryzen 7 3700X + an #MSI #B450 mATX board lying around after I upgraded to a Ryzen 7 5700X + #ASUS #B550 ITX combo, thinking maybe I could upgrade the server with them, but I don't know if it's even needed. The only real bottleneck on my server is the 4 DIMM slots on the motherboard, which have been maxed out at 64 GB of DDR4 RAM (also bought cheap), and that CPU/motherboard upgrade won't help with that.

    I'm really glad I went this route, because having that homelab server has been a gamechanger for my lifestyle - I essentially have a perfect playground to spend my precious time on. Now I really just need to devise a way to move from #VMware #ESXi to #Proxmox, but I'm too lazy to figure out how to do so without losing all of my VMs/#Kubernetes cluster.

  22. AITU Military CTF 2024: Digital Fortress: how we stood up the infrastructure for the cyber range. Part 2

    Before reading this article, I strongly recommend getting acquainted with Part 1. Introduction: this article continues the series on standing up the AITU Military CTF cyber range. In this part we'll look at major infrastructure changes, such as configuring the ESXi network, building a centralized firewall, and monitoring. I decided to write this article because there are very few articles about building a cyber range; lately, ranges have been held frequently in our country, and here we'll go through all the subtleties of configuring the infrastructure with limited resources.

    habr.com/ru/articles/812445/

    #CTF #киберполигон #ESXI #Mikrotik #OpenVPN #ipam #кибербезопасность #инфраструктура

  23. A home lab: how to build the home infrastructure of your dreams

    My first experience on Habr wasn't very positive, and I admit that looking at the comments and ratings still makes me ashamed and hurts somewhere inside. So, to correct the mistakes of my youth, I decided to write a series of articles about how to build your own "ideal" storage box, server, or mini-cloud at home. I'll share my experience, knowledge and understanding, as well as thoughts that may help anyone tackling a similar task.

    habr.com/ru/articles/808751/

    #NAS #plexmediaserver #docker #ubuntu #zfsonlinux #synology #esxi #diy #лаба #домашний_сервер

  24. Hey #vmware folks,
    I have a VM that got messed up when I deleted all snapshots.

    From what I understand, I'm missing the 02 file.

    Do I really have to revert to the beginning?

    The VM is running fine - I tried vMotion to a new datastore, but it didn't work. I don't see any snapshots I can delete.

    This happened when I tried to delete all snapshots and it failed.
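    For context on why a missing delta file hurts so much: every snapshot delta disk's descriptor names its parent, so the chain can be walked (and the break located) by reading the parentFileNameHint field. The key names (parentCID, parentFileNameHint) are the real VMDK descriptor keys, but the descriptor below is an invented sample:

```shell
#!/bin/sh
# Sketch: extract the parent disk from a VMDK delta descriptor.
# The descriptor text is an invented sample; on a host you would
# read the small .vmdk descriptor file of each -00000N delta disk.
descriptor='# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=fffffffd
parentFileNameHint="vm-000001.vmdk"
createType="vmfsSparse"'

# Print the parent this delta chains to; repeat per descriptor to walk the chain.
echo "$descriptor" | sed -n 's/^parentFileNameHint="\(.*\)"$/\1/p'
```

    If a descriptor points at a file that no longer exists on the datastore, that is where the chain is broken.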

    #vsphere #vmware #esxi #vcenter

  25. #vmx #esxi #import #proxmox
    But look how simple it is to automatically import a virtual machine from the defunct (😜) ESXi into the GOAT (👊) Proxmox VE 🫵
    peertube-blablalinux.be/w/5Pbm

  26. "@Broadcom terminates @VMware's free #ESXi #hypervisor"

    The end of an era, but no surprise. $AVGO is not a #PLG business w/ a 'bottoms-up' adoption GTM 🤷‍♂️

    There will be gnashing of teeth tho.

    Great pickup @SSharwood via @ElReg

    theregister.com/2024/02/13/bro

  27. Working on the next part of this #Synology #RS2423+ review. Coming soon…

    PART 2: #Backup and restore.

    Focusing on 3 major tools (#HyperBackup, #ActiveBackupForBusiness and #SnapshotReplication) in both client and server role.

    Taking a look at the new full system backup option that HB now offers as well as #ESXI #VM backup over #ABB

    The story begins here: blackvoid.club/synology-rs2423

    #10G #HAT3300 #homelab #selfhosting #macOS #DSM #DSM72
