home.social

191 results for “zrail”

  1. @zrail @vbfox to be fair, there's two generations of ZBT by now and #homeassistant #nabucasa could have built that in.

  2. @zrail I've been managing some #fedora #coreos hosts to run dozens of #podman containers. Works nicely.

    I can depend on zincati to auto-update the OS, and the podman-auto-update systemd service to update containers safely. #fcos updates are atomic.

    No need to do manual major distro upgrades because zincati takes care of that.
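
    (For reference, the moving pieces are tiny; a sketch, assuming the containers run under systemd units, e.g. via quadlet:)

        # containers opt in with a label on the container/unit:
        #   io.containers.autoupdate=registry
        # then enable the timer that runs podman-auto-update on a schedule:
        systemctl enable --now podman-auto-update.timer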

    Yes, you should monitor memory usage. You can run #prometheus & grafana containers to monitor it. There is also prometheus-podman-exporter to monitor podman containers with prometheus easily.

  3. @zrail If it happened more frequently I would probably be too pissed. I mean, if it starts happening more it's going to be a problem :). I'm also steering more toward local storage and building HA a level above, if possible. It's not #homeprod yet! I know people have reservations about using usb/thunderbolt for storage backends! It seems to be fine... I guess until it's not.

  4. @zrail I'm still considering keeping #coredns just for my #homelab services. #dnsdist has the ability to move traffic between "pools" by source address. So right now I'm doing that for all of my homelab sources, and everything else runs through the "home" stuff.
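
    (A sketch of the dnsdist side of that, with made-up addresses, subnets, and pool names:)

        -- dnsdist.conf (Lua); everything here is hypothetical
        newServer({address = "10.0.0.53", pool = "homelab"})  -- lab resolver
        newServer({address = "10.0.0.54"})                    -- default pool
        local lab = newNMG()
        lab:addMask("10.10.0.0/16")  -- homelab source networks
        addAction(NetmaskGroupRule(lab), PoolAction("homelab"))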

  5. @zrail That's the one iffy piece for me. The switches have the option of a local on-device controller, but the APs don't.

    For what I'm looking to do, it may not be a big deal, but something I have to discuss with others here. I'm trying to balance features with ease of use and simplicity.

    I've seen come up in a few conversations, as well as and as other options in this space.

  6. @zrail Does it fit on 'normal' hardware? (#RaspberryPi or #JetsonNano or even #Parallella.) The docs seem very intentional about requiring a coral for real work. Event categorization/identification is pure gravy, but it would be awesome gravy.

    I never grabbed a coral in the Before Times, and a quick search says that buying one now is as difficult as sourcing a Pi. (And just like everyone else, I'm also desperately seeking CM4s with memory at reasonable prices. #TuringPi ships soon.)

  7. @zrail I also thought a lot about how to set up my #Futro ThinClient.
    Use #Proxmox or just use it as a docker host???

    In the end I went with Proxmox because I want to try things, and some of them are published as full ISOs. I also wanted a stronger separation between internal and external services.

  8. @zrail 💙 at its core coder/coder creates things, turns them on, and turns them off. Anything that can be terraformed can be managed.

    Like right now I (@ghuntley) am working on an M1 mac but am doing some work on to add and support. After this is done, I will be adding support for to coder/coder.

    Joe (the maintainer of ) uses coder/coder to provision his teaching environments. See coder.com/blog/dedicated-devel

  9. Just like in #yolo fashion. I had a hell of a run yesterday when I shut off everything and set fire to it! @zrail Sir! Please go jump in a cold lake for fueling my delusions of, "everything is fine!"

    I just don't know why I run around with a hammer looking for screws! The #homelab charge continues!

    I have shut down two of my #proxmox servers and they are now running #incus on #debian

    My #UptimeKuma looks like it's been shot and bleeding out.

    and I aint #automating shit! 😂. I'm an idiot

  10. Ok quick update: I got Proxmox SDN working with VXLAN and Vnets across the cluster!

    To reproduce (a config sketch follows the steps):

    1. Install SDN per instructions (about three easy steps per node). See docs: pve.proxmox.com/wiki/Software_
    2. Add a Zone at the SDN datacenter level. Specify Zone name and Prox nodes to apply to.
    3. Add a Vnet at the SDN datacenter level. Specify zone, Vnet name, and VXLAN ID.
    4. Apply the SDN configuration; this pushes the Vnet config to each Prox node.
    5. Add/replace the interface on the target VM. In my case, for testing, I added an interface targeting the new Vnet and specified static IPv4 addresses on two VMs on separate Prox nodes, and they could ping each other.
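
    For reference, steps 2 and 3 boil down to entries like these under /etc/pve/sdn/ (the zone/vnet names, peer IPs, and VXLAN ID are made up):

        # /etc/pve/sdn/zones.cfg
        vxlan: labzone
            peers 192.168.1.11,192.168.1.12,192.168.1.13

        # /etc/pve/sdn/vnets.cfg
        vnet: labnet
            zone labzone
            tag 100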

    @zrail @r3pek @junq

    #homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting

  11. The SDN / VXLAN Proxmox saga continues...

    After posting this I noticed some strange behavior. I was getting ping packets fine, and nmap was showing the https service for my new firewall. The problem was that when I navigated to my new firewall's management site, it wouldn't work. I would get ssl_error_rx_record_too_long on Firefox and timeouts on Chrome.

    I opened up Wireshark and noticed the return traffic for SSL was severely delayed and appeared malformed.

    What I missed in my instructions is that VXLAN takes up 50 bytes for encapsulation, so for the endpoints within the internal network I had to set a custom MTU of 1450 so that the VXLAN encapsulation could happen within the 1500 limit of the interface on the Proxmox nodes.
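
    (For IPv4 the 50 bytes break down as inner Ethernet 14 + outer IP 20 + outer UDP 8 + VXLAN header 8, hence 1500 - 50 = 1450. On a Linux guest that's just the following, with eth0 standing in for the VM's interface:)

        ip link set dev eth0 mtu 1450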

    After configuring this on one of the internal machines and confirming I could reach the site, I thought maybe I also had to set the 1450 MTU on the firewall's internal interface. I did that and immediately got rapid connect-and-drop cycles on my home network, so I reverted the change. I really don't know why changing the MTU on the internal interface of the firewall would cause that on my main network, but it did. It seems any device on that internal LAN other than the firewall will need the MTU change for all the traffic to work properly.

    Now it looks like the next thing to do is to start putting various machines behind the new routers to start segmenting my lab network, and get it off of the flat network for increased security and traffic isolation and control.

    The Proxmox guide I linked earlier will give more details on the 50 byte allocation for VXLAN.

    @zrail @r3pek @junq @train

    #homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting #mastodon #MTU

  12. *holds flashlight up to chin*

    It was DNS the WHOLE TIME!

  13. Not me significantly glancing at my #selfhosted #forgejo again.

    I should actually calculate the 9s on that thing so I can say with confidence that it has better uptime than GitHub.
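
    (The nines math is just allowed downtime per year; a quick sketch:)

        # allowed downtime per year at a given availability
        for nines in (0.99, 0.999, 0.9999):
            hours = (1 - nines) * 365.25 * 24
            print(f"{nines:.2%} uptime -> {hours:.1f} h/year of downtime")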

  14. RustFS, a distributed object storage system written by LLMs (but in Rust, so it's fine, everything is fine), shipped with a hardcoded, fully privileged API token, apparently from Alpha.13 through Alpha.77.

    If you're running any of those versions, you probably want to upgrade asap.

    nvd.nist.gov/vuln/detail/CVE-2

    #rustfs #selfhosted #selfhosting #homelab

  15. I'm starting to regret my choices wrt installing #Proxmox and one big VM to run Docker containers.

    The problem is that if I give the VM too much memory the host starts swapping, which causes perf to tank, but the line of "too much" is hard to determine because the VM seems to take more than its share eventually.
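
    (One knob worth checking, assuming ballooning is enabled: give the balloon an explicit floor so memory can be reclaimed from the guest before the host swaps. The VMID and sizes below are made up:)

        # VM 100: 16 GiB ceiling, balloon may reclaim down to 8 GiB
        qm set 100 --memory 16384 --balloon 8192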

    Maybe I should just pave the machines with plain Debian or something. Or maybe uCore? Anyone have thoughts about #CoreOS and derivatives?

    #homelab #selfhosted

  16. In lieu of "normal" dynamic #dns, I now have the following monstrosity:

    1. #CoreDNS running in @flydotio (static IP and anycast UDP, naturally) with the JSON plugin, targeting a VPS

    2. VPS running Caddy proxies the HTTP request from the JSON plugin to a VM running in my #homelab over a @tailscale subnet router

    3. VM makes an HTTP request back to an app running on the VPS to get external IP

    4. VM returns JSON formatted how the JSON plugin expects

    5. VPS returns the proxied request back to CoreDNS running at Fly

    6. CoreDNS caches the response and returns it to the requestor

    Definitely won't regret this any time soon.
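
    (Steps 3 and 4 on the VM amount to something like the sketch below; the URL and the JSON shape are guesses, since the plugin's exact format isn't shown here:)

        # Minimal sketch of the VM side: fetch the external IP from the VPS,
        # answer with JSON for CoreDNS. URL and field names are hypothetical.
        import json
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        VPS_IP_URL = "http://vps.example.com/ip"  # hypothetical endpoint

        class DynDnsHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                with urllib.request.urlopen(VPS_IP_URL) as resp:
                    external_ip = resp.read().decode().strip()
                # hypothetical answer shape; match whatever the plugin expects
                body = json.dumps({"answer": [
                    {"name": "home.example.com.", "type": "A",
                     "ttl": 300, "data": external_ip},
                ]}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("", 8080), DynDnsHandler).serve_forever()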

  17. Stormlight Archive is Marvel fanfic and Kaladin is Iron Man send toot.

    #StormlightArchive

  18. I now have a tiny #Kubernetes to play with. I'm using #k0s with one controller VM and two workers, one VM and one bare metal, all running on Debian.

    So far I installed cilium (although I'm not sure why I did that, tbh) and set up democratic-csi such that I can create PersistentVolumeClaims on my big ZFS array over NFSv4.
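
    (The PVC half of that is just a claim against the democratic-csi storage class; "zfs-nfs" below is a made-up class name:)

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: test-claim
        spec:
          accessModes:
            - ReadWriteMany        # NFS supports multi-node mounts
          storageClassName: zfs-nfs
          resources:
            requests:
              storage: 10Gi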

    I think my next step is to install the #Tailscale operator and actually spin up a service.

    Everything has "test" in the name so probably after I get a service running I'll tear it all down and make a for-realsies cluster.

    #HomeLab #SelfHosted