home.social

Search

55 results for “nicr9”

  1. I've obviously grown accustomed to the breakneck pace of releases back in the #booklore / #vibecoding days

    Migrated to #grimmory after the fork and 2.3.0 release... I've been keeping an eye on pull requests and patiently awaiting the next release... It looks like they're really working hard on overhauling the insides and not focusing too hard on new features right now

    I think the future is bright ✨

  2. I have a raspberry pi 4 (8GB) and I wanted to try out Kodi... Stupid question... Is it possible/advisable to containerise it and run it in kubernetes (I'm thinking microk8s) instead of running a dedicated kodi distro?

    If this sounds crazy, my reasoning is that I want to run some object storage for an unrelated project on the same raspi (but kodi wouldn't rely on it for media, it would use streaming plugin to connect to Jellyfin hosted somewhere else on my LAN)

    Also, I want to manage the microk8s with my existing fluxcd stuff so I can learn about multi-cluster management. I'm hoping to have many "edge raspis" hosting various services throughout the house in the future, and this would be a great PoC

    #selfhosting #selfhost #homelab #kodi #mediaserver #kubernetes #k8s

  3. My god... I've been having #microk8s cluster issues all week and I finally figured out that it's because of a failed auto-refresh of the core22 #snap

    I've tried various approaches; clearing cache, removing downloaded snaps manually and I've been walking through the refresh again and nope - still fails to 'copy snap data'...

    I know that disk space isn't an issue during the upgrade and the logs are terse, so I'm really convinced I've no other path forward and that a rollback is warranted... BUT... Snapd won't allow me to revert the package because it's not currently active (because the upgrade failed)... Hurray! When an unattended upgrade fails, you're not allowed to roll back!! Is that considered a desirable feature in a package manager?

    If I can't find a safe way to restore this base-snap, I'll be forced to pursue more drastic measures which could result in loss of #k8s state and I'll need to rebuild my cluster... and I've literally been trying to work on disaster recovery for the cluster when this all happened 😭
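
Not from the post, but for anyone in the same hole: the first step is usually working out which older revision of the base snap is still on disk. A hypothetical helper that parses `snap list --all core22` output (assuming the usual Name/Version/Rev/Tracking column layout) might look like this — whether snapd will actually let you revert to it is a separate battle:

```python
# Hypothetical helper (not from the post): given the text output of
# `snap list --all <snap>`, pick out the older on-disk revision you'd
# want to restore. Assumes Rev is the third whitespace-separated column.
def previous_revision(snap_list_output: str) -> int:
    """Return the second-highest revision number found in the output."""
    revisions = []
    for line in snap_list_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) >= 3:
            revisions.append(int(fields[2]))  # Rev column
    if len(revisions) < 2:
        raise ValueError("no older revision is present on disk")
    return sorted(revisions)[-2]
```

With that revision in hand, `snap revert <snap>` is the supported path when snapd allows it; anything beyond that is node-surgery territory.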

  8. Just had a brainwave! If there was a #canbus integration, #homeassistant would know when the car needs refueling

    Turns out it's not an original idea, which is fantastic! This WiFi-enabled CAN adaptor looks like it's worth picking up...

    crowdsupply.com/meatpi-electro

    Has anyone played around with getting telemetry from their car into HA or any other fun integrations?

    #selfhosted #homelab
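
For the refueling idea specifically: if the adaptor exposes raw OBD-II data, fuel level is standard service 01, PID 0x2F, where the single response byte A scales to a percentage as 100 × A / 255. A minimal decoder sketch (how the value then reaches Home Assistant — MQTT, a custom integration, etc. — is left open):

```python
# Minimal sketch: decoding the OBD-II fuel-tank-level PID (service 01,
# PID 0x2F). The single data byte A maps to a percentage as 100 * A / 255.
def fuel_level_percent(a: int) -> float:
    """Convert the one data byte of PID 0x2F into a fuel level percentage."""
    if not 0 <= a <= 255:
        raise ValueError("PID 0x2F carries a single byte (0-255)")
    return 100.0 * a / 255.0
```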

  9. Trying to set up #mariadb using the #linuxserverio image on #kubernetes and it's not going well...

    It's not picking up the root/user passwords I'm setting and at first I was willing to guess I messed up something so I scrapped the pod, pvc and pv and started fresh.

    Then I noticed it was getting OOMKilled seconds after the pod first boots up. Great! Obviously, that affected the db setup and that's why I can't connect... Nope, increasing the pod memory didn't fix it

    Read the LSIO docs again and noticed you don't need a password to sign into root (as long as you're using the OS root account). Okay, I'm running the image as a non-root user, so maybe that explains why I can't sign in... But I need to do that because I'm storing db config on a PVC backed by NFS, which is picky about user access, and it's easier to run stuff as a predictable non-root user... And NFS is the only reasonable option I currently have for dynamically provisioned storage, which I need because I'm deploying MariaDB as a statefulset... Fuck
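
Not from the post, but one way out of the UID tangle might be to pin the whole pod to one predictable non-root identity and tell the LSIO image about it at the same time. A rough sketch — UID/GID 1000 is an assumed value that would need to match the NFS export's owner, and note that `fsGroup` ownership changes don't apply on many NFS setups:

```yaml
# Hedged sketch (not from the post): fix the pod's user/group so the
# NFS-backed PVC permissions line up. 1000 is an assumption; LSIO images
# also read PUID/PGID environment variables for their internal user.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      securityContext:
        runAsUser: 1000   # assumed UID; match the NFS export's owner
        runAsGroup: 1000
        fsGroup: 1000     # applied to volumes only where the driver supports it
      containers:
        - name: mariadb
          image: lscr.io/linuxserver/mariadb:latest
          env:
            - name: PUID
              value: "1000"
            - name: PGID
              value: "1000"
```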

  13. In my rush to get a basic docker build tekton pipeline working I made a bit of a noob mistake... Using an emptyDir workspace to store data between tasks.

    I didn't realise that the emptyDir workspace is only persisted for the duration of a specific task so at the end of the clone task it's discarded and a fresh emptyDir is supplied for the build task...

    I was caught off guard cos I didn't really think that would be too useful but after doing some reading it's probably used when running/testing single tasks in isolation. Using it for CI/CD would mean you're not getting the benefit of cached data between runs.

    Setting up a PVC won't take long, I'm just being lazy. Serves me right.

    #tekton #cicd #kubernetes #k8s #homelab #selfhosted
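
The PVC fix described above doesn't even need a standalone claim — a `volumeClaimTemplate` on the PipelineRun workspace gets a fresh PVC per run that both tasks share. A rough sketch (pipeline and workspace names are placeholders, not from the post):

```yaml
# Hedged sketch: a PipelineRun workspace backed by a volumeClaimTemplate
# so the clone task's output survives into the build task.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: docker-build-run-
spec:
  pipelineRef:
    name: docker-build        # placeholder pipeline name
  workspaces:
    - name: shared-data       # placeholder workspace name
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
```

One trade-off: a per-run PVC still starts empty each run, so a long-lived claim is the better bet if cross-run caching is the goal.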

  17. Spent a little time this weekend getting to know Tekton

    People use it at work to build a golden path CI/CD pipeline in the form of a helm chart and it'd be nice if I could dive in and make changes to push upstream as needed.

    So far, all I've really done is gone over some of the examples:

    tekton.dev/docs/getting-starte

    Pretty happy with it so far, setup is much easier than gitea actions and I think this will just be easier to maintain. The kubernetes/manifest-driven interface fits nicely with my gitops/fluxcd strategy but I can still trigger builds manually with the CLI as needed.

    Next step is to port over some of my existing image builds and then I'll see if I can make my own image build pipeline as a helm chart so it's reusable.

    Anyone have any interesting use cases for Tekton in their own work/home?

    #tekton #cicd #gitops #fluxcd #docker #helm #homelab #selfhosted

  22. @arichtman 100%, layers are the way to go!

    Ohhh, I'm interested in the machine identity aspect too. I was thinking about deploying #openbao at some point but I'm going to have to check out SPIFFE/Spire 👍

    Okay, authentik is up! Took a while, I was fighting against flux and the helm release because it deployed with the wrong StorageClass (I forgot to have that configuration ready before release.) Helm wasn't able to modify the PVCs because they're immutable, updating the release has to wait for the initial release to succeed (which it won't) or timeout, and flux is quiet on the reasons for all of this unless you know where to look 😔 lots of learning was had though!

    Anyway, admin and personal user accounts created, MFA enabled. Got my first application integrated too! (actual budget)

    What next? The world is my oyster... Probably gitea or semaphore. I'm hesitant to integrate services like jellyfin before I have more users onboarded and this gives me an opportunity to experiment with other edge cases like other providers and service accounts and such

    #selfhosted #homelab #authentik #sso #fluxcd #gitops #helm

  24. Part of my project for adopting gitops in my homelab has been setting up git hosting. I've selected Gitea for this instead of Gogs because I thought I'd have an easier time bootstrapping fluxcd...

    Now that I've had more time to sit with the docs and learn more about Gitea, I've noticed it has broad support for acting as a package registry too! It seems like I can host helm charts and docker images alongside my infra as code! This is fantastic, it'll greatly simplify my architecture. Two birds, one container!

    #gitops #homelab #selfhost #selfhosting #git #gitea #fluxcd #helm #docker #containers

  25. Anyone using terraform/opentofu for their homelab setups? Either on infra level or for CM?

    I've made it a project for this year to get everything managed via gitops. I'm taking it step by step and as such I haven't locked down manual write access so that I can tinker with stuff and troubleshoot as needed.

    I'm finding that I need a good way to spot state drift so that I get notified if I forget to correct things afterwards. I think this is going to be less frustrating than fighting against enforced state while I get my bearings.

    I guess I could use a cron job or a systemd timer unit, unless someone has a better recommendation? I'd like to manage all the TF using fluxcd eventually, but I think it's too early to start enforcing desired state right now. I'm open to suggestions...?

    #terraform #opentofu #homelab #selfhost #gitops #fluxcd #IaC #infrastructureascode
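
One building block for the cron/timer idea: `tofu plan -detailed-exitcode` exits 0 when state matches, 2 when there is drift, and 1 on error, so a periodic job only needs to classify the exit code. A minimal sketch — the notification side is left as an assumption, and the command is injectable mostly so the logic is easy to test:

```python
# Hedged sketch: classify the result of `tofu plan -detailed-exitcode`
# (exit 0 = in sync, 2 = drift, anything else = error) for a periodic
# drift check. Where the "drift detected" message goes is up to you.
import subprocess

def check_drift(cmd=("tofu", "plan", "-detailed-exitcode", "-no-color")) -> str:
    """Run a plan and summarise the outcome; intended for a cron/timer job."""
    rc = subprocess.run(cmd, capture_output=True).returncode
    if rc == 0:
        return "in sync"
    if rc == 2:
        return "drift detected"  # hook a notification here (e.g. email/push)
    return f"plan failed (exit {rc})"
```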

  26. @Viss @Sempf Do you mean blocking if they're pissed at you calling them out?

    I feel like blocking, while defo a step you can take to shield yourself, doesn't do much to encourage/promote the welcome culture... Do we need to start using a hashtag? Something like #welcomeculture or #shamefree ...