-
My god... I've been having #microk8s cluster issues all week and I finally figured out that it's because of a failed auto-refresh of the core22 #snap
I've tried various approaches: clearing the cache, removing downloaded snaps manually, walking through the refresh again... and nope, it still fails at 'copy snap data'...
I know disk space isn't an issue during the upgrade and the logs are terse, so I'm really convinced I've no other path forward and that a rollback is warranted... BUT... snapd won't let me revert the package because it's not currently active (because the upgrade failed)... Hurray! When an unattended upgrade fails, you're not allowed to roll back!! Is that considered a desirable feature in a package manager?
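For anyone hitting the same wall, this is roughly the dance (a sketch, not a fix: the change ID and revision number are hypothetical placeholders, and the refresh-to-revision escape hatch is something I'd try, not something I've verified):

```shell
# Find the failed auto-refresh change and see which task died
snap changes                   # look for the "Error"-status Auto-refresh change
snap tasks 1234                # 1234 is a placeholder change ID; shows the failing "copy snap data" step

# See which core22 revisions are actually on disk
snap list --all core22

# The rollback snapd refuses, because the snap isn't currently active:
sudo snap revert core22

# Possible escape hatch (untested here): explicitly refresh to a known-good revision
# sudo snap refresh core22 --revision=1122   # revision number is a placeholder
```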
If I can't find a safe way to restore this base snap, I'll be forced to pursue more drastic measures, which could mean losing #k8s state and rebuilding my cluster... and I've literally been trying to work on disaster recovery for the cluster when this all happened 😭
-
Trying to set up #mariadb using the #linuxserverio image on #kubernetes and it's not going well...
It's not picking up the root/user passwords I'm setting, and at first I figured I'd messed something up, so I scrapped the pod, PVC and PV and started fresh.
Then I noticed it was getting OOMKilled seconds after the pod first boots up. Great! Obviously that affected the db setup and that's why I can't connect... Nope, increasing the pod's memory limit didn't fix it.
Read the LSIO docs again and noticed you don't need a password to sign in as root (as long as you're using the OS root account). Okay, I'm running the image as a non-root user, so maybe that explains why I can't sign in... But I need to do that because I'm storing db config on a PVC backed by NFS, which is picky about user access, and it's easier to run stuff as a predictable non-root user... And NFS is the only reasonable option I currently have for dynamically provisioned storage, which I need because I'm deploying MariaDB as a StatefulSet... Fuck
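For context, this is roughly the shape of the StatefulSet I'm fighting with (a sketch of my setup, not a working recipe: names, UID, sizes and storage class are placeholders; the env vars are the ones the LSIO image documents, and note the password env only takes effect when the data directory is first initialised):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb                  # placeholder name
spec:
  serviceName: mariadb
  replicas: 1
  selector:
    matchLabels: { app: mariadb }
  template:
    metadata:
      labels: { app: mariadb }
    spec:
      securityContext:
        runAsUser: 1000          # the "predictable non-root user"; placeholder UID
        fsGroup: 1000            # so the NFS-backed volume is group-writable
      containers:
        - name: mariadb
          image: lscr.io/linuxserver/mariadb:latest
          env:
            - name: PUID
              value: "1000"
            - name: PGID
              value: "1000"
            - name: MYSQL_ROOT_PASSWORD  # only applied on first init of the data dir
              value: "changeme"          # use a Secret in practice
          resources:
            limits:
              memory: 512Mi      # placeholder; bumped after the OOMKills
          volumeMounts:
            - name: config
              mountPath: /config
  volumeClaimTemplates:
    - metadata:
        name: config
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: nfs-client   # placeholder NFS provisioner class
        resources:
          requests:
            storage: 1Gi
```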
-
In my rush to get a basic Docker-build Tekton pipeline working, I made a bit of a noob mistake... using an emptyDir workspace to store data between tasks.
I didn't realise that an emptyDir workspace only persists for the duration of a single task, so at the end of the clone task it's discarded and a fresh emptyDir is supplied for the build task...
I was caught off guard cos I didn't really think that behaviour would be very useful, but after doing some reading, it's probably meant for running/testing single tasks in isolation. Using it for CI/CD would also mean you're not getting the benefit of cached data between runs.
Setting up a PVC won't take long, I'm just being lazy. Serves me right.
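The fix is small: a minimal sketch of a PipelineRun workspace backed by a PVC instead of emptyDir (pipeline and workspace names here are placeholders, not my actual resources):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: docker-build-run-   # placeholder
spec:
  pipelineRef:
    name: docker-build              # placeholder pipeline name
  workspaces:
    - name: shared-data             # the workspace the clone and build tasks share
      volumeClaimTemplate:          # fresh PVC per run; survives across tasks, unlike emptyDir
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```

One wrinkle: a volumeClaimTemplate PVC is created per run, so it persists data between tasks but not between runs; for caching across runs you'd bind a pre-created PVC with `persistentVolumeClaim.claimName` instead.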
-
Spent a little time this weekend getting to know Tekton
People use it at work to build a golden-path CI/CD pipeline in the form of a Helm chart, and it'd be nice if I could dive in and make changes to push upstream as needed.
So far, all I've really done is go through some of the examples:
https://tekton.dev/docs/getting-started/
Pretty happy with it so far: setup is much easier than Gitea Actions and I think this will just be easier to maintain. The Kubernetes/manifest-driven interface fits nicely with my GitOps/FluxCD strategy, but I can still trigger builds manually with the CLI as needed.
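Manual triggering via the CLI looks roughly like this (a sketch with placeholder pipeline, param and workspace names, not my real resources):

```shell
# List pipelines in the current namespace
tkn pipeline list

# Kick off a run, binding a param and a workspace on the fly
tkn pipeline start docker-build \
  --param repo-url=https://example.com/me/app.git \
  --workspace name=shared-data,volumeClaimTemplateFile=workspace-pvc.yaml \
  --showlog
```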
Next step is to port over some of my existing image builds and then I'll see if I can make my own image build pipeline as a helm chart so it's reusable.
Anyone have any interesting use cases for Tekton in their own work/home?
#tekton #cicd #gitops #fluxcd #docker #helm #homelab #selfhosted