home.social

#infrastructureascode — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #infrastructureascode, aggregated by home.social.

  1. 🚀 𝗤𝘂𝗶𝗰𝗸 𝗴𝘂𝗶𝗱𝗲 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗹𝗲

    Deploy 𝗥𝗘𝗟𝗜𝗔𝗡𝗢𝗜𝗗 𝗟𝗼𝗮𝗱 𝗕𝗮𝗹𝗮𝗻𝗰𝗲𝗿 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝘃𝟴 on 𝗔𝗪𝗦 with 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺 easily using the official module.

    ✔️ Ready-to-use infrastructure (VPC, subnet, security groups)

    ✔️ EC2 instance with RELIANOID AMI

    ✔️ SSH and Web GUI access

    ✔️ Clean teardown with terraform destroy

    👉 relianoid.com/resources/knowle

  2. “Migrations fail when visibility is stale, drift grows, and cutovers go manual.”

    Migrating Puppet environments doesn’t have to be painful.

    Tony Green shares hard‑won lessons from real-world migrations and how to stay in control when things get messy.

    If you’re planning a Puppet migration (or already in the middle of one), this is well worth a read:

    puppet.com/blog/puppet-mirgrat

  3. I just added #Fedora 44 to our Integration Test Target (ITT) lineup:
    👉 github.com/orgs/foundata/repos

    🔍 Looking for #Linux #Containers for your CI/CD pipeline? We’ve built a collection of OCI images:

    ✅ fully functional systemd (not just a shim!)
    ✅ unprivileged execution support, perfect for tools like #Podman.
    ✅ ideal for #Ansible #Molecule testing, see them in action with a collection: github.com/foundata/ansible-co
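
    A quick sketch of what that looks like rootless (the image path below is hypothetical; the real names are in the linked repos):

    # Run an ITT image with systemd as PID 1, no root required
    podman run -d --name itt-test --systemd=always ghcr.io/foundata/example-itt:fedora-44
    # Verify systemd is actually managing units inside
    podman exec itt-test systemctl is-system-running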

    #Automation #DevOps #OpenSource #InfrastructureAsCode #foundata

    @fedora
    @ansible

  4. A lot of teams are being told to “use AI in ops” right now. The harder part is figuring out *where it actually helps* day to day without adding risk, noise, or another thing to babysit.

    If you’re curious (or skeptical 👀) about AI in ops, join Robin Tatam and Jason St-Cyr as they share their thoughts on where AI can realistically fit into infrastructure operations today. No magic, just using good tools to do better.

    👉 puppet.com/resources/events/we

  5. 📢 Puppet Continuous Delivery 5.15.0 available with improvements for stability, security, integrations, and usability.

    Highlights include:
    - New external_webhook_url support for proxy-based deployments
    - Impact Analysis updates for Pipelines as Code
    - Clearer GitLab commit status reporting
    - Amazon Linux 2023 support for Docker-based installs
    - Security and dependency updates addressing reported CVEs

    Full release notes:
    help.puppet.com/cdpe/current/C

  6. Puppet Security Compliance Management 3.7.0 is out!

    This release focuses on keeping compliance stable as environments scale:
    - New CIS benchmarks for modern Linux, macOS, and Windows 11
    - More predictable scan performance with tunable JVM memory
    - Stronger session and GraphQL API controls
    - Security fixes and dependency updates (CVE items in the release notes!)

    👇 Check out the release notes:
    help.puppet.com/scm/current/Co

  7. Stop the Stash-Pop Panic! Why Git Worktree is my IaC Game Changer.

    Have you ever been deep into a complex feature branch, and suddenly… BOOM. A critical bug in main or production needs your immediate attention.

    You reach for git stash. You pray you won't forget where you were. You switch. You fix. You stash pop… and then the anxiety hits. Wait, which stash was that? Did I just overwrite my local terraform state?

    For me, this was the ultimate flow-killer. Until I integrated Git Worktree into my workflow.

    The Problem with the "Standard" Way:
    As an IaC specialist, my changes aren't just code; they represent infrastructure states. Standard branching meant:
    * git stash my complex IaC changes.
    * git checkout main and wait for the local environment to sync.
    * Fix the bug, deploy, and verify.
    * git checkout feature and wait again.
    * git stash pop and spend 15 minutes regaining focus.

    The Solution: Git Worktree
    Git Worktree allows you to have multiple checkouts of the same repository in different directories simultaneously. It's a game changer.
    Instead of switching branches in one folder, I simply add a new worktree:
    git worktree add ../hotfix-folder main
    * Zero Context Switching: My feature branch remains open and untouched in its own folder.
    * Instant Parallelism: I can run a long Terraform plan in one worktree while fixing a bug in another.
    * No Stash Chaos: No more "which stash is which?" or accidental data loss.
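
    A minimal session, for anyone who wants to try it (folder names are just examples):

    # Park a second checkout of main next to your repo
    git worktree add ../hotfix-folder main
    # ...fix, commit, push from ../hotfix-folder...
    # See all active worktrees, then clean up
    git worktree list
    git worktree remove ../hotfix-folder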

    The PyCharm Factor:
    I’m a dedicated PyCharm fan. I love its built-in Shelf tools for quickly setting changes aside. But for IaC, where context is everything, Worktree takes it to the next level. It’s not about replacing PyCharm’s tools; it’s about giving your IDE multiple entry points into the same project state.

    The Takeaway:
    A worktree is essentially a branch that lives in its own directory. It’s the fastest way to handle "urgent" tasks without losing your "deep work" momentum. If you’re tired of the stash/pop dance, this is your sign to switch.

    #git #gitworktree #iaas #infrastructureascode #pycharm #devops #productivity #workflow #softwareengineering #cloudinfrastructure

  8. Lucee in a Box: The Ultimate Guide to Containerized Dev Servers

    2,726 words, 14-minute read time.

    The Modern ColdFusion Workspace: Transitioning to Lucee in a Box

    The shift from traditional, monolithic server installations to containerized environments has fundamentally altered how we perceive modern development within the Lucee ecosystem. For years, the standard approach involved installing a heavy application server directly onto a local machine, often leading to a “polluted” operating system where various versions of Java and Lucee competed for resources and environment variables. By adopting a “Lucee in a Box” methodology, we decouple the application logic from the underlying hardware, allowing for a portable, reproducible, and lightweight development stack. This transition is not merely about convenience; it is a strategic move toward parity with production environments where high availability and rapid scaling are the norms. In this architecture, we utilize Docker to encapsulate the Lucee engine, the web server, and the necessary configuration files into a single unit that can be spun up or destroyed in seconds, ensuring that every member of a development team is working within an identical, script-driven environment.

    However, the true complexity of this setup emerges when we move beyond simple “Hello World” examples and begin integrating with the existing corporate infrastructure. In my own workflow, I rely heavily on a network of internal web services that act as the primary conduit for data residing in our production databases. These services are vital because they provide a sanitized, governed layer of abstraction over raw SQL queries, ensuring that sensitive data is handled according to internal compliance standards. When we containerize Lucee, we aren’t just running a script; we are placing a small, isolated node into a complex network. The challenge then becomes ensuring this isolated container can “see” and communicate with those internal services as if it were a native part of the network, all while maintaining the security boundaries that containerization is designed to provide.

    The Data Silo Crisis: Overcoming Networked Service Isolation

    One of the most significant hurdles in modernizing a CFML stack is the inherent isolation of the Docker bridge network, which often creates what I call a “Data Silo” during local development. When a developer attempts to call an internal web service—perhaps a REST API that fetches real-time production metrics or user permissions—from within a container, the request often hits a wall because the container’s internal DNS does not naturally resolve local intranet addresses. This creates a frustrating disconnect where the application works perfectly in the legacy local install but fails within the containerized environment. This disconnect is more than a minor annoyance; it leads to significant delays in the development lifecycle as engineers struggle to pipe in the data necessary for testing complex business logic. Without a seamless connection to these internal services, the “Lucee in a Box” becomes an empty vessel, incapable of performing the data-intensive tasks required in a modern enterprise setting.

    To resolve this, we must look at how the container perceives the outside world and how the host machine facilitates that visibility. In many corporate environments, production data is guarded behind strict firewall rules and SSL requirements that expect requests to originate from known entities. When I utilize internal web services to provide data from a production database, the Lucee container must be configured to pass through the host’s network or be explicitly granted access to the internal DNS suffixes. Failure to address this at the architectural level results in “unreachable host” errors or SSL handshake failures that can derail a project for days. By understanding that the container is a guest on your network, we can begin to implement the routing and trust certificates necessary to turn that siloed container into a fully integrated node capable of consuming live data streams securely and efficiently through modern CFScript syntax.

    The Blueprint: Implementing Lucee and MariaDB via Docker Compose

    To move from theory to implementation, we must define the orchestration layer that brings our environment to life. The docker-compose.yml file is the definitive source of truth for the development stack, eliminating the “it works on my machine” excuse by codifying the server version, database configuration, and network paths. In the professional workflow I advocate, this file sits at the root of your project. It defines a lucee service using the official Lucee image—optimized for performance—and a mariadb service to handle local data persistence. Crucially, we use volumes to map your local www folder directly into the container’s web root. This means that as you write your CFScript in your preferred IDE on your host machine, the changes are reflected instantly inside the container without requiring a rebuild or a manual file transfer.

    The following configuration provides a professional-grade starting point. It establishes a dedicated network for our services and ensures that Lucee has the environment variables necessary to eventually automate its datasource connections. By mounting the ./www directory, we ensure our code remains on our host machine where it can be version-controlled, while the ./db_data volume ensures our MariaDB data persists even if the container is destroyed and recreated.

    version: '3.8'
    
    services:
      # The Database Engine
      mariadb:
        image: mariadb:10.6
        container_name: lucee_db
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: root_password
          MYSQL_DATABASE: dev_db
          MYSQL_USER: dev_user
          MYSQL_PASSWORD: dev_password
        volumes:
          - ./db_data:/var/lib/mysql
        networks:
          - dev_network
    
      # The Lucee Application Server
      lucee:
        image: lucee/lucee:5.3
        container_name: lucee_app
        restart: always
        ports:
          - "8080:8888"
        environment:
          # Injecting DB credentials for CFConfig or Application.cfc
          - DB_HOST=mariadb
          - DB_NAME=dev_db
          - DB_USER=dev_user
          - DB_PASSWORD=dev_password
          - LUCEE_ADMIN_PASSWORD=server_admin_pass
        volumes:
          - ./www:/var/www
          - ./config:/opt/lucee/web
        depends_on:
          - mariadb
        networks:
          - dev_network
    
    networks:
      dev_network:
        driver: bridge
    

    Deployment Strategy: Running Your New Containerized Stack

    Once the docker-compose.yml file is in place, initializing the environment is a matter of a single terminal command. By executing docker-compose up -d from the root of your project directory, the Docker engine pulls the specified images, creates the isolated virtual network, and establishes the volume mounts. This process ensures that your MariaDB instance is ready to receive connections before the Lucee server fully initializes. For developers who rely on internal web services, this is where the containerized approach proves its worth. Because Lucee is running in an isolated network but can be configured to have access to the host’s bridge or external DNS, it can safely consume external APIs while maintaining a clean, local database for session state or cached production data. This setup provides the exact same architectural “feel” as a high-traffic production cluster, but contained entirely within your local hardware.
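
    For reference, the day-to-day lifecycle is only a handful of commands (service names match the compose file above):

    docker-compose up -d          # pull images, create the network and volumes, start both services
    docker-compose ps             # confirm mariadb and lucee are up
    docker-compose logs -f lucee  # follow the Lucee server output
    docker-compose down           # stop and remove containers; ./db_data persists on disk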

    The beauty of this system lies in its maintenance-free nature and the elimination of the “dependency hell” that often plagues legacy ColdFusion developers. If you need to test your CFScript against a different version of Lucee or a newer patch of MariaDB, you simply update the version tag in the YAML file and run the command again. There is no need to uninstall software, clear registry keys, or worry about Java version conflicts on your host machine. This modularity is why I utilize internal web services to provide data from production into this local box; the container acts as a secure, high-speed proxy. You can pull the data you need via an internal API call, store it in the MariaDB container, and work in an isolated state without ever risking the integrity of the actual production database.

    Root Cause: Why Standard Containers Fail at Internal Service Integration

    The primary reason most off-the-shelf Lucee container configurations fail when attempting to consume internal web services is a fundamental lack of trust—specifically, the absence of internal SSL certificates within the Java KeyStore. When I use web services hosted within my network to provide data from a production database, those services are almost always secured via an internal Certificate Authority (CA) that is not recognized by the default OpenJDK installation inside the Lucee container. This results in the dreaded “PKIX path building failed” error the moment a cfhttp call is initiated via CFScript to an internal endpoint. To solve this, the Dockerfile must be modified to perform a “copy and import” operation during the image build phase, where the internal CA certificate is added to the Java security folder and registered using the keytool utility. This ensures that the underlying Java Virtual Machine (JVM) trusts the internal network’s identity, allowing for encrypted, secure data transmission from the production-proxy services to the local development environment.
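
    As a sketch, the Dockerfile addition might look like this (the cacerts path varies by base image and Java version; "changeit" is the stock keystore password):

    FROM lucee/lucee:5.3
    # Copy the internal CA into the image and register it with the JVM trust store
    COPY certs/internal-ca.crt /tmp/internal-ca.crt
    RUN keytool -importcert -noprompt -trustcacerts \
            -alias internal-ca \
            -file /tmp/internal-ca.crt \
            -keystore "$JAVA_HOME/lib/security/cacerts" \
            -storepass changeit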

    Beyond the cryptographic hurdles, there is the issue of routing and “Host-to-Container” communication that often stymies developers new to the Docker ecosystem. In a standard Docker setup, the container is wrapped in a layer of Network Address Translation (NAT) that makes it difficult to reach services sitting on the developer’s physical host or the wider corporate VPN. To bridge this gap, we often utilize the extra_hosts parameter within our docker-compose configuration, which effectively injects entries into the container’s /etc/hosts file. This allows us to map a friendly internal domain name, like services.internal.corp, directly to the IP address of the host machine or the VPN gateway. By explicitly defining these routes, we bypass the limitations of Docker’s isolated bridge and enable the Lucee engine to reach out to the web services that house our production data. This architectural “handshake” between the containerized Lucee instance and the physical network is the secret sauce that transforms a basic dev box into a high-fidelity replica of the production ecosystem.
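
    In docker-compose terms, that is a small addition to the lucee service (the hostname here is hypothetical; host-gateway requires Docker 20.10+):

    services:
      lucee:
        extra_hosts:
          # Resolve the internal service name to the Docker host's gateway IP
          - "services.internal.corp:host-gateway"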

    Deep Dive: Consuming Internal Web Services via CFScript

    With the network and security infrastructure in place, we can finally focus on the implementation layer: the CFScript that handles the data exchange. In a modern Lucee in a Box setup, I favor a service-oriented architecture where a dedicated DataService.cfc handles all interactions with the internal network. Using the http service in CFScript, we can construct requests that include the necessary authentication headers, such as JWT tokens or API keys, required by the internal production data services. The beauty of this approach is that the CFScript remains agnostic of the container’s physical location; as long as the Docker networking layer is correctly mapping the service URL to the internal network, the cfhttp call proceeds as if it were running on a native server. This allows us to maintain a clean, readable codebase that utilizes the latest CFScript features, such as cfhttp(url=targetURL, method="GET", result="local.apiResponse"), while the heavy lifting of network routing is handled by the Docker daemon.
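
    A minimal DataService.cfc sketch along those lines (component name, endpoint, and auth header are assumptions):

    component {
        public any function fetchFromInternal(required string endpoint) {
            local.apiResponse = "";
            cfhttp(url = arguments.endpoint, method = "GET", result = "local.apiResponse") {
                // Hypothetical bearer token kept in the application scope
                cfhttpparam(type = "header", name = "Authorization", value = "Bearer #application.apiToken#");
            }
            return deserializeJSON(local.apiResponse.fileContent);
        }
    }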

    The real power of this integration is realized when we use these internal web services to populate our local MariaDB instance with a “snapshot” of production-like data. Rather than dealing with massive, cumbersome database dumps that can compromise data privacy, we can write an initialization script in CFScript that queries the internal web services for the specific datasets required for a given task. This script can then parse the returned JSON and perform a series of queryExecute() commands to populate the local MariaDB container. This “just-in-time” data strategy ensures that the developer is always working with relevant, fresh data without the security risks associated with a direct connection to the production database. By leveraging the containerized Lucee instance as a smart bridge between internal network services and local storage, we create a development environment that is not only isolated and secure but also incredibly data-rich and performant.
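
    Such a seed script can be as small as this sketch (endpoint, table, and columns are assumptions):

    // Pull a focused dataset and replay it into the local MariaDB container
    local.users = new DataService().fetchFromInternal("https://services.internal.corp/api/users");
    for (user in local.users) {
        queryExecute(
            "INSERT INTO users (id, name, email) VALUES (:id, :name, :email)",
            {
                id:    { value: user.id, cfsqltype: "cf_sql_integer" },
                name:  { value: user.name, cfsqltype: "cf_sql_varchar" },
                email: { value: user.email, cfsqltype: "cf_sql_varchar" }
            },
            { datasource: "dev_db" }
        );
    }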

    Environment Variable Injection: The CFConfig and CommandBox Synergy

    To achieve a truly “hands-off” configuration within a Lucee in a Box environment, we must move away from the manual web-based administrator and toward a purely scripted setup. This is where the combination of CommandBox and the CFConfig module becomes indispensable. By using a .cfconfig.json file or environment variables prefixed with LUCEE_, we can define our MariaDB datasource connections, internal web service endpoints, and mail server settings without ever clicking a button in the Lucee UI. In a professional workflow, this means the docker-compose.yml file serves as the master controller, injecting credentials and network paths directly into the Lucee engine at runtime. For instance, by setting LUCEE_DATASOURCE_MYDB as an environment variable, the containerized engine automatically constructs the connection to the MariaDB container, ensuring that our CFScript-based queryExecute() calls have a reliable target the moment the server is healthy.
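
    A sketch of what that file can look like, assuming CFConfig's ${ENV} interpolation picks up the variables injected by docker-compose:

    {
      "datasources": {
        "dev_db": {
          "dbdriver": "MySQL",
          "host": "${DB_HOST}",
          "port": "3306",
          "database": "${DB_NAME}",
          "username": "${DB_USER}",
          "password": "${DB_PASSWORD}"
        }
      }
    }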

    This approach is particularly powerful when dealing with the internal web services that provide our production data. Since these services often require specific API keys or internal proxy settings, we can store these sensitive values in an .env file that is excluded from our Git repository. When the container starts, these values are mapped into the Lucee process, allowing our CFScript logic to access them via system.getEnv(). This ensures that our local development environment remains a mirror of our production logic while maintaining a strict separation of concerns between the application code and the infrastructure-specific secrets. By automating the configuration layer, we eliminate the risk of manual setup errors and ensure that every developer on the team can spin up a fully functional, networked-aware Lucee instance in a single command.
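
    The same idea works without CFConfig via Application.cfc, since Lucee exposes injected variables under server.system.environment (a sketch; the MariaDB JDBC driver being available is an assumption):

    component {
        this.name = "luceeInABox";
        this.datasources["dev_db"] = {
            class: "org.mariadb.jdbc.Driver",
            connectionString: "jdbc:mariadb://#server.system.environment.DB_HOST#:3306/#server.system.environment.DB_NAME#",
            username: server.system.environment.DB_USER,
            password: server.system.environment.DB_PASSWORD
        };
    }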

    Advanced Networking: Bridged Access to Production-Proxy Services

    The final piece of the Lucee in a Box puzzle involves fine-tuning the Docker network to handle the high-latency or high-security requirements of internal web services. When our CFScript makes a request to a service that pulls from a production database, we are often traversing multiple layers of internal routing, including VPNs and load balancers. To optimize this, we can configure our Docker bridge network to use specific MTU (Maximum Transmission Unit) settings that match our corporate network’s infrastructure, preventing packet fragmentation that can lead to mysterious request timeouts. Furthermore, by utilizing Docker’s aliases within the network configuration, we can simulate the production URL structure locally. This means our CFScript can call https://api.internal.production/ both in the dev container and the live environment, with Docker handling the redirection to the appropriate internal service endpoint based on the environment context.
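
    Both ideas are plain compose configuration (the alias and MTU values here are assumptions for a typical corporate network):

    services:
      lucee:
        networks:
          dev_network:
            aliases:
              # Lets local CFScript call the production-style hostname
              - api.internal.production
    networks:
      dev_network:
        driver: bridge
        driver_opts:
          com.docker.network.driver.mtu: "1400"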

    Beyond simple connectivity, we must also consider the performance of these data-heavy web service calls. In a containerized environment, I often implement a caching layer within Lucee that stores the JSON payloads returned from our internal services into the local MariaDB instance or a RAM-based cache. By using CFScript’s cachePut() and cacheGet() functions, we can significantly reduce the load on our internal network and the production database proxy. This “lazy-loading” strategy allows us to develop complex features with the speed of local data access while still maintaining the accuracy of production-sourced information. This architectural decision—balancing live service integration with local persistence—represents the pinnacle of the Lucee in a Box philosophy, providing a development experience that is as fast as it is faithful to the real-world environment.
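
    The lazy-loading wrapper is a few lines of CFScript (key scheme and TTL are arbitrary choices):

    public any function getCachedPayload(required string endpoint) {
        local.key = "svc_" & hash(arguments.endpoint);
        local.cached = cacheGet(local.key);
        if (!isNull(local.cached)) {
            return local.cached;
        }
        // Miss: fetch from the internal service, then cache for one hour
        local.payload = new DataService().fetchFromInternal(arguments.endpoint);
        cachePut(local.key, local.payload, createTimespan(0, 1, 0, 0));
        return local.payload;
    }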

    Conclusion: The Future of Scalable CFML Development

    Adopting a “Lucee in a Box” strategy is more than just a trend in containerization; it is a fundamental shift toward professional-grade, reproducible engineering. By strictly defining our environment through docker-compose.yml, automating our security through SSL injection in the Dockerfile, and utilizing CFScript to bridge the gap between internal web services and local MariaDB storage, we create a stack that is resilient to “configuration drift.” This setup allows us to treat our development servers as ephemeral, disposable assets that can be rebuilt at a moment’s notice to match evolving production requirements. As the Lucee ecosystem continues to mature, the ability to orchestrate these complex data flows within a containerized boundary will remain the hallmark of a high-performing development team, ensuring that we spend less time debugging infrastructure and more time writing the logic that drives our applications forward.


    D. Bryan King


    #APIAuthentication #Automation #backendDevelopment #BridgeNetwork #cacerts #CFConfig #CFML #cfScript #CICD #CloudNative #Coldfusion #CommandBox #ConfigurationDrift #containerization #DataIntegration #DatabaseMigration #DatabaseProxy #DeepDive #deployment #devops #Docker #DockerCompose #EnterpriseDevelopment #environmentVariables #InfrastructureAsCode #InternalAPIs #ITInfrastructure #JavaKeyStore #JSON #JVM #JWT #localDevelopment #Lucee #LuceeInABox #MariaDB #microservices #Networking #OpenJDK #OrtusSolutions #Persistence #PortForwarding #Portability #ProductionData #ReproducibleEnvironments #RESTAPI #scalability #Scripting #SDLC #SecureDevelopment #softwareArchitecture #SQL #SSLCertificates #TechnicalGuide #Volumes #WebApplication #WebServer #WebServices #WorkflowOptimization
  9. New in Talos Linux:

    Out-of-memory handling can proactively identify and evict the relevant, resource-heavy application before it destabilizes the host. This reduces avoidable downtime and ensures the control plane and critical services remain operational.

    siderolabs.com/blog/talos-omni

    #TalosLinux #Kubernetes #K8s #BareMetal #PlatformEngineering #DevOps #InfrastructureAsCode #GitOps #SRE #EdgeComputing #CloudNative #BareMetalK8s

  10. Infinito.Nexus vs. YunoHost

    Similar Vision, Different Architectural Layer

    When people first hear about Infinito.Nexus, a common question is: “Isn’t this basically like YunoHost?” It’s a fair question. Both projects support digital sovereignty. Both are open source. Both aim to reduce dependency on Big Tech platforms. But they operate at fundamentally different architectural layers.

    YunoHost → https://yunohost.org
    Infinito.Nexus → https://infinito.nexus

    The Core Difference

    The most important distinction: Infinito.Nexus is not an operating system. YunoHost behaves like a server distribution — a tightly integrated system environment that packages applications into a controlled OS base. Infinito.Nexus, by contrast, is a provisioning and orchestration framework. It does not replace the operating system. It provisions and orchestrates infrastructure on top of it. This architectural choice makes Infinito.Nexus significantly more scalable and flexible. Instead of being tied to a specific system base, it operates across environments — allowing infrastructure to grow without requiring replatforming. […]

    blog.infinito.nexus/blog/2026/

  11. Post 5/x – The Rebuild: A Foundation of Reinforced Concrete
    After the collapse, one thing was clear: no repairs. A complete rebuild on an unshakeable foundation was needed.

    The architectural decision: away from the specialized (and, in this case, failed) openSUSE environment, toward the industry standard for stability: Debian 13 "Trixie".

    Debian was installed minimally on a new Proxmox VM and a clean Docker setup put in place. The most important improvement: all services are now defined in a single docker-compose.yml. Infrastructure as Code. Clean & reproducible.

    #Debian #DockerCompose #InfrastructureAsCode #BestPractice #SysAdmin

  12. I just added openSUSE Leap 16.0 to our Integration Test Target (ITT) lineup:
    👉 github.com/orgs/foundata/repos

    🔍 Looking for #Linux #Containers for your CI/CD pipeline? We’ve built a collection of OCI images:

    ✅ fully functional systemd (not just a shim!)
    ✅ unprivileged execution support, perfect for tools like #Podman.
    ✅ ideal for #Ansible #Molecule testing, see them in action with a collection: github.com/foundata/ansible-co

    #Automation #DevOps #OpenSource #InfrastructureAsCode #foundata

    CC: @SUSE

  13. I just updated the #Fedora 43 image of our Integration Test Target (ITT) lineup:
    👉 github.com/orgs/foundata/repos

    🔍 Looking for #Linux #Containers for your CI/CD pipeline? We’ve built a collection of OCI images:

    ✅ fully functional systemd (not just a shim!)
    ✅ unprivileged execution support, perfect for tools like #Podman.
    ✅ ideal for #Ansible #Molecule testing, see them in action with a collection: github.com/foundata/ansible-co

    #Automation #DevOps #OpenSource #InfrastructureAsCode #foundata #Linux

  14. NixOS – An Unusual Linux for Reproducible Systems

    Why you should know NixOS

    If, as an administrator, you are looking for an operating system that makes updates, rollbacks, and configuration noticeably more predictable than classic distributions, you can hardly get around NixOS. NixOS takes a radically different approach: your entire system, from the bootloader to services, is described declaratively in configuration files and then built reproducibly from that description. This is especially interesting when you manage many systems […]
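
    To give a flavor of the declarative approach, a minimal configuration.nix might look like this (the enabled services and packages are just examples):

    { config, pkgs, ... }:
    {
      # The entire system is described here and rebuilt reproducibly from it
      services.openssh.enable = true;
      environment.systemPackages = [ pkgs.git pkgs.htop ];
      system.stateVersion = "24.05";
    }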

    andreas-moor.de/nixos-ein-unge

  15. 📣 If you're managing domains and DNS while pursuing compliance certifications, Infrastructure as Code isn't optional, it's essential 👊.
    The DNSimple Terraform provider makes this possible with full domain lifecycle management, giving you the tools to manage #domains and #DNS with the same rigor you apply to other critical infrastructure.
    ❌ No more manual tweaks risking errors or failed reviews.

    👉 blog.dnsimple.com/2025/12/doma
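
    A sketch of a record under Terraform management (token, account, and zone values are placeholders):

    terraform {
      required_providers {
        dnsimple = { source = "dnsimple/dnsimple" }
      }
    }

    provider "dnsimple" {
      token   = var.dnsimple_token
      account = var.dnsimple_account
    }

    resource "dnsimple_zone_record" "www" {
      zone_name = "example.com"
      name      = "www"
      value     = "203.0.113.10"
      type      = "A"
      ttl       = 300
    }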

    #SOC2 #ISO27001 #Compliance #AuditReadiness #infrastructureAsCode

  16. My first writeup from #KubeCon! Delving into the Red Hat #OpenShift Commons Day 0 co-located event, where speakers from Ford Motor Company and Northrop Grumman shared compelling presentations.

    At these companies, #AIinfrastructure is now integral to the responsibilities of #platformengineers. Drawing on extensive experience in managing #IDPs at scale, they apply practices such as #GitOps, #infrastructureascode, and federated workload #identitymanagement.

    Key takeaways from the presentations here: techtarget.com/searchitoperati

    #TechTalk #Innovation #AI #InfrastructureManagement

  17. 🏠 Homelab Backup Evolution! 🏠

    Following the "3-2-1 is the minimum" rule, I've expanded my VPS container backup strategy:

    ✅ Hetzner Cloud (Borg) → Offsite long-term storage
    ✅ Synology NAS (rsync) → Local fast recovery

    The new setup does nightly automated syncs of all /opt/containers/ data to my Synology - with deduplication and all the bells and whistles! 📦

    Particularly clever: hardlinks for space-efficient snapshots and morning email reports. Now I know over my morning coffee ☕ whether all backups ran cleanly.
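
    The hardlink trick is essentially rsync's --link-dest (a sketch; paths and the NAS hostname are assumptions):

    #!/bin/sh
    # Unchanged files become hardlinks into the previous snapshot instead of full copies
    TODAY=$(date +%F)
    rsync -a --delete \
      --link-dest=../latest \
      /opt/containers/ \
      "nas:/volume1/backups/containers/$TODAY/"
    # Point "latest" at the new snapshot for the next run
    ssh nas "ln -sfn $TODAY /volume1/backups/containers/latest"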

    Lesson learned: Cloud-only is good, but having a local NAS mirror for quick restores is pure gold! 💪

    How do you solve this in your setups? Also multi-tier or everything to cloud?

    #Homelab #Backup #SelfHosting #Synology #VPS #DataSafety #321Rule #TechLife #NAS #CloudBackup #rsync #BorgBackup #InfrastructureAsCode

  18. If you're running a cloud environment where different accounts represent different environments, set them up consistently! Use IaC to help ensure environments are set up the same.
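
    A sketch of what that can look like: one security-group definition, applied unchanged to every account (names, ports, and variables are assumptions):

    resource "aws_security_group" "app" {
      name        = "app-${var.environment}"
      description = "Baseline app access, identical across accounts"
      vpc_id      = var.vpc_id

      ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = var.allowed_cidrs
      }
    }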

    I just got off a 4 hour call that could have been avoided if security groups had been set up consistently.

    #cloud #devOps #infrastructureAsCode #awsCommunity