Search
59 results for “CodeDead”
-
Refactor your code with default lambda parameters - .NET Blog
This article concludes a series on C# 12, highlighting "default lambda parameters", which simplify expressing default values in lambdas, enhancing code readability and maintainability.
#CSharp12 #CSharp #CodeReadability #Maintainability #Programming #DotNet #SoftwareDevelopment
https://devblogs.microsoft.com/dotnet/refactor-your-code-with-default-lambda-parameters/
-
#Development #Guidelines
The physics of readability · “Code that is read together should be written together.” https://ilo.im/14khrg_____
#CodeReadability #WebDev #Code #Maintainability #CodeLoyality
-
World of the Apocalypse or Peaceful Apocalypse?
Kicinski warned of a "large language model apocalypse" and removed 138,000 lines of code. So Linux 7.1 was born...
Jakub Kicinski, a developer of the Linux networking subsystem, warned in his merge request about the threat posed by the rise of large language models, a phenomenon he called the "LLM apocalypse". As a result of this request, 138,000 lines of code were removed from the kernel.
Linus Torvalds, founder and chief developer of Linux, approved the change for inclusion in version 7.1-rc1, due to be released on April 26, 2026. It is the first time in the history of Linux that bug reports generated by artificial intelligence have led to the removal of functioning code.
Kicinski argued that code the rest of the world had abandoned years ago was still being compiled into the kernel. He believed that, in order to survive the "large language model apocalypse", such code should either be handed over to a new maintainer or deleted.
The changes affected six kernel subsystems and involved not only the removal of 138,000 lines but also the introduction of 12,996 new changesets. The affected network protocols have been permanently removed from the kernel.
In the coming months, the updated kernel will reach server, mobile, and embedded devices running Linux, underlining the importance of the change for system security and stability.
---
Linux 7.0: Bash script, weekend and 23 years of fixes
On April 12, 2026, Linux 7.0 was released, introducing several significant changes to how the operating system is developed. One of the key innovations was the official introduction of the Rust programming language, which is now used in the kernel. In addition, Linux has started using artificial intelligence to analyze and fix bugs.
Linus Torvalds called these changes the new normal, emphasizing their importance for the future development of the system.
The story behind these changes began with Nicholas Carlini, who spent several months running a bash script on his laptop. This script performed a simple but unusual process: it opened the kernel source code files, passed them to the Claude Opus 4.6 artificial intelligence model, and asked it to find vulnerabilities.
Carlini did not expect significant results, but one day the model discovered a critical vulnerability in the code used to share files over the network. This code was used in a wide variety of systems, including company file servers, hospital storage devices, school servers, and cloud storage at providers such as AWS, Google Cloud, and Azure.
The vulnerability was so serious that even an intern connected to the office Wi-Fi could run a short script and gain access to the file server, where they could read sensitive data, delete important files, and install malware.
These developments highlighted the need to use artificial intelligence to analyze code and detect vulnerabilities. Linus Torvalds and the Linux developers decided to use this technology to improve the security and stability of the system.
---
Apocalypse for browsers, search engines, and Tor
In the age of artificial intelligence, the software world is on the verge of significant change. New technologies such as large language models (LLMs) make it possible to search for vulnerabilities in code automatically. This can have serious consequences for many projects, including Mozilla, Google, and Tor.
Danger of dead code
Dead code, i.e. code that is no longer used, is a serious threat. It can become a target for attacks that LLMs are now able to detect automatically. Previously, finding such vulnerabilities required considerable effort and time; with the advent of LLMs, the process has become much faster and more efficient.
Changing the economics of vulnerability research
Previously, finding vulnerabilities required expensive experts who spent weeks analyzing code. Now it is enough to run a script overnight, and an LLM can analyze thousands of lines of code and flag potential problems. This changes the economics of vulnerability research and makes it far more accessible.
Projects under threat
Many large projects, such as Mozilla and Google, have extensive archives of outdated code. This becomes a problem once LLMs start finding vulnerabilities in those codebases. Companies will be forced either to audit and remove dead code, or to explain to their users why their data may be compromised by outdated protocols.
A solution for Tor
The problem is particularly acute for the Tor project. Old relays are not updated because operators fear losing their reputation, which leaves the network vulnerable to attack. Thanks to the new approach proposed by the authors, however, Tor can update its relays quickly and safely without that reputational risk.
Rapid response mechanism
The authors propose creating a registry of allowed images, using the TPM as a flag keeper, and binding identity to the key rather than to the content. This lets operators update their relays quickly and safely as soon as they see that a new image is in the registry, so an attack can be closed in hours rather than months.
The future of Mozilla, Google, and others
Mozilla, Google, and other companies will also need to review their code. Outdated APIs, experimental features, and drivers for obsolete hardware can all be sources of vulnerabilities. Companies will be forced either to remove dead code or to explain to their users why their data may be compromised by outdated technologies.
Tor got its chance
While other companies struggle with the consequences of newly discovered vulnerabilities, Tor can use the proposed mechanism to update its relays quickly and safely. This allows the project to remain decentralized and resistant to attack. Tor was thus able not only to survive the LLM apocalypse, but to become more secure and efficient.
Conclusion
The authors showed how new technologies can be used to solve old problems. They not only gave Tor the opportunity to change, but also demonstrated how to survive the era of the LLM apocalypse. This is an important step forward in software development and security.
-
Paul Graham writes Lisp code that's mostly understandable... unless you try reading it 🤔. With names so short they resemble a cat walking on a keyboard, and a love for deeply nested "if" statements, it's a wonder anyone can decipher his cryptic masterpieces. Apparently, loops are for amateurs 🌀.
https://courses.cs.northwestern.edu/325/readings/graham/graham-notes.html #PaulGraham #LispCode #CrypticMasterpieces #ProgrammingHumor #CodeReadability #HackerNews #ngated
-
Ah yes, the infamous X Macros, the pattern that promises to revolutionize your code while simultaneously making it as readable as hieroglyphics 🤔. This blog post attempts to glorify a relic from the C programming crypt, but let's be honest: the only thing more confusing than X Macros is understanding why anyone would willingly dive into this rabbit hole in 2023 🕳️🐇.
https://danilafe.com/blog/chapel_x_macros/ #XMacros #CProgramming #CodeReadability #TechHumor #CodingTrends #HackerNews #ngated
-
https://kitfucoda.medium.com/the-versatility-of-call-a-python-developers-secret-weapon-a6bff776971a
Python's __call__ dunder offers an alternative to closures: it lets a class manage state while remaining callable like a function, opening up new ways to structure code.
In complex scenarios, __call__ enhances readability and maintainability. Encapsulating logic and data promotes cleaner code, improving clarity and debugging.
For pre-filling arguments, consider __call__ over functools.partial, especially with methods needing internal state. It creates a state-holding object for robust operations.
This is beneficial in large projects with complex logic. Using __call__ improves organization and maintainability, leading to efficient development.
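A rough sketch of the idea (the Notifier class and names below are invented for illustration, not taken from the linked article): a __call__-based object can pre-fill an argument the way functools.partial does, while also carrying state that a plain partial has nowhere to keep.

from functools import partial

class Notifier:
    """Hypothetical example: pre-fills a channel and tracks how many messages were sent."""
    def __init__(self, channel, max_sends=3):
        self.channel = channel      # pre-filled argument, like functools.partial
        self.max_sends = max_sends  # internal state a bare partial cannot hold
        self.sent = 0

    def __call__(self, message):
        # The instance is used exactly like a function.
        if self.sent >= self.max_sends:
            return False
        self.sent += 1
        print(f"[{self.channel}] {message}")
        return True

def send(channel, message):
    print(f"[{channel}] {message}")

send_to_ops = partial(send, "ops")   # pre-fills the argument, but keeps no state
send_to_ops("disk usage at 90%")

notify_ops = Notifier("ops")         # also just a callable, but it remembers its history
notify_ops("disk usage at 90%")
notify_ops("disk usage at 95%")

Both stay callable in the same way; the difference is that only the __call__ version remembers anything between calls, which is the trade-off the post is pointing at.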
#Python #Programming #SoftwareDevelopment #Coding #DunderMethods #ObjectOrientedProgramming #FunctionalProgramming #CodeReadability #SoftwareEngineering #OpenToWork #getfedihired
-
Simplicity First: The Hidden Power of Readable Code in Software Development
In the fast-paced world of coding, the importance of readable code is often overshadowed by the allure of complexity. Yet, as developers spend more time reading than writing, the true value of simplic...
#news #tech #SoftwareDevelopment #CodeReadability #CleanCode
-
Today, I finished my #bachlorthesis in #ComputerScience with the title "Improving code readability in practice: Focus on the change".
You can find the abstract and the thesis as PDF file at #Codeberg
https://codeberg.org/BaumiCoder/Bachelorthesis/releases
The repository itself contains the replication package for the experiment of the thesis. License information annotated with #REUSE
#Bachelorarbeit #Informatik #SoftwareEngineering #CodeQuality #CodeReadability
-
Embracing Functional Programming in C# - CodeProject
An exploration of the benefits of **functional programming in C#**. The article covers code readability, efficiency, and unit testing. Other highlights are the challenges of state mutation, the importance of immutability, and the concept of pure functions.
#FunctionalProgramming #CSharp #CodeReadability #Immutability #PureFunctions #ErrorHandling #Programming
https://www.codeproject.com/Articles/5376714/Embracing-Functional-Programming-in-Csharp
-
Etherscan Unveils Code Reader: An AI-Powered Tool That Aims to Bolster Smart Contract Analysis - After the Ethereum block explorer Etherscan introduced an advanced filter for bloc... - https://news.bitcoin.com/etherscan-unveils-code-reader-an-ai-powered-tool-that-aims-to-bolster-smart-contract-analysis/ #artificialintelligence #blockchainexploration #contractaddress #cryptocurrency #smartcontract #codereader #technology #etherscan #analysis #ethereum #news #tool #ai
-
@lauramh @templetongate @bookstodon I had not heard of Draft2digital; that looks like a pretty good service. Vellum does make some very pretty books.
But as an old geek I don't like the code it produces. I have only one eye I can read with, and it is none too good, so I edit nearly every ebook I read to make the code dead simple so I can expand it a LOT and still read it. Vellum and other tools make that tough. #Ebooks could be more #accessable, but many are not...
-
Manage your Linux systems like a container!
I’ve got to tell you, I have not been so excited about a technology… probably since Containers. At Summit this year Red Hat announced the General Availability of Image Mode for RHEL. So I got to spend a week in Boston, explaining, over and over again, why that’s important.
See, Image mode is kind of a big deal. It takes container workflows and applies them to your data center servers using a technology called bootc. This concept isn't exactly new; this sort of technology has been applied to edge devices, phones, and other appliances for years. But what we have now is a general-purpose Linux that you can update using a bootable container image. This changes things.
So think about a Linux system as you know it today. We’re calling that Package Mode now in order to avoid confusion. RHEL Package Mode is a Linux base, with a package manager, where you install and configure things, and then fight to keep those things from drifting pretty much from then until eternity. There’s a whole facet of the IT industry around mitigating that drift. Package and config management is a huge business! For good reason! Drift is what makes your routine 2AM maintenance into a panic attack when the database server doesn’t come back up.
So I talked a lot about Image Mode at Summit, but I have to admit, I hadn't touched it yet! So now that I'm back home, and my time is a little less consumed by prep for the RHEL 10 release and Summit deadlines, I decided to take some time and get hands-on with this revolutionary thing.
Building a pipeline
So, I use Gitlab community edition as a repository for a few container builds I maintain. Some time back I managed to get the CI/CD pipelines working for my container builds. These were nothing fancy, but they work. I commit a change to the repository, and a job kicks off to rebuild the container and push it into a registry. In some cases that's just the internal Gitlab registry, in others it's Docker Hub. I, of course, do it all with Podman. So when I decided to tackle Image Mode, I thought it would be best to just rip that band-aid right off and do it in Gitlab, and have the builds happen there. How hard could it be? I already had container builds running there!
So I made a repo, and copied my CI config from one of the container builds that just used podman and the local registry, and threw in a basic Containerfile that just sourced FROM the RHEL bootc base image, and then did a package install. Commit, sit back in my arrogance and wait for my image.
It failed. For reasons I still don't fully understand, the container build uses fuse-overlayfs to do its build, and it couldn't do so in my runner's podman-in-podman build container. I did some research, and luckily I have access to internal Red Hat knowledge, so I was able to bounce some ideas around and came up with a solution. Two things, actually: my runner needed some config changes. Here, I'll share them with you.
Here is my Runner config
[[runners]]
  name = "dind-container"
  url = "https://git.undrground.org"
  id = 3
  token = "NoTokenForYou"
  token_obtained_at = somedatestamp
  token_expires_at = someotherdatestamp
  executor = "docker"
  environment = ["FF_NETWORK_PER_BUILD=1"]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:git"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mtu = 0

The things I had to add were, first, privileged = true. This gives the container the access it needs to do its fusefs work. And the environment "FF_NETWORK_PER_BUILD=1", which I believe tweaks the podman networking such that it fixed a DNS resolution problem I was having in my builds.
With that fixed, I was able to get builds working! I have two things to share that may help you if you are trying to do the same. First, another Red Hatter built a public example repo that will apparently "just work" if you use it as a base for your Image Mode CI/CD. It didn't work for me, but I suspect that was more about my gitlab setup and less about the functionality of the example. You can find that example here. What I ended up doing was modifying my existing podman CI file. That looks like this:
---
image: registry.undrground.org/gangrif/podman-builder:latest

#services:
#  - docker:dind

before_script:
  - dnf -y install podman git subscription-manager buildah skopeo podman
  - subscription-manager register --org=${RHT_ORGID} --activationkey=${RHT_ACT_KEY}
  - subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms --enable rhel-9-for-x86_64-baseos-rpms
  - export REVISION=$(git rev-parse --short HEAD)
  - podman login --username gitlab-ci-token --password $CI_JOB_TOKEN $CI_REGISTRY
  - podman login --username $RHLOGIN --password "$RHPASS" registry.redhat.io

after_script:
  - podman logout $CI_REGISTRY
  - subscription-manager unregister

stages:
  - build

containerize:
  stage: build
  script:
    - podman build --secret id=creds,src=/run/containers/0/auth.json --build-arg GIT_HASH=$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - podman push $CI_REGISTRY_IMAGE

Now, this example contains no verification or validation, so I suggest you maybe look into the proper example linked externally. That one has a lot of testing included. Mine will improve with time. 😉
Registry Authentication for your build
Now, there’s a few things to note here. First, Notice that I am not just logging into my own registry, but registry.redhat.io. You register using your Red Hat login for the Red Hat private registry, and that’s where the bootc base images come from. I also use subscription-manager to register the build container to Red Hat’s CDN. That’s because the RHEL Image Mode build is building RHEL, and must be done using an entitled host in order to receive any updates or packages during the container build. This was something I had gotten stuck on for some time, its a little tough to wrap your head around. Once you do though, it makes sense.
Authenticating your bootc system with your registry, automatically
I am also passing the podman authentication token file into a podman secret at build time. This is important later. If your bootc images are stored in a registry that is not public, you will need to authenticate to that registry in order to pull your updated images after deployment. The easiest way to bake in that authentication is to simply take the authentication from the build host, and place it into the built image. There is some trickery that happens in your Containerfile to make this work. You can read more about this here.
Containerfile
So, I told you we build Image Mode like a container. I meant it. We literally write a Containerfile, and source it from the special bootc images that are published by Red Hat. There are a few things you'll want to think about when building a bootc Containerfile vs a standard application container, things that you wouldn't normally think about when building a normal container.
Content
First, RHEL is entitled software, and that doesn't change for RHEL Image Mode. This is pretty seamless if you are doing your build directly on an entitled RHEL system. But if you're in a UBI container like I am, you'll need to subscribe the UBI container, because the bootc build will depend on that entitlement to enable its own repositories. That is not true, however, for 3rd-party public repositories. Those just get enabled right inside of the Containerfile. This sounds confusing, but it boils down to this: RHEL repository? Entitled by the build host. Other repository? Add it via the Containerfile. I add EPEL in my example below.
Users
Something else I don’t usually see done in a standard container is the addition of users. Remember this is going to be a full RHEL host at the other end, so you might need to add users. In my case I am adding a local “breakglass” user, because I am leveraging IdM for my identities. But if something goes wrong during the provisioning, i want a user I can login to the system with to troubleshoot. You can also come in later with other tools to add users. You can enable cloud-init and add them there, or if you are using the image builder tool I’ll talk about in a bit, you can give it a config.toml file to add users at that point.
Other Considerations
Other things that you’ll need to think about might be firewall rules, container registry authentication, and even the lack of an ENTRYPOINT or CMD. Because this system is expected to boot into a full OS, it is not going to run a single dedicated workload. Instead you’ll be enabling services like you would on a standard RHEL system, with systemctl.
My Containerfile
Now that we’re through all of that, let me show you what I ended up with as a Containerfile.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Enable EPEL, install updates, and install some packages
RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
RUN dnf -y update
RUN dnf -y install ipa-hcc-client rhc rhc-worker-playbook cloud-init && dnf clean all

# This sets up automatic registration with Red Hat Insights
COPY --chmod=0644 rhc-connect.service /usr/lib/systemd/system/rhc-connect.service
COPY .rhc_connect_credentials /etc/rhc/.rhc_connect_credentials
RUN systemctl enable rhc-connect && touch /etc/rhc/.run_rhc_connect_next_boot

# This is my backdoor user, in case of IdM join failure
RUN useradd breakglass
RUN usermod -p '$6$s0m3pAssw0rDHasH' breakglass
RUN groupmems -g wheel -a breakglass

# This picks up that podman pull secret, and adds it to the build image
COPY link-podman-credentials.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf
RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json && \
    ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json

# This configures the bootc update timer to run at a time that I consider acceptable
RUN mkdir -p /etc/systemd/system/bootc-fetch-apply-updates.timer.d/
COPY weekly-timer.conf /etc/systemd/system/bootc-fetch-apply-updates.timer.d/weekly.conf

You can see from my comments what's going on in the various blocks in that Containerfile. My intention is to use this as a base RHEL system, and then make more derivative images based on this one. For instance, if I wanted a web server, I would base a new Containerfile on this image, and then add in a RUN dnf install httpd. It's important to note that you shouldn't be installing packages on these deployed systems after they are up and running. Those installations should happen in the image. If you install a package on a running Image Mode system, that change will not be carried into the next image update on your system unless you then incorporate it into your bootable container image. This means that you will need to plan ahead, but it also means that tracking package drift in the future is a thing of the past!
In my case, the above-mentioned CI automation and this Containerfile worked in my Gitlab instance with the above Runner modifications. The build job will take some time; a bootc image is much larger than the lightweight container images you are used to if you've been building application containers.
But what about turning that into a VM?
So I am covering but ONE method of getting this image deployed to an actual system. You can use a myriad of different methods, including Kickstart, writing an ISO, or PXE boot, but what I am doing (because it suits my needs) is turning my image into a qcow2 file, which is a virtual disk image for use with Libvirt. If you're familiar with Image Builder, the tool used to churn out tailored RHEL disk images, then this won't be a surprise. There's a container that you can grab that just runs Image Builder; you give it a bootable container image, and it turns it into a qcow2! I've cooked up a script that pulls my bootable container right from my registry, writes it to a qcow2, then immediately passes that to virt-install and builds a VM out of it!
In my case, it also uses cloud-init to set its hostname, auto registers, and connects to insights, and then uses a slick new tech preview feature that auto-joins my lab’s IdM domain through insights! Here is my script:
#!/bin/bash
VMNAME=$1

podman login --username my-gitlab-username -p 'gitlab-token' registry.undrground.org
podman login --username my-redhat-login -p 'redhatpassword' registry.redhat.io
podman pull registry.undrground.org/gangrif/rhel9-imagemode:latest

sudo podman run \
  --rm \
  -it \
  --privileged \
  --pull=newer \
  --security-opt label=type:unconfined_t \
  -v $(pwd)/config.toml:/config.toml \
  -v $(pwd)/output:/output \
  -v /var/lib/containers/storage:/var/lib/containers/storage \
  registry.redhat.io/rhel9/bootc-image-builder:latest \
  --type qcow2 \
  registry.undrground.org/gangrif/rhel9-imagemode:latest

cat << EOF > $VMNAME.init
#cloud-config
fqdn: $VMNAME.idm.undrground.org
EOF

mv $(pwd)/output/qcow2/disk.qcow2 /var/lib/libvirt/images/$VMNAME-disk0.qcow2

virt-install \
  --name $VMNAME \
  --memory 4096 \
  --vcpus 2 \
  --os-variant rhel9-unknown \
  --import \
  --clock offset=localtime \
  --disk=/var/lib/libvirt/images/$VMNAME-disk0.qcow2 \
  -w bridge=bridge20-lab \
  --autoconsole none \
  --cloud-init user-data=$VMNAME.init

This, of course, can be improved, but as a proof of concept it works great! I've built a few test systems and so far it's working flawlessly! Now, when I want to update my systems, I update the gitlab repository with the changes and let the CI run. Then once it completes, all I do is run this script to make a new VM! The running VMs -should- (I have not tested this yet) get the updated bootable container image from the registry on Saturday at 3AM, and reboot if new changes are applied.
Wrapping it up
This is, I think, the thing we've been promised for years, ever since the advent of the cloud, when we were told that we should stop treating our servers like pets but never really given a clear definition of how. Image Mode makes that promise a reality. I'm certain I'll be sharing more as my Image Mode journey progresses. Thanks for reading!
#bootc #cloud #image #imageMode #linux #redHat #redHatEnterpriseLinux #rhel #services
-
#advent_of_code is only 12 days this year? Eric is looking to reduce their own stress which I totally understand.
At least it's still here
-
For the past year and a half, I've been using a #kindle #scribe for my daily journal. I really prefer a digital system to a paper one. I don't like the bulk of the paper I would generate, and I'm unsure of its value anyway. I do like the ability of the scribe to generate text from my scribbles.
I have considered the #remarkable pro and the new mini one, but from what I hear, the text it generates isn't as good. I'd love to hear other people's thoughts between the two systems.
-
I've been working on my most recent game (Lambda Protocol), and finally getting the input controls and hex manipulation done enough to work on the gameplay. I worry that I'm still missing some more intuitive controls, but with all I added to make changing things easier, once I have folks testing it I can make changes.
I've been having lots of fun with this. I hope to have something playable in a few months. #indigame #ΛΠ #libgdx
-
I don't know where to ask so I'm yelling into the fediVoid: Where has the 'favorites' button gone on posts in Mastodon? I can 'boost', reply, or use the three-dot menu button to bookmark. But I cannot for the life of me find the 'favorite' button. (I can see my list of favorites still, I just can no longer add to them)
-
Got my Pi02w, SD card with #pwnagotchi on it, new Sugar coming today, and a 2.13r2b waveshare. Just missing an i2c clock to add.
Replacing my pi4 with the proper sized zero finally... now that the supply chain is (mostly) fixed.
Of course, need a 3Dp to print a case. Just wish I had a place to keep one. :(
-
I've finished my #FeatureFlag #Java project, which I ported from what I used back when I worked on #Alexa (https://github.com/wolpert/feature-flag) It works well and I'll continue to support it.
But now I'm going back to working on my star runners game in #Godot ... I need more fun.
-
So #mitnick died. His early legacy shaped my own world. Never met him, but I sure knew of him. His stories, both the true ones and the fake ones, were so intertwined in how I learned about computing in my youth. When I was learning on the PDP-11 I was told about his hacks years earlier with DEC. That shaped my impression of what was possible, the good and the bad.
Died at 59. I do feel bad that he's gone.
-
I've been working on #rust lambdas... #localstack and #cdk to boot... with #jetbrains #space ... instead of #github, and the only comfort I have... is Makefiles and shell scripts.
I love all of it. But I'm learning so many different things at once. Okay, I used #cdk when I was at amazon... but they had such nice internal libraries. I'm forgetting simple things like setting up my IAM roles correctly. *sigh*