#amazonlinux2023 — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #amazonlinux2023, aggregated by home.social.
-
Fucking #Amazon (#AWS)…
So, I'm in the midst of writing supplemental automation to take care of site-specific remediation of STIG-hardening guidance for RPM-based #linux distros. Across all of my employer's customers, we support Red Hat and Oracle Enterprise Linux 8 & 9, AlmaLinux and Rocky Linux 9, CentOS Stream 9, and Amazon Linux 2023.
One of the remediations I was working on today exists for all of the above-named distros. Generally, when I have the STIG-Viewer open (I still use the 2.x one because the 3.x one is hot garbage), I use the search capability to limit the displayed findings to just the ones I'm currently working on. I also prefer to use the longest possible match string for the rule-text. Doing so avoids displaying findings I'm not interested in (just because they each reference, say, "auditd").
At any rate, I type in the string: must encrypt the transfer of audit records
Four hits. I then copy the rule-title from the #RHEL finding and paste it in as my new filter. The view reduces to three hits. Since I'd reviewed the previous four hits and seen that there was one each for RHEL, #OEL, #AlmaLinux, and #AmazonLinux, I was confused as to why typing the longer string had reduced my number of hits. Specifically, it had removed the #AmazonLinux2023 finding.
So, I returned to the shorter filter string. I compared the RHEL and #AL2023 rule text strings.
RHEL/OEL/Alma: must encrypt the transfer of audit records offloaded onto a different system or media from the system being audited via rsyslog
AL2023: must encrypt the transfer of audit records off-loaded onto a different system or media from the system being audited via rsyslog.
And all I could think was, "are you fucking kidding me???"
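For my own automation, the practical workaround is to match both wordings at once. A minimal sketch, assuming a plain-text dump of the rule titles (stig-rules.txt is a made-up file name):

```bash
# Match both the RHEL-family wording ("offloaded") and the AL2023
# wording ("off-loaded") with one pattern; -i shrugs off any case
# differences too.
grep -Ei 'must encrypt the transfer of audit records off-?loaded' stig-rules.txt
```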
#RedHat
#Oracle
#RockyLinux
-
I run this blog on a small Amazon Lightsail instance (1 GB RAM, 2 vCPU). Cheap!
Most of the time it's fine, but it has a bad habit of sometimes dying on a large image upload. The Jetpack app reports the upload as failing (offering to try again), but my site is completely unresponsive. I can resolve this in one of two ways: either I can still SSH into the box, and restarting PHP with `sudo systemctl restart php-fpm` recovers it so I can try the upload again; or the instance isn't responsive to SSH and I have to reboot it via the Lightsail console. The CPU utilization is elevated during this period, eating into the burstable zone. I have seen generally better performance since I updated
`/etc/php-fpm.d/www.conf` to override `pm = static` and `pm.max_children = 2`, but something was obviously still wrong. I installed `atop` since I often couldn't get on the host to see what was happening right when it was stuck.
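For reference, that override is just two directives; a trimmed sketch of the relevant bit of `www.conf`:

```ini
; /etc/php-fpm.d/www.conf (excerpt of just the changed directives)
; A fixed pool of two workers instead of on-demand spawning, to cap
; php-fpm's memory footprint on a 1 GB instance.
pm = static
pm.max_children = 2
```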
Last week, while working on a new post, my instance got wedged in the same way while trying to upload a short video. From the Jetpack activity log I could see it became unresponsive at 11:42 PM Seattle time; the instance and `atop` use UTC, and Seattle is UTC-7 in July, so I'd need to look at the minutes leading up to 06:42 in the weekly atop log.
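Replaying that window means pointing `atop` at the rotated raw log; something like the following, though the log path and date here are illustrative:

```bash
# Replay the raw atop log for that day, starting just before the hang.
# -r reads a raw log file; -b/-e take hhmm begin/end times. The file
# name follows atop's usual atop_YYYYMMDD rotation convention.
atop -r /var/log/atop/atop_20240721 -b 0630 -e 0645
```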
That looks like this:

[screenshot of the weekly atop log around 06:42 UTC]

Well, there's my problem! `php-fpm` spikes in CPU, and then `kswapd0` pegs the CPU while the instance's local disk goes wild with reads trying to keep up. One odd thing is that while memory is low, it doesn't seem to have changed much for the two `php-fpm` children. I guess I either need to tune the swappiness on my host or try to put a better clamp on PHP processing images? Need to investigate further. I'm sure the real solution is to not host my own LAMP stack, but where's the fun in that?
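If I do chase the swappiness angle first, the change itself is tiny; a sketch, where 10 is just a first guess rather than a tested value:

```bash
# Make the kernel less eager to evict php-fpm's pages to swap
# (the usual kernel default for vm.swappiness is 60).
sudo sysctl vm.swappiness=10

# Persist the setting across reboots.
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```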
https://blog.ultranurd.net/2024/07/21/investigating-failed-wordpress-uploads/
#amazonLinux2023 #apache #aws #debugging #httpd #lightsail #linux #phpFpm #wordpress