#frr — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #frr, aggregated by home.social.
-
Aviation weather for Front Royal Warren County airport (USA) is “METAR KFRR 091335Z AUTO 00000KT 10SM SCT110 16/06 A2984 RMK AO2 T01570064”: see what it means on https://www.bigorre.org/aero/meteo/kfrr/en #frontroyalwarrencountyairport #airport #frontroyal #usa #kfrr #frr #metar #aviation #aviationweather #avgeek
-
https://bgp.tools/as/201379#connectivity
6 Upstreams, 105 direct peers 🙂
My AS201379 / 2a06:9801:1c::/48 has some really solid European connectivity now.
Entirely powered by FreeBSD and FRRouting; no proprietary routers, and no legacy IP (formerly known as IPv4).
-
New post: Part 4 of running my own AS.
A direct BGP session with Hetzner on FogIXP, a fourth FreeBSD edge in Zürich, and a MikroTik at home speaking iBGP into the /48 - so my home LAN now has provider-independent IPv6 and exits AS201379 like any other site.
Plus a two-condition route-map that steers DTAG-bound traffic over Vultr.
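For illustration, here is a minimal FRR sketch of that idea (an assumed config, not the actual route-map from the post, which uses two match conditions; AS3320 is DTAG, AS20473 is Vultr, and the names and addresses are placeholders): prefer routes whose AS path ends in DTAG when they are learned from Vultr.
  bgp as-path access-list DTAG seq 5 permit _3320$
  !
  route-map VULTR-IN permit 10
   match as-path DTAG
   set local-preference 200
  route-map VULTR-IN permit 20
   set local-preference 100
  !
  router bgp 201379
   ! 2001:db8::1 stands in for the Vultr peer
   neighbor 2001:db8::1 remote-as 20473
   address-family ipv6 unicast
    neighbor 2001:db8::1 route-map VULTR-IN in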
-
Successful surgery on my BGP core for AS201379 done.
Got a third edge router online, reorganized local-preferences, and optimized packet flows for efficiency and cost.
Monitoring with Grafana works perfectly fine, and the RIPE Atlas measurements look amazing for central Europe 🙂
Great connectivity for my 2a06:9801:1c::/48
Looking Glass: https://lg.hofstede.it
Peering information: https://hofstede.it/as201379.html
-
Built some Grafana dashboards for my AS201379 BGP core router (my two edge routers will follow) 🙂
Using node_exporter for FreeBSD and frr_exporter to get insights from the routing fabric.
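A quick sanity check that the exporter is answering, assuming frr_exporter's default listen port of 9342 (adjust if yours differs):
  curl -s http://localhost:9342/metrics | grep '^frr_bgp'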
Nice!
#grafana #networking #routing #ipv6 #freebsd #frr #monitoring
-
Artemis II Flight Readiness Review Certification Signing 🌑🚀🧑‍🚀🪐
#ArtemisII #EGS #ExplorationGroundSystems #FRR #FlightReadinessReview #JaredIsaacman #KSC #KennedySpaceCenter
⏩ 3 new pictures from NASA (Image Library) https://commons.wikimedia.org/wiki/Special:ListFiles?limit=3&user=OptimusPrimeBot&ilshowall=1&offset=20260314010352
-
Artemis II Flight Readiness Review 🌑🚀🧑‍🚀🪐
#Artemis #ArtemisII #EGS #ExplorationGroundSystems #FRR #FlightReadinessReview #JaredIsaacman #KSC
⏩ 17 new pictures from NASA (Image Library) https://commons.wikimedia.org/wiki/Special:ListFiles?limit=17&user=OptimusPrimeBot&ilshowall=1&offset=20260313010327
-
I just published a new guide on evolving a single BGP router into a multi-homed, two-PoP network using FreeBSD, FRR, and PF.
- Native peering on Vultr + 3 GRE transits
- Tying it together with iBGP
- Why stateful firewalls break asymmetric transit (and how to fix it; see the PF sketch after the link below)
All for ~€18/mo.
Read it here:
https://blog.hofstede.it/running-your-own-as-going-multi-homed-with-ibgp-and-three-transits/
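On the stateful-firewall point: with asymmetric transit, reply packets can enter through a different edge (and interface) than the one that created the PF state, so strict state tracking drops them. A minimal pf.conf sketch of the usual workarounds, with hypothetical interface names rather than anything from the guide:
  # skip state tracking entirely on the GRE transit interfaces...
  pass quick on { gre0 gre1 gre2 } no state
  # ...or keep states, but tolerate asymmetric and out-of-order packets
  pass on gre0 all flags any keep state (sloppy)
-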
Cyber@cteurs
Cyberaction: Fonds de réserve pour les retraites (the French pension reserve fund): no public money for fossil fuels
https://mcinformactions.net/cyberaction-fonds-de-reserve-pour-les-retraites-pas-d-argent-public-pour-les
#FRR #energiesfossiles
-
IPv6 should be treated the same as, if not better than, IPv4!
https://github.com/FRRouting/frr/issues/20067
https://github.com/FRRouting/frr/issues/20102
-
And here is the RFO (reason for outage) for our 5-10 minutes of problems today:
-
Oh look, it's an "Ubuntu 24.04 pushed out a systemd-networkd update and broke things" week again.
Did you know that #systemd-networkd *really* dislikes it when anything else tries to manage the system's route table? When it restarts, it will frequently remove a bunch of routes that were added by dynamic routing, #K8S CNI, or other similar tools.
This has caused a number of fairly public outages at various sites over the past year or two. There is a set of config flags for /etc/systemd/networkd.conf that will disable this behavior, but the last of them only landed in systemd v256, and #Ubuntu 24.04 (LTS) ships v255.
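Presumably these are the flags in question; a sketch based on the systemd-networkd docs rather than the writeup linked below, with ManageForeignNextHops being the one that only arrived in v256:
  # /etc/systemd/networkd.conf
  [Network]
  ManageForeignRoutes=no
  ManageForeignRoutingPolicyRules=no
  # only recognized from systemd v256 onwards:
  ManageForeignNextHops=no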
I have a writeup with some workarounds: https://scottstuff.net/posts/2025/02/25/frr-vs-systemd-networkd/
In my case, I tend to lose a bunch of either IPv4 or IPv6 (but rarely both) routes when `networkd` updates roll out on their own. My first indication of trouble is usually a ping-check alert firing.
My specific problem is caused by #FRR; it defaults to creating its own kernel next-hop groups for routes learned from OSPF or BGP, and `systemd-networkd` then comes along and cleans them up, which causes the kernel to drop all of the routes that were associated with them. FRR fails to re-add the routes until it's restarted, so things stay broken until they're manually cleaned up.
I've been testing a fix on ~half of my machines since July to see if it caused any *other* problems, and it seems to have passed. Adding `no zebra nexthop kernel enable` to FRR seems to avoid the nexthop problem without causing any other issues for me. This *may* break if you use policy-based routing with FRR, but it's probably safe otherwise.
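For anyone wanting to try the same workaround, the command lives in zebra's config; one way to apply and persist it, assuming vtysh is available:
  vtysh -c 'configure terminal' -c 'no zebra nexthop kernel enable' -c 'end' -c 'write memory'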
-
Announcing Full-View on an isolated test bench
We are developing a service that will help network engineers manage their uplinks. While preparing a test bench, we needed to deploy a virtual router in an isolated environment and announce several full views (complete BGP tables) to it.
-
FRRouting. Without a clickbait title
Say we have, for example, two client hosts and one Linux box. The hosts are on the same network. They need to be able to talk to each other...
-
#frr weirdness: ospf6d accumulated a ton of duplicate routes for 2 peering links and an anycast subnet around 66-68 days ago. There are 13.4k entries for the anycasted /64 and 5.6k/5.8k for the two /127s.
None of the other routes are unusual.
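A quick way to eyeball the duplication from vtysh, with the prefix as a placeholder:
  vtysh -c 'show ipv6 ospf6 route 2001:db8:100::/64' | wc -l
-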
@rachel I have been in my software-router era for a very long time: with #VyOS, then #openwrt, then back to #VyOS, and I tried to sprinkle #frr around in there too. It just took a little bit of cognitive load when updating the host or the software I'm using as my virtualization layer.. Now I've just been on this router for like 4 days.. I might be full of shit 😂😂😂
-
I am confused by FRR.
https://docs.frrouting.org/en/latest/bgp.html
"-l, --listenon - Specify specific IP addresses for bgpd to listen on[...] Note that this option implies the –no_kernel option, and no learned routes will be installed into the linux kernel."
But they are installed into the kernel table! I can see them (with "ip -6 route show table 50"). Is this because it's a bgpd option, and the routes go via zebra instead?
Config:
https://privatebin.net/?6461c22964ba20b8#BtwSKdrrRPu2tx8DQwfqyBaDU2gjdD5MqSYTUejXBTMq
and
https://privatebin.net/?34bd84b23b49a596#AVw68P7USz458c8hgk91GZ9dBmJZCADEw2Mq2gYtBw7y
And even if it did: what if I want to run multiple instances of bgpd and still have the routes installed into the kernel?
edit:
The docs seem to be wrong: only after adding "-n" are the routes actually not installed into the routing table.
-
I got tired of fussing with #FRR and trying to use it to do VRFs and routing, as it really is geared towards dynamic routing! Back to #VyOS for the #homelab. I am still using VRRP with 3 FRR routers on my #incus hosts, so as long as my upstream router is up, I can reboot hosts all day long and the internet stays up.
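For context, FRR's VRRP config is pleasantly small; a minimal sketch of one of the three routers (interface name, VRID, priority, and address are made up, and FRR's vrrpd additionally expects the macvlan devices to be prepared as described in its docs):
  interface eth0
   vrrp 10 version 3
   vrrp 10 priority 200
   vrrp 10 ip 192.168.1.1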
-
VyOS 1.5 is somehow broken for me: Babel does not work at all (not even with the example from the documentation), and with BGP you cannot set a source address for installed routes (it should be possible with route-map "set src", but it has no effect).
The rolling release seems to be enormously rolling.
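For anyone trying to reproduce this at the FRR layer underneath VyOS, the mechanism in question looks roughly like this (a sketch with placeholder names and addresses, not the poster's config): a route-map setting the source address, applied to routes zebra installs for BGP.
  route-map SRC-FIX permit 10
   set src 192.0.2.1
  !
  ip protocol bgp route-map SRC-FIX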
-
Uhm, the other day I changed my #Ceph setup so that I now use #FRR & #OSPF to utilize both links in my 3-node setup. Each node has a dedicated 10 GbE link to each of its neighbours, forming a ring (or triangle).
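For illustration, the per-node FRR config for such a ring looks roughly like this (interface names and router ID are assumed, not taken from the post):
  router ospf
   ospf router-id 10.0.0.1
  !
  interface eth1
   ip ospf area 0
   ip ospf network point-to-point
  !
  interface eth2
   ip ospf area 0
   ip ospf network point-to-point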
Then I also changed my setup and added an SSD as the WAL/DB device for my OSDs (a 2 TB SSD split into 2x 1 TB, one WAL/DB slice for each OSD).
Although I see a slight increase in recovery speed, it is still below the expected performance: only 100-150 MB/s in most cases, with 2x Seagate Exos X18 18 TB drives. Each drive can deliver up to 270 MB/s according to its technical datasheet.
So I would expect to see more like 200-400 MB/s.
MTU is 9000 bytes. "ceph -s" reports:
recovery: 182 MiB/s, 8 keys/s, 144 objects/s
Any ideas how to increase the speed?
-
So I decided to spin up #frr on my home testing setup again (I recently switched to bird2), and oh well...
-
When we implemented live log display in #FRR's CLI client, it was an amazing feature, but I didn't think much of it. Now I've seen #strongSwan's swanctl do even better (it seems to explicitly show logs related to the SA you're dealing with), and it's driving me to a conclusion:
All long-running daemons with a control interface need to do this. It's a basic UX requirement.
And this shouldn't just be a "tail -f" on the log; it should show context-relevant low-priority messages that are disabled elsewhere.
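For the FRR side of that comparison, the live log display referred to is vtysh's "terminal monitor" command; usage is simply (the prompt will vary with your hostname):
  $ vtysh
  router# terminal monitor
  (daemon log messages now stream into the session alongside your commands)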