home.social

#frr — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #frr, aggregated by home.social.

  1. Aviation weather for Front Royal Warren County airport (USA) is “METAR KFRR 091335Z AUTO 00000KT 10SM SCT110 16/06 A2984 RMK AO2 T01570064”: See what it means on bigorre.org/aero/meteo/kfrr/en #frontroyalwarrencountyairport #airport #frontroyal #usa #kfrr #frr #metar #aviation #aviationweather #avgeek

  2. bgp.tools/as/201379#connectivi

    6 Upstreams, 105 direct peers 🙂

    My AS201379 / 2a06:9801:1c::/48 has some really solid European connectivity now.

    Entirely powered by FreeBSD and FRRouting, no proprietary routers and no legacy-IP (formerly known as IPv4).

    #bgp #networking #freebsd #frr #ipv6

  3. The more I learn about #Linux #FRR, the more I am excited to try it out. I just need to find some other friends interested in experimentation.

  4. New post: Part 4 of running my own AS.

    A direct BGP session with Hetzner on FogIXP, a fourth FreeBSD edge in Zürich, and a MikroTik at home speaking iBGP into the /48 - so my home LAN now has provider-independent IPv6 and exits AS201379 like any other site.

    Plus a two-condition route-map that steers DTAG-bound traffic over Vultr.

    blog.hofstede.it/running-your-

    #BGP #FreeBSD #IPv6 #FRR #MikroTik #Networking
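
    The post doesn't include the route-map itself; a minimal FRR sketch of the idea might look like this (DTAG is AS3320 and Vultr is AS20473, but the list name, peer-group name, and local-preference value are assumptions):

    ```
    ! Match routes whose AS path contains DTAG (AS3320)
    bgp as-path access-list DTAG seq 10 permit _3320_
    !
    ! Clause 1: prefer DTAG-bound prefixes learned from Vultr
    route-map VULTR-IN permit 10
     match as-path DTAG
     set local-preference 200
    ! Clause 2: accept everything else unchanged
    route-map VULTR-IN permit 20
    !
    router bgp 201379
     address-family ipv6 unicast
      neighbor VULTR route-map VULTR-IN in
    ```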

  5. Successful surgery on my BGP core for AS201379 done.

    Got a third edge-router online, reorganized local-preferences, and optimized packet flows for efficiency and cost.

    Monitoring with Grafana works perfectly fine. RIPE Atlas measurements look amazing for central Europe 🙂

    Great connectivity for my 2a06:9801:1c::/48

    Looking Glass: lg.hofstede.it
    Peering information: hofstede.it/as201379.html

    #networking #bgp #ipv6 #freebsd #frr #ripe

  6. Built some Grafana dashboards for my AS201379 BGP core-router (my two edge-routers will follow) 🙂

    Using node_exporter for FreeBSD and frr_exporter to get insights from the routing fabric.

    Nice!

    #grafana #networking #routing #ipv6 #freebsd #frr #monitoring
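
    A minimal Prometheus scrape config for that pair of exporters might look like this (the host name is hypothetical; 9100 and 9342 are the usual default ports for node_exporter and frr_exporter):

    ```
    # prometheus.yml sketch: scrape both exporters on the core router
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['core1.example.net:9100']   # node_exporter (FreeBSD)
      - job_name: frr
        static_configs:
          - targets: ['core1.example.net:9342']   # frr_exporter
    ```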

  7. I just published a new guide on evolving a single BGP router into a multi-homed, two-PoP network using FreeBSD, FRR, and PF.

    - Native peering on Vultr + 3 GRE transits
    - Tying it together with iBGP
    - Why stateful firewalls break asymmetric transit (and how to fix it)

    All for ~€18/mo.

    Read it here:
    blog.hofstede.it/running-your-

    #FreeBSD #BGP #IPv6 #Networking #Sysadmin #FRR #Homelab
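
    The fix for stateful filtering versus asymmetric transit isn't spelled out in the post; one common PF approach is to skip state tracking on the transit interfaces, since a flow may enter on one GRE tunnel and leave on another (interface names here are assumptions):

    ```
    # pf.conf sketch: don't create state for transit traffic on the
    # GRE tunnels, so asymmetric return paths aren't dropped
    pass quick on { gre0, gre1, gre2 } no state
    ```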

  8. Oh look, it's an "Ubuntu 24.04 pushed out a systemd-networkd update and broke things" week again.

    Did you know that #systemd-networkd *really* dislikes it when anything else tries to manage the system's route table, and when it restarts it will frequently remove a bunch of routes that were added by dynamic routing, #K8S CNI, or other similar tools?

    This has caused a number of fairly public outages for sites over the past year or two. There are config flags for /etc/systemd/networkd.conf that will disable this behavior, but the last of them only landed in systemd v256, and #Ubuntu 24.04 (LTS) uses v255.

    I have a writeup with some workarounds: scottstuff.net/posts/2025/02/2

    In my case, I tend to lose a bunch of either IPv4 or IPv6 (but rarely both) routes when `networkd` updates roll out on their own. My first indication of trouble is usually a ping-check alert firing.

    My specific problem is caused by #FRR; it defaults to creating its own kernel next-hop groups for routes learned from OSPF or BGP, and `systemd-networkd` then comes along and cleans them up, which causes the kernel to drop all of the routes that were associated with them. FRR fails to re-add the routes until it's restarted, so things stay broken until they're manually cleaned up.

    I've been testing a fix on ~half of my machines since July to see if it caused any *other* problems, and it seems to have passed. Adding `no zebra nexthop kernel enable` to FRR seems to avoid the nexthop problem without causing any other issues for me. This *may* break if you use policy-based routing with FRR, but it's probably safe otherwise.
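
    For reference, the networkd.conf flags mentioned above look roughly like this (the nexthop one is the v256 addition, so it's unavailable on Ubuntu 24.04's v255):

    ```
    # /etc/systemd/networkd.conf
    [Network]
    ManageForeignRoutes=no
    ManageForeignNextHops=no   # only honored from systemd v256 on
    ```

    The FRR side of the workaround, `no zebra nexthop kernel enable`, goes into zebra's configuration.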

  9. Announcing full views on an isolated test bench

    We are building a service that helps network engineers manage their uplinks. While preparing a test bench, we needed to deploy a virtual router in an isolated environment and announce several BGP full views to it.

    habr.com/ru/articles/887732/

    #bgp #frr #linux #fullview #vmx

  10. FRRouting. No clickbait title

    Say we have, for example, two client hosts and one Linux box. The hosts are on the same network, and they need to be able to talk to each other...

    habr.com/ru/articles/884824/

    #FRR #routing #mpls #evpnvxlan

  11. #frr weirdness: ospf6d accumulated a ton of duplicate routes for 2 peering links and an anycast subnet around 66-68 days ago. There are 13.4k entries for the anycasted /64 and 5.6k/5.8k for the two /127s.
    None of the other routes are unusual.

  12. @rachel I have been in my software router era for a very long time with #VyOS then #openwrt then back to #VyOS tried to sprinkle #frr around there too. It just took a little bit of cognitive load when updating the host or the software I'm using as my virtualization layer.. Now! I just been on this router for like 4 days.. I might be full of shit 😂😂😂

  13. I am confused by FRR.

    docs.frrouting.org/en/latest/b

    "-l, --listenon - Specify specific IP addresses for bgpd to listen on [...] Note that this option implies the --no_kernel option, and no learned routes will be installed into the linux kernel."

    But they are installed into the kernel table! I can see it (with ip -6 route show table 50).

    Is this because it's a bgpd option, and the routes go via zebra instead?

    Config:
    privatebin.net/?6461c22964ba20
    and
    privatebin.net/?34bd84b23b49a5

    And even if it did: what if I want to run multiple instances of bgpd and still install the routes into the kernel?

    edit:

    The docs seem to be wrong. Adding "-n", the routes are indeed not installed in the routing table.

    #frr #frrouting
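
    To reproduce what the post describes (the daemon path and listen address are hypothetical):

    ```
    # With -l alone, learned routes still show up in the kernel table:
    /usr/lib/frr/bgpd -l 2001:db8::1
    ip -6 route show table 50

    # Only with an explicit -n do they stay out of the kernel:
    /usr/lib/frr/bgpd -l 2001:db8::1 -n
    ```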

  14. I got tired of fussing with #FRR and trying to use it for VRFs and plain routing, as it really is geared towards dynamic routing! Back to #VyOS for the #homelab. I am still using VRRP with 3 FRR routers on my #incus hosts, so as long as my upstream router is up, I can reboot hosts all day long and the internet stays up.

    #selfhosted #selfhosting

  15. Yo!! this shit kinda 🔥!! I'm using 2 lxc containers running the FRR project. I configured the #frr packages to use #VRRP on both lxc instances, and I now have redundant gateways for the #homelab. I dropped one packet as it switched over!

    #selfhosted
    #selfhosting
    #incus
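
    A minimal FRR VRRP config for such a setup might look like this (interface name, VRID, and virtual address are assumptions; the backup container runs the same config with a lower priority, and vrrpd must be enabled in FRR's daemons file):

    ```
    ! /etc/frr/frr.conf on the primary container
    interface eth0
     vrrp 10 version 3
     vrrp 10 priority 200
     vrrp 10 ip 192.0.2.1
    ```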

  16. VyOS 1.5 is somehow broken for me: Babel does not work at all (not even with the example from the documentation), and with BGP you cannot set a source address for installed routes (it should be possible with route-map set src, but it has no effect).

    The rolling release seems to be enormously rolling.

    #VyOS #Babel #FRR #BGP #Networking

  17. Uhm, the other day I changed my #Ceph setup so that I now use #FRR & #OSPF to utilize both links in my 3-node setup. Each node has a dedicated 10 GbE link to its neighbours, forming a ring (or triangle).

    Then I also changed my setup and added an SSD as the WAL/DB device for my OSDs (a 2 TB SSD split into 2x 1 TB for WAL/DB, one for each OSD).

    Although I see a slight increase in recovery speed, it is still below the expected performance: only 100-150 MB/s in most cases with 2x Seagate Exos X18 18 TB drives. Each drive can deliver up to 270 MB/s, according to the technical datasheet.

    So, I would expect to see more like 200-400 MB/s.
    MTU is 9000 bytes.

    ceph -s reports:
    recovery: 182 MiB/s, 8 keys/s, 144 objects/s

    Any ideas how to increase the speed?
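
    A per-node ospfd sketch for a ring like the one described (interface names and router-id are assumptions):

    ```
    ! Two point-to-point 10 GbE links, one toward each neighbour
    interface ix0
     ip ospf area 0
     ip ospf network point-to-point
    interface ix1
     ip ospf area 0
     ip ospf network point-to-point
    !
    router ospf
     ospf router-id 10.10.10.1
    ```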

  18. So I decided to spin up #frr on my home testing setup again (recently switched to bird2) and oh well...

  19. Hmmm, #pfSense with #Wireguard and #OSPF / #FRR seems to be a little more complex than I thought. At least when you want to join an existing network based on #Debian and #Bird OSPF and Wireguard.

  20. When we implemented live log display in #FRR's CLI client, it was an amazing feature but I didn't think much of it. Now I've seen #strongSwan's swanctl do even better (it seems to explicitly show logs related to the SA you're dealing with), and it's driving me to a conclusion:

    All long-running daemons with a control interface need to do this. It's a basic UX requirement.

    And this shouldn't just be "tail -f" on the log; it should show context-relevant low-priority messages that are disabled elsewhere.