home.social

#9pfs — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #9pfs, aggregated by home.social.

  1. Aaand I’m back on #9pfs. The #virtiofs mount turned out to be unstable very quickly. All I/O on the mount started hanging after just 2 days, requiring a hard reset of the VM. I guess I’ll try that again in 12 months 🤷‍♂️

  2. I've switched from a #9pfs to a #virtiofs mount for a VM I run under my #proxmox host at home, and ran a few #fio benchmarks.

    9pfs (before):

    seq read (1M, 1 thread) - 1950 MiB/s
    rand read (4K, 4 threads) - 213 MiB/s
    seq write (1M, 1 thread) - 468 MiB/s
    rand write (4K, 4 threads) - 80 MiB/s

    virtiofs (after):

    seq read (1M, 1 thread) - 5258 MiB/s
    rand read (4K, 4 threads) - 291 MiB/s
    seq write (1M, 1 thread) - 1074 MiB/s
    rand write (4K, 4 threads) - 84 MiB/s

    2.7x sequential read, 2.3x sequential write, 1.3x random read, and 1.05x random write.

    Pretty good improvements!

    Especially since this is my media (movies/TV) mount, where the I/O is mostly sequential.

    Note the underlying hardware is a Samsung QVO SATA SSD, which is a rather slow SSD and likely the bottleneck in the random read/write tests. I'd expect much bigger differences on a fast NVMe drive.
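
    Numbers like the ones above could plausibly be produced with fio invocations along these lines. This is only a sketch: the post doesn't give the actual job options, so the target path, file size, runtime, and I/O engine here are all assumptions.

    ```shell
    # Sequential read: 1 MiB blocks, single thread (path/size/runtime are assumptions)
    fio --name=seqread --filename=/mnt/media/fio.test --size=4G \
        --rw=read --bs=1M --numjobs=1 --direct=1 --ioengine=libaio \
        --runtime=30 --time_based --group_reporting

    # Random read: 4 KiB blocks, four threads
    fio --name=randread --filename=/mnt/media/fio.test --size=4G \
        --rw=randread --bs=4K --numjobs=4 --direct=1 --ioengine=libaio \
        --runtime=30 --time_based --group_reporting

    # The write tests follow the same pattern with --rw=write / --rw=randwrite
    ```

    `--direct=1` bypasses the guest page cache so the benchmark exercises the 9p/virtiofs transport rather than cached data; `--group_reporting` aggregates the four random-I/O threads into one result line.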

  3. The July 16th, 2024 Jail/Zones Production User Call is up:

    youtu.be/1F836MFrIfY

    We did a #9pfs deep dive, discussed example #CVEs, got a #Jailer update and hacked on it, discussed #VxLAN over #WireGuard and #IPsec, and more!

    "Don't forget to slam those Like and Subscribe buttons."

    #FreeBSD #illumos

  4. qemu.org/2022/12/14/qemu-7-2-0

    #9pfs: Massive general performance improvement, somewhere between a factor of 6 and 12.

    Badly needed; I tried 9p once and switched to NFS because of the really low performance. I'll give it a try after updating to 7.2.x

    #linux #qemu
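
    For anyone retrying 9p after updating QEMU, the 9p transfer size (`msize`) is commonly tuned alongside such upgrades, since it caps per-request throughput. A sketch of a typical setup; the share tag, paths, and the `msize` value are assumptions, not from the post:

    ```shell
    # Host side: export a directory over virtio-9p ("hostshare" tag is an assumption;
    # "..." stands in for the rest of your usual VM options)
    qemu-system-x86_64 ... \
      -virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr

    # Guest side: mount over the virtio transport with a larger msize
    mount -t 9p -o trans=virtio,version=9p2000.L,msize=512000 hostshare /mnt
    ```

    Whether a larger `msize` helps depends on the workload; sequential I/O with big blocks benefits the most.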