#sanoid — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #sanoid, aggregated by home.social.
-
Off the back of setting up a new VM last week (https://hachyderm.io/@forquare/116539562128910028) I now have Headscale [1] running with Headscale-admin [2].
Most of my machines are now connected to the new VPN setup. I think I'll wait the weekend out before doing the final move from ZeroTier - I'm camping this weekend and will be less able to fix anything that goes wrong.
I love how easy lots of things are.
#Sanoid installed, configured, and run.
#Syncoid currently sending snapshots to https://zfs.rent/
Not to mention just how simple FreeBSD is to run.
[1] https://headscale.net
[2] https://github.com/GoodiesHQ/headscale-admin
-
@lakeswimmer I use #rsnapshot. Been using it for years and it works great. I'm also one of the project maintainers so am biased.
In a brand new installation where everything used #ZFS (and really, everything *should* use ZFS - if it's not properly supported in your OS the OS is faulty and you should use a different one) then these days I'd use zfs send/recv, wrapped in #sanoid / #syncoid - https://github.com/jimsalterjrs/sanoid
-
My ZFS snapshot and replication setup on Ubuntu ft. sanoid and syncoid
I have known about ZFS since 2009, when I was working for Sun Microsystems as a campus ambassador at my college. But it wasn’t until I started hearing Jim Salter (on the TechSNAP and 2.5 Admins podcasts) and Allan Jude (on the 2.5 Admins podcast) evangelize ZFS that I became interested in using it on my computers and servers. With Ubuntu shipping ZFS in the kernel for many years now, I had access to native ZFS!
Here is an overview of my setup running Ubuntu + ZFS before I explain and document some of the details.
- cube – A headless server running Ubuntu 24.04 LTS (at the time of writing) with ZFS on root and a lot of ZFS storage powered by mirror vdevs. Has sanoid for automatic snapshots.
- Desktops and laptops in my home run (K)Ubuntu (24.04 or later; versions vary) with encrypted (ZFS native encryption) ZFS on root and ZFSBootMenu. These computers also use sanoid for automatic snapshots.
Sanoid configuration
On my personal computers, I use a minimal sanoid configuration that looks like

    #############
    # datasets  #
    #############

    [zroot]
        use_template = production
        recursive = zfs

    ##############
    # templates  #
    ##############

    [template_production]
        frequently = 0
        hourly = 26
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes

    [template_ignore]
        autoprune = no
        autosnap = no
        monitor = no

On servers, the sanoid configuration has some additional tweaks, like the following template to not snapshot replicated datasets.

    [template_backup]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        # don't take new snapshots - snapshots
        # on backup datasets are replicated in
        # from source, not generated locally
        autosnap = no

Pre-apt snapshots
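A template like this only takes effect once it is attached to a dataset. As a hypothetical sketch (the dataset name is an assumed example matching the replication target used later, not a stanza from the original post), the server-side sanoid.conf could apply it like so:

```ini
# Hypothetical server-side sanoid.conf stanza; the dataset name
# dpool/local-hostname is an assumed example for illustration.
[dpool/local-hostname]
        use_template = backup
        recursive = yes
```

Per the comment in the template, the intent is that snapshots on these datasets arrive via replication, and sanoid only prunes aged-out ones locally rather than creating new ones.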
While sanoid provides periodic ZFS snapshots, I also wanted to wrap apt transactions in ZFS snapshots for the ability to roll back any bad updates/upgrades. For this, I used the following shell script,

    #!/usr/bin/env bash
    DATE="$(/bin/date +%F-%T)"
    zfs snapshot -r zroot@snap_pre_apt_"$DATE"

with the following apt hook in /etc/apt/apt.conf.d/90zfs-pre-apt-snapshot.

    // Takes a snapshot of the system before package changes.
    DPkg::Pre-Invoke {"[ -x /usr/local/sbin/zfs-pre-apt-snapshot ] && /usr/local/sbin/zfs-pre-apt-snapshot || true";};

This handles taking snapshots before apt transactions but doesn’t prune the snapshots at all. For that, I used the zfs-prune-snapshots script (from https://github.com/bahamas10/zfs-prune-snapshots) in a wrapper shell script run from cron (schedule varies per computer) that looks like

    #!/bin/sh
    /usr/local/sbin/zfs-prune-snapshots \
        -p 'snap_pre_apt_' \
        1w 2>&1 | logger \
        -t cleanup-zfs-pre-apt-snapshots

Snapshot replication
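As a sketch of how these two pieces line up (this block is an illustration, not part of the original setup): the hook names snapshots with `date +%F-%T`, and `zfs-prune-snapshots -p 'snap_pre_apt_' 1w` removes matching snapshots older than a week. The age test below mimics that cutoff with plain GNU date arithmetic, without touching any pools:

```shell
#!/usr/bin/env bash
# Sketch of the naming scheme the apt hook produces and the '1w' age
# cutoff applied to the 'snap_pre_apt_' prefix. Illustration only.
DATE="$(date +%F-%T)"                    # e.g. 2025-06-01-12:34:56
SNAP="zroot@snap_pre_apt_${DATE}"

# Recover the timestamp from the snapshot name and compare it
# against a one-week cutoff (GNU date).
stamp="${SNAP#zroot@snap_pre_apt_}"
snap_epoch="$(date -d "${stamp%-*} ${stamp##*-}" +%s)"
cutoff="$(date -d '1 week ago' +%s)"
if [ "$snap_epoch" -lt "$cutoff" ]; then
    echo "older than 1w: would be pruned"
else
    echo "newer than 1w: would be kept"
fi
```

Note that `:` is one of the few punctuation characters ZFS permits in snapshot names, which is why the `%T` time component is safe to embed directly.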
The cube server has sufficient disk space to provide a replication target for all my other personal computers using ZFS. It has a pool named dpool, which will be referenced in the details to follow.

For automating snapshot replication, I chose to use syncoid from the same sanoid package. To avoid giving privileged access to the sending and the receiving user accounts, my setup closely follows the path in https://klarasystems.com/articles/improving-replication-security-with-openzfs-delegation/.

On my personal computer, I granted my unprivileged (but has sudo 🤷‍♂️) local user account the hold and send permissions on the root dataset, zroot.

    $ sudo zfs allow send-user hold,send zroot
    $ zfs allow zroot
    ---- Permissions on zroot --------------------------------------------
    Local+Descendent permissions:
        user send-user hold,send

On the cube server, I created an unprivileged user (no sudo permissions here 😌) and granted it the create, mount, receive permissions temporarily on the parent of the target dataset, dpool.

Then I performed an initial full replication of a local snapshot by running the following commands as the unprivileged user.
    zfs send \
        zroot@snapshot-name | ssh \
        remote-user@cube \
        zfs receive -u \
        dpool/local-hostname

    zfs send \
        zroot/ROOT@snapshot-name | ssh \
        remote-user@cube \
        zfs receive -u \
        dpool/local-hostname/ROOT

    zfs send \
        zroot/ROOT/os-name@snapshot-name | ssh \
        remote-user@cube \
        zfs receive -u \
        dpool/local-hostname/ROOT/os-name

    zfs send \
        zroot/home@snapshot-name | ssh \
        remote-user@cube \
        zfs receive -u \
        dpool/local-hostname/home

The -u flag in the zfs receive commands above prevents it from trying to mount the remote dataset. The target remote dataset must not exist when running this initial full replication.

As it is not a good practice to allow unprivileged users to mount filesystems, I disabled automatic mounting by running

    zfs set mountpoint=none dpool/local-hostname

as the sudo user on the target server.

Then I narrowed down the permissions of the receiving user to only its own dataset by running

    zfs unallow remote-user \
        create,mount,receive dpool
    zfs allow remote-user \
        create,mount,receive dpool/local-hostname

on the target server.
Next, I tested the snapshot replication by running syncoid manually like

    syncoid -r \
        --no-privilege-elevation \
        --no-sync-snap \
        zroot \
        remote-user@cube:dpool/local-hostname

and it replicated all the other snapshots on the local datasets (we had only replicated one snapshot previously).
The sanoid package in Debian and Ubuntu does not ship with a systemd timer for syncoid, so I created a user service and a timer that look like the following examples.

    # ~/.config/systemd/user/syncoid.service
    [Unit]
    Description=Replicate sanoid snapshots

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/syncoid -r --no-privilege-elevation --no-sync-snap zroot remote-user@cube:dpool/local-hostname

    # ~/.config/systemd/user/syncoid.timer
    [Unit]
    Description=Run Syncoid to replicate ZFS snapshots to cube

    [Timer]
    OnCalendar=*:0/15
    Persistent=true

    [Install]
    WantedBy=timers.target

Then I reloaded systemd, and enabled and started the above timer to have everything working smoothly.
-
Would I be crazy to enable deduplication on the ZFS dataset for a Postgres database? It seems like it would massively reduce the size of snapshots, which I replicate remotely using send / receive (they’re encrypted too).
The database in question is <20GB, and the server has 128GB RAM (plenty free). ZFS is on the (Proxmox) host, Postgres is in an LXC. Snapshots with Sanoid.
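One way to frame the memory question is the usual rule of thumb that the dedup table (DDT) costs on the order of 320 bytes of RAM per unique block. A back-of-envelope sketch (the 320-byte figure is a commonly quoted approximation, and both recordsize values are assumptions, not measurements from this system):

```shell
#!/bin/sh
# Rough DDT sizing for a ~20 GB dataset. The ~320 bytes per unique
# block is a rule-of-thumb figure, not a measured value.
DATASET_BYTES=$((20 * 1024 * 1024 * 1024))
DDT_ENTRY=320

# Default 128K recordsize vs. a small recordsize often used for databases.
for RECORDSIZE in $((128 * 1024)) $((8 * 1024)); do
    BLOCKS=$((DATASET_BYTES / RECORDSIZE))
    DDT_MIB=$((BLOCKS * DDT_ENTRY / 1024 / 1024))
    echo "recordsize $((RECORDSIZE / 1024))K: ~$BLOCKS blocks, ~$DDT_MIB MiB DDT"
done
```

Even at an 8K recordsize the estimate lands well under a gigabyte, trivial next to 128 GB of RAM, so the real question is less memory than whether the database's blocks actually dedup; `zdb -S <pool>` can simulate the achievable dedup ratio before enabling it.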
#zfs #postgres #postgresql #deduplication #homelab #selfhosted #database #proxmox #lxc #sanoid