- Prerequisites
- Live Environment Setup
- Disk Preparation
- Redundant ESP with mdraid
- ZFS OS Pool Creation
- ZFS Filesystem Creation
- Ubuntu Installation
- ZFS Configuration
- ZFSBootMenu Setup
- Final Steps
- Appendix 1: Recovery Mode
- Appendix 2: Replacing a Faulted Drive
- Appendix 3: Unlocking Other Pools with the Same Key
Prerequisites
Hardware and network requirements to verify before starting the installation.
- UEFI boot
- x86_64 architecture
- Network access
Live Environment Setup
Boot Ubuntu Live USB, configure remote access, and prepare the installation environment.
Download Ubuntu Desktop 24.04 Live image and boot in EFI mode.
Optional: Remote Access
passwd ubuntu
sudo apt update && sudo apt install --yes openssh-server
ip addr show
# ssh ubuntu@<ip-address>
Prepare Environment
gsettings set org.gnome.desktop.media-handling automount false
sudo -i
apt update
apt install --yes debootstrap gdisk zfsutils-linux
systemctl stop zed
Generate Host ID
zgenhostid -f
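zgenhostid -f writes a host ID to /etc/hostid; ZFS records this ID in pool labels so it can detect when a pool was last used by a different host. A quick sanity check (the od line is guarded in case /etc/hostid is absent):

```shell
# Verify the host ID: the hostid command should print the same 8 hex digits
# that zgenhostid wrote to /etc/hostid (stored little-endian on x86_64).
hostid
[ -f /etc/hostid ] && od -A x -t x1 /etc/hostid || true
```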
Disk Preparation
Identify target disks and completely wipe them for clean installation.
Identify Disks by ID
ls -la /dev/disk/by-id/ | grep -v part
lsblk -o NAME,SIZE,MODEL,SERIAL
Use /dev/disk/by-id/* paths; they are stable across reboots, unlike /dev/sdX names, which can change with device enumeration order.
Set Disk Variables
export OS_DISK1="/dev/disk/by-id/nvme-Force_MP510_1919820500012769305E"
export OS_DISK2="/dev/disk/by-id/nvme-WD_BLACK_SN770_250GB_2346FX400125"
Clear Disks
WARNING: Destroys all data. Verify disk variables before proceeding.
zpool labelclear -f $OS_DISK1 2>/dev/null || true
zpool labelclear -f $OS_DISK2 2>/dev/null || true
umount /boot/efi 2>/dev/null || true
mdadm --stop /dev/md127 2>/dev/null || true
mdadm --stop /dev/md/esp 2>/dev/null || true
mdadm --zero-superblock --force ${OS_DISK1}-part1 2>/dev/null || true
mdadm --zero-superblock --force ${OS_DISK2}-part1 2>/dev/null || true
wipefs -a $OS_DISK1
wipefs -a $OS_DISK2
blkdiscard -f $OS_DISK1 2>/dev/null || true
blkdiscard -f $OS_DISK2 2>/dev/null || true
sgdisk --zap-all $OS_DISK1
sgdisk --zap-all $OS_DISK2
lsblk -o NAME,SIZE,FSTYPE,LABEL $OS_DISK1 $OS_DISK2
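The wipe commands above use the `2>/dev/null || true` idiom so that steps which do not apply on a given machine (no assembled array, no old ZFS labels) do not abort the run. A generic illustration of the pattern:

```shell
# "cmd 2>/dev/null || true" swallows both the error output and the failure
# status, so a script running under "set -e" continues past optional cleanup.
set -e
false 2>/dev/null || true   # would normally stop the script; here it is ignored
echo "still running"
```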
Redundant ESP with mdraid
Create mirrored EFI System Partition using mdraid for boot redundancy.
Create EFI System Partitions
OS_DISKS="$OS_DISK1 $OS_DISK2"
for disk in $OS_DISKS; do
sgdisk -n "1:1m:+512m" -t "1:ef00" "$disk"
done
Create mdraid Array for ESP
mdadm --create --verbose --level 1 --metadata 1.0 --homehost any --raid-devices 2 /dev/md/esp \
${OS_DISK1}-part1 ${OS_DISK2}-part1
mdadm --assemble --scan
mdadm --detail --scan >> /etc/mdadm.conf
Metadata 1.0 writes RAID metadata to the end of the partition rather than the beginning. This is critical for ESP mirroring because UEFI firmware reads from the partition start expecting a valid FAT filesystem. With metadata at the end, each individual partition appears as a valid standalone EFI partition to the firmware, enabling the system to boot from either disk if one fails. Newer metadata formats (1.1, 1.2) write to the beginning and would prevent firmware from recognizing the partitions as bootable.
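The superblock placement can be illustrated with a little arithmetic (offsets per the md(4) man page; the 512 MiB size matches the ESP created above):

```shell
# Superblock placement by mdadm metadata version, for a 512 MiB ESP member.
# v1.0 stores it near the END (~8 KiB before), leaving the FAT boot sector
# at offset 0 where UEFI firmware expects it; v1.1/v1.2 store it at or near
# the START, clobbering what firmware would read first.
part_bytes=$((512 * 1024 * 1024))
v1_0_offset=$((part_bytes - 8 * 1024))  # ~8 KiB from the end
v1_1_offset=0                           # at the very start
v1_2_offset=$((4 * 1024))               # 4 KiB from the start
echo "v1.0: $v1_0_offset  v1.1: $v1_1_offset  v1.2: $v1_2_offset"
```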
Create ZFS Partitions
for disk in $OS_DISKS; do
sgdisk -n "2:0:-8m" -t "2:bf00" "$disk"
done
partprobe
ZFS OS Pool Creation
Create encrypted ZFS mirror pool with optimal settings for SSDs.
Pool uses ashift=12 (4K sectors) for modern SSDs.
Prepare Encryption Key
echo 'password' > /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
Replace password with your desired encryption password. This password will be required every time you boot -
ZFSBootMenu will prompt you to enter it to unlock the encrypted pool before the system can start. Choose a strong
password and remember it, as losing it means permanent data loss.
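If you prefer a generated passphrase, something like the following works (a sketch; record the result in a password manager before writing it to the key file):

```shell
# Generate a 24-character random passphrase; base64 keeps it typeable at the
# ZFSBootMenu prompt, tr strips shell-unfriendly characters.
passphrase=$(head -c 32 /dev/urandom | base64 | tr -d '/+=' | cut -c1-24)
echo "length: ${#passphrase}"
# then: printf '%s' "$passphrase" > /etc/zfs/zroot.key
```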
Create Pool
zpool create -f \
-m none \
-O acltype=posixacl \
-o ashift=12 \
-O atime=off \
-o autotrim=on \
-o cachefile=none \
-O canmount=off \
-o compatibility=openzfs-2.2-linux \
-O compression=zstd \
-O dnodesize=auto \
-O encryption=aes-256-gcm \
-O keyformat=passphrase \
-O keylocation=file:///etc/zfs/zroot.key \
-O normalization=formD \
-O recordsize=16K \
-O relatime=off \
-O xattr=sa \
zroot mirror ${OS_DISK1}-part2 ${OS_DISK2}-part2
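ashift is the base-2 logarithm of the sector size the pool will use for all I/O alignment, and it cannot be changed after creation. The relationship, plus a post-create check:

```shell
# ashift=12 -> 2^12 = 4096-byte sectors, matching 4K-page NVMe/SSD media.
ashift=12
echo "sector size: $((1 << ashift)) bytes"
# after creation, confirm on the live pool with:
#   zpool get ashift zroot
```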
ZFS Filesystem Creation
Create ZFS datasets for root, system directories, and user data with optimized properties.
Root Filesystem
zfs create -o canmount=off -o mountpoint=none zroot/ROOT
zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/ubuntu
Keystore
zfs create -o mountpoint=/etc/zfs/keys zroot/keystore
System Directories
zfs create -o mountpoint=/var zroot/ROOT/ubuntu/var
zfs create -o mountpoint=/var/cache -o recordsize=128K -o sync=disabled \
zroot/ROOT/ubuntu/var/cache
zfs create -o mountpoint=/var/lib -o recordsize=8K zroot/ROOT/ubuntu/var/lib
zfs create -o mountpoint=/var/log -o recordsize=128K -o logbias=throughput \
zroot/ROOT/ubuntu/var/log
zfs create -o mountpoint=/tmp -o recordsize=32K -o compression=lz4 -o devices=off -o exec=off \
-o setuid=off -o sync=disabled zroot/ROOT/ubuntu/tmp
zfs create -o mountpoint=/var/tmp -o recordsize=32K -o compression=lz4 -o devices=off -o exec=off \
-o setuid=off -o sync=disabled zroot/ROOT/ubuntu/var/tmp
User Data
zfs create -o mountpoint=/home -o recordsize=128K zroot/USERDATA
zfs create -o mountpoint=/root zroot/USERDATA/root
zfs create zroot/USERDATA/user
Finalize and Mount
zpool set bootfs=zroot/ROOT/ubuntu zroot
zfs set keylocation=file:///etc/zfs/keys/zroot.key zroot
zfs set org.zfsbootmenu:keysource=zroot/keystore zroot
zpool export zroot
zpool import -N -R /mnt zroot
zfs load-key -L prompt zroot
zfs mount zroot/ROOT/ubuntu
zfs mount zroot/keystore
zfs mount -a
udevadm trigger
ZFS Dataset Properties Overview
| Dataset | canmount | mountpoint | recordsize | compression | Additional Properties |
|---|---|---|---|---|---|
| zroot/ROOT | off | none | (inherited) | (inherited) | - |
| zroot/ROOT/ubuntu | noauto | / | (inherited) | (inherited) | - |
| zroot/keystore | (inherited) | /etc/zfs/keys | (inherited) | (inherited) | readonly=on |
| zroot/ROOT/ubuntu/var | (inherited) | /var | (inherited) | (inherited) | - |
| zroot/ROOT/ubuntu/var/cache | (inherited) | /var/cache | 128K | (inherited) | sync=disabled |
| zroot/ROOT/ubuntu/var/lib | (inherited) | /var/lib | 8K | (inherited) | - |
| zroot/ROOT/ubuntu/var/log | (inherited) | /var/log | 128K | (inherited) | logbias=throughput |
| zroot/ROOT/ubuntu/tmp | (inherited) | /tmp | 32K | lz4 | devices=off, exec=off, setuid=off, sync=disabled |
| zroot/ROOT/ubuntu/var/tmp | (inherited) | /var/tmp | 32K | lz4 | devices=off, exec=off, setuid=off, sync=disabled |
| zroot/USERDATA | (inherited) | /home | 128K | (inherited) | - |
| zroot/USERDATA/root | (inherited) | /root | (inherited) | (inherited) | - |
| zroot/USERDATA/user | (inherited) | /home/user | (inherited) | (inherited) | - |
Ubuntu Installation
Install Ubuntu base system, configure hostname, users, and network settings.
Install Base System
debootstrap noble /mnt
Copy Configuration Files
cp /etc/hostid /mnt/etc/hostid
cp /etc/mdadm.conf /mnt/etc/
cp /etc/resolv.conf /mnt/etc/resolv.conf
cp /etc/zfs/zroot.key /mnt/etc/zfs/keys/zroot.key
Chroot into New System
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
chroot /mnt /bin/bash
Configure System
echo 'hostname' > /etc/hostname
echo -e '127.0.1.1\thostname' >> /etc/hosts
passwd
cat <<EOF > /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu/ noble main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ noble-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ noble-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ noble-backports main restricted universe multiverse
EOF
apt update
apt upgrade --yes
apt install --yes --no-install-recommends \
console-setup \
keyboard-configuration \
linux-generic-hwe-24.04 \
locales
dpkg-reconfigure locales tzdata keyboard-configuration console-setup
Create User Account
adduser user
cp -a /etc/skel/. /home/user
chown -R user:user /home/user
usermod -aG adm,plugdev,sudo user
Configure Network
cat > /etc/netplan/01-enp4s0.yaml << 'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp4s0:
      dhcp4: true
      dhcp6: true
      accept-ra: true
      ipv6-privacy: false
EOF
chmod 600 /etc/netplan/01-enp4s0.yaml
netplan apply
ZFS Configuration
Install ZFS packages, enable services, and configure encryption keystore.
Install ZFS Packages
apt install --yes dosfstools mdadm zfs-initramfs zfsutils-linux
Enable Services
systemctl enable zfs.target
systemctl enable zfs-mount
systemctl enable zfs-import.target
Secure Keystore
zfs set readonly=on zroot/keystore
Configure Keystore Auto-Mount
To unlock other encrypted ZFS pools using the same key from zroot/keystore, configure systemd to ensure the keystore
is mounted before key-loading services run.
Set Keystore to Manual Mount
zfs set canmount=noauto zroot/keystore
This prevents ZFS from auto-mounting during pool import, avoiding a systemd race condition.
Create Keystore Mount Service
Create /etc/systemd/system/zfs-mount-keystore.service:
cat > /etc/systemd/system/zfs-mount-keystore.service << 'EOF'
[Unit]
Description=Mount ZFS keystore dataset zroot/keystore at /etc/zfs/keys
DefaultDependencies=no
Before=local-fs.target
Requires=zfs-import.target
After=zfs-import.target
[Service]
Type=oneshot
ExecStart=/usr/sbin/zfs mount zroot/keystore
RemainAfterExit=yes
[Install]
WantedBy=local-fs.target
EOF
Enable the service:
systemctl daemon-reload
systemctl enable zfs-mount-keystore.service
Configure initramfs
echo "UMASK=0077" > /etc/initramfs-tools/conf.d/umask.conf
update-initramfs -c -k all
UMASK=0077 keeps the generated initramfs images readable by root only. This matters because zfs-initramfs embeds the encryption key file in the initramfs so the pool can be unlocked at boot.
zfs set org.zfsbootmenu:commandline="quiet" zroot/ROOT
ZFSBootMenu Setup
Install and configure ZFSBootMenu bootloader with UEFI boot entries.
Format and Mount ESP
mkfs.vfat -F32 -nBOOT /dev/md/esp
mkdir -p /boot/efi
mount -t vfat /dev/md/esp /boot/efi/
Add ESP to fstab
cat << EOF >> /etc/fstab
UUID=$( blkid -s UUID -o value /dev/md/esp ) /boot/efi vfat defaults 0 0
EOF
Install ZFSBootMenu
apt install --yes curl
mkdir -p /boot/efi/EFI/ZBM
curl -o /boot/efi/EFI/ZBM/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi
cp /boot/efi/EFI/ZBM/VMLINUZ.EFI /boot/efi/EFI/ZBM/VMLINUZ-BACKUP.EFI
Create UEFI Boot Entries
The entries are created in reverse order: efibootmgr -c places each new entry at the front of the boot order, so "ZBM 1" ends up as the default.
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
apt install --yes efibootmgr
efibootmgr -c -d "$OS_DISK2" -p 1 -L "ZBM 2 (Backup)" -l '\EFI\ZBM\VMLINUZ-BACKUP.EFI'
efibootmgr -c -d "$OS_DISK2" -p 1 -L "ZBM 2" -l '\EFI\ZBM\VMLINUZ.EFI'
efibootmgr -c -d "$OS_DISK1" -p 1 -L "ZBM 1 (Backup)" -l '\EFI\ZBM\VMLINUZ-BACKUP.EFI'
efibootmgr -c -d "$OS_DISK1" -p 1 -L "ZBM 1" -l '\EFI\ZBM\VMLINUZ.EFI'
efibootmgr -v
Final Steps
Unmount filesystems, reboot, and verify the installation.
Exit Chroot and Cleanup
exit
umount -n -R /mnt
zpool export zroot
Reboot
reboot
Post-Installation Verification
zpool status
zfs list
cat /proc/mdstat
ip addr show
ping -c3 google.com
systemctl status zfs-mount
Pin GRUB and Create Snapshots
tee /etc/apt/preferences.d/no-grub << 'EOF'
Package: grub* grub2*
Pin: release *
Pin-Priority: -1
EOF
GRUB is pinned with negative priority to prevent accidental installation. This system uses ZFSBootMenu as the bootloader, and installing GRUB would conflict with it by overwriting EFI boot entries and potentially breaking the boot process. Many Ubuntu packages and kernel updates attempt to install GRUB as a dependency, so this pin ensures the system remains GRUB-free.
zfs snapshot zroot/ROOT/ubuntu@fresh-install
zfs snapshot zroot/ROOT/ubuntu/var@fresh-install
zfs snapshot zroot/ROOT/ubuntu/var/lib@fresh-install
zfs list -t snapshot
Appendix 1: Recovery Mode
Boot from Live USB and access installed system for maintenance or recovery.
Recovery Steps
sudo -i
apt update
apt install --yes zfsutils-linux
cat /proc/mdstat
mdadm --run /dev/md127
zpool export -a
zpool import -f -N -R /mnt zroot
zfs load-key -L prompt zroot
zfs mount zroot/ROOT/ubuntu
zfs mount zroot/keystore
zfs mount -a
mkdir -p /mnt/boot/efi
mount -t vfat /dev/md/esp /mnt/boot/efi/
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
chroot /mnt /bin/bash
Exit Recovery
exit
umount -n -R /mnt
zpool export zroot
reboot
Appendix 2: Replacing a Faulted Drive
Replace a failed drive in ZFS mirror and mdraid ESP array.
Check Status
zpool status zroot
cat /proc/mdstat
mdadm --run /dev/md127
mdadm --detail /dev/md/esp
Example faulted output:
pool: zroot
state: DEGRADED
config:
NAME STATE
zroot DEGRADED
mirror-0 DEGRADED
nvme-Force_MP510_1919820500012769305E-part2 ONLINE
nvme-WD_BLACK_SN770_250GB_2346FX400125-part2 FAULTED
Replace Drive
shutdown -h now
Replace drive, boot, then:
ls -la /dev/disk/by-id/ | grep -v part
lsblk -o NAME,SIZE,MODEL,SERIAL
export NEW_DISK="/dev/disk/by-id/nvme-NEW_DRIVE_SERIAL_HERE"
export WORKING_DISK="/dev/disk/by-id/nvme-Force_MP510_1919820500012769305E"
Partition New Drive
sgdisk --replicate=$NEW_DISK $WORKING_DISK
sgdisk --randomize-guids $NEW_DISK
lsblk $NEW_DISK
Replace in mdraid
mdadm --run /dev/md127
mdadm --manage /dev/md127 --add ${NEW_DISK}-part1
watch cat /proc/mdstat
Replace in ZFS Pool
export OLD_DISK="/dev/disk/by-id/nvme-WD_BLACK_SN770_250GB_2346FX400125"
zpool replace zroot ${OLD_DISK}-part2 ${NEW_DISK}-part2
watch zpool status zroot
Create Boot Entries
mount -t efivarfs efivarfs /sys/firmware/efi/efivars 2>/dev/null || true
efibootmgr -c -d "$NEW_DISK" -p 1 -L "ZBM NEW (Backup)" -l '\EFI\ZBM\VMLINUZ-BACKUP.EFI'
efibootmgr -c -d "$NEW_DISK" -p 1 -L "ZBM NEW" -l '\EFI\ZBM\VMLINUZ.EFI'
efibootmgr -v
Verify
zpool status zroot
cat /proc/mdstat
mdadm --detail /dev/md/esp
zfs list
smartctl -a $NEW_DISK
efibootmgr -v
Expected output:
pool: zroot
state: ONLINE
config:
NAME STATE
zroot ONLINE
mirror-0 ONLINE
nvme-Force_MP510_1919820500012769305E-part2 ONLINE
nvme-NEW_DRIVE_SERIAL_HERE-part2 ONLINE
Appendix 3: Unlocking Other Pools with the Same Key
If you have additional encrypted ZFS pools (e.g., tank-data) that use the same key stored in zroot/keystore,
configure their key-load services to wait for the keystore mount.
Note: This configuration assumes you are using zfs-zed with the zfs-mount-generator for systemd integration,
which is the default setup on Ubuntu 24.04.
Configure Key-Load Dependencies
For each additional encrypted pool, create a systemd drop-in to add dependencies:
systemctl edit zfs-load-key@tank-data.service
Add these lines:
[Unit]
Requires=zfs-mount-keystore.service
After=zfs-mount-keystore.service
Save and reload:
systemctl daemon-reload
Boot Sequence
This ensures the correct boot order:
- The pool is imported (ZFSBootMenu has already decrypted zroot)
- zfs-mount-keystore.service mounts zroot/keystore at /etc/zfs/keys
- zfs-load-key@tank-data.service starts only after the keystore is available
- /etc/zfs/keys/zroot.key therefore exists when the key-load service runs
This prevents race conditions where the key-load service attempts to access keys before the keystore dataset is mounted.