Procedure for expanding a ZFS pool vdev on a KVM raw disk image where the guest disk layout is a single GPT partition (type BF00) spanning the entire virtual disk.
This procedure assumes the guest sees the disk as /dev/sda, the partition is /dev/sda1, and the pool uses that partition as its sole vdev. If your VM has additional partitions (EFI System, BIOS boot, swap), adapt the partition number and exercise additional caution at the delete/recreate step. The example below adds 500 GiB; adjust the qemu-img size argument as needed.
Phase 0: Checks (Guest, before shutdown)
0.1 — Confirm the vdev device path
Verify that ZFS knows the vdev as /dev/sda1. If the pool was created using a /dev/disk/by-id/ symlink, you will need to use that path in the expansion step later.
sudo zpool status zfs-pool
Look for the NAME column under the pool's vdev tree and record the exact device path shown.
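If you want the vdev name in a script rather than by eye, it can be pulled out of the `zpool status` output. A minimal sketch, using illustrative sample output for a pool whose vdev is sda1 (on a live system you would pipe the real command instead of the sample text):

```shell
# Illustrative `zpool status` output; substitute the real command's output.
status='  pool: zfs-pool
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zfs-pool    ONLINE       0     0     0
	  sda1      ONLINE       0     0     0

errors: No known data errors'
# The first ONLINE row with error counters that is not the pool itself
# is the vdev device path.
vdev=$(printf '%s\n' "$status" | awk -v pool=zfs-pool \
    '$2 == "ONLINE" && NF >= 5 && $1 != pool {print $1; exit}')
echo "$vdev"
```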
0.2 — Record the current partition table
Print and save the partition layout. You will need the start sector of partition 1 during the resize; this record is also your recovery reference if anything goes wrong.
sudo gdisk -l /dev/sda | tee /root/sda-partition-table-before.txt
0.3 — Take a ZFS snapshot
Create a recursive snapshot across all datasets in the pool. This is your last line of defence before partition surgery.
sudo zfs snapshot -r zfs-pool@pre-resize
Optional but recommended: send the snapshot to a remote host for safekeeping.
sudo zfs send -R zfs-pool@pre-resize | ssh backup-host zfs recv backup-pool/csirt
0.4 — Record current pool capacity
Note the current size so you can confirm the expansion afterwards.
zfs list
zpool list zfs-pool
Phase 1: Host-Side Operations
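Before touching the image, it is worth confirming the host filesystem holding it can absorb the growth (for a raw image the space is typically allocated lazily, but you still want headroom). A minimal sketch, assuming GNU df; the /3tb-pool path and 500 GiB figure match this guide's example:

```shell
# check_space DIR NEEDED_GIB: succeed if DIR's filesystem has at least
# NEEDED_GIB gibibytes free (requires GNU df for --output).
check_space() {
    dir=$1; needed_gib=$2
    avail_gib=$(df --output=avail -BG "$dir" | tail -n 1 | tr -dc '0-9')
    if [ "${avail_gib:-0}" -ge "$needed_gib" ]; then
        echo "ok: ${avail_gib} GiB free in ${dir}"
    else
        echo "insufficient: ${avail_gib} GiB free, need ${needed_gib}" >&2
        return 1
    fi
}
# On the host from this guide:
# check_space /3tb-pool 500
```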
1.1 — Shut down the VM
virsh shutdown CSIRT
Wait for the VM to power off cleanly. Confirm with virsh list --all before proceeding.
1.2 — Resize the raw disk image
This operation extends the file; it does not touch any data already written.
sudo qemu-img resize /3tb-pool/libvirt/csirt.img +500G
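Note that qemu-img's size suffixes are binary: +500G grows the image by 500 GiB (2^30-byte units), not 500 × 10^9 bytes. Quick arithmetic for the expected byte delta, which you can check against the `qemu-img info` output:

```shell
# qemu-img's G suffix means GiB, so +500G adds this many bytes:
delta=$((500 * 1024 * 1024 * 1024))
echo "$delta"   # 536870912000
```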
sudo qemu-img info /3tb-pool/libvirt/csirt.img
1.3 — Start the VM
virsh start CSIRT
Phase 2: Guest-Side Operations
2.1 — Export the ZFS pool
Disconnect the pool from the kernel so that no process holds /dev/sda1 open. This is essential for safe partition table modification. Ensure your working directory is not on the ZFS pool before exporting.
cd /
sudo zpool export -f zfs-pool
2.2 — Repair the GPT backup header
Open the disk with gdisk. It will detect that the backup GPT header is no longer at the end of the disk and offer to fix it. Relocate the backup data structures before touching any partitions.
sudo gdisk /dev/sda
x # enter expert mode
e # relocate backup data structures to end of disk
m # return to main menu
2.3 — Resize partition 1 (still inside gdisk)
Print the current table and confirm the start sector of partition 1. This should match what you recorded in step 0.2 (typically 2048). Then delete and recreate the partition at the same start sector, extending to the end of the disk.
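For context on the "end of disk" default: gdisk proposes the last usable sector, which on a 512-byte-sector disk with the standard 128-entry GPT sits 34 sectors short of the physical end (the backup header occupies the final sector and the backup partition table the 32 sectors before it, with sectors numbered from 0). A sketch of the arithmetic with a hypothetical disk size:

```shell
# Hypothetical 3.5 TiB disk, 512-byte sectors, standard 128-entry GPT.
total_sectors=$((3500 * 1024 * 1024 * 2))   # 3.5 TiB / 512 bytes per sector
# Backup header (1 sector) + backup table (32 sectors) occupy the tail,
# and sectors are zero-indexed, so:
last_usable=$((total_sectors - 34))
echo "$last_usable"
```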
p # print table; confirm Start Sector of partition 1
d # delete partition 1
n # new partition
# Partition number: 1
# First sector: 2048 <-- type the EXACT start sector from above
# Last sector: [Enter] (accept default, end of disk)
# Hex code: BF00
w # write table and exit; confirm with Y
2.4 — Notify the kernel and re-import the pool
Force the kernel to re-read the partition table, then import the pool.
sudo partprobe /dev/sda
If partprobe reports an error or the partition does not appear correctly in /proc/partitions, use the fallback:
sudo blockdev --rereadpt /dev/sda
Then import:
sudo zpool import zfs-pool
2.5 — Expand the vdev to fill the new partition
Tell ZFS to use the full extent of the resized partition. Use the exact vdev path you recorded in step 0.1 (a /dev/disk/by-id/ symlink if that is how the pool was created).
sudo zpool online -e zfs-pool sda1
Phase 3: Post-Expansion Verification
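The before/after comparison can also be done numerically. A minimal sketch with illustrative byte counts; on the real system each value would come from `zpool list -Hp -o size zfs-pool` (-p prints exact bytes), captured in step 0.4 and again now. The observed delta may differ slightly from the raw image growth because of ZFS labels and partition overhead:

```shell
# Hypothetical pool sizes in bytes; substitute real `zpool list -Hp -o size`
# readings taken before and after the expansion.
size_before=3186865864704
size_after=3723736776704
delta_gib=$(( (size_after - size_before) / (1024 * 1024 * 1024) ))
echo "pool grew by ${delta_gib} GiB"
```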
3.1 — Confirm the new capacity
Compare against the values recorded in step 0.4. The pool SIZE should reflect the additional space.
zpool list zfs-pool
zfs list
3.2 — Verify pool health
Confirm the vdev is ONLINE with zero errors.
sudo zpool status zfs-pool
3.3 — Run a scrub
Validate data integrity across the expanded vdev. This is not strictly necessary since the existing data has not moved, but it confirms that ZFS can read every block without errors after the partition table change.
sudo zpool scrub zfs-pool
Monitor progress with:
zpool status zfs-pool # look for: scan: scrub in progress
3.4 — Clean up the pre-resize snapshot
Once you are satisfied that the expansion is successful and the scrub has completed cleanly, remove the safety snapshot to reclaim space.
sudo zfs destroy -r zfs-pool@pre-resize