
I'm getting started with ZFS and I got the basics down, but I'm having an issue with keeping it running.

Pools are created, mounts are created, I'm able to save data and see disk activity… things are looking good. However, after rebooting, zpool list reports "no pools available".

I have found several articles about this on CentOS which talk about a startup script for zfs being the fix, but thus far I haven't found one on Debian.

I'm using the Debian backports zfs-dkms package, whose dependencies include libzfs2linux, libzpool2linux, zfs-zed, and zfsutils-linux. I am also running SysV init instead of systemd.

I've tried recreating the pool with /dev/disk/by-id paths as well as standard /dev/sdX devices, and I've tried editing /etc/default/zfs and setting (not all at once):

    ZFS_MOUNT='yes'
    ZPOOL_IMPORT_ALL_VISIBLE='yes'
    ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
    ZPOOL_IMPORT_PATH="/dev"
    ZFS_INITRD_POST_MODPROBE_SLEEP='5'

I see my pool configuration in /etc/zfs/zpool.cache, and I can re-import everything manually with zpool import <pool-name>; all the data is there.
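For reference, the manual recovery I run after each boot looks like this (storagepool stands in for my actual pool name):

    zpool import              # scans devices and lists pools available for import
    zpool import storagepool  # imports the pool by name
    zpool list                # the pool shows up again until the next reboot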

Is this a timing issue on boot? I'm running short on ideas, and any input would be appreciated.

Peter B

4 Answers


I came across a mailing-list posting yesterday evening that resolves the mystery here. It turns out zfs_autoimport_disable now defaults to 1 (true) at compile time! So it doesn't matter what is configured in /etc/default/zfs; the pools will never be imported when the zfs module loads.

So the fix for me was to add a config file under /etc/modprobe.d/ (name it whatever you like) containing options zfs zfs_autoimport_disable=0. Now the pools are imported at boot, and the ZFS file systems can be mounted either by zfs or in legacy mode.
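A minimal sketch of that file (the name zfs.conf is my choice; any *.conf file under /etc/modprobe.d/ works):

    # /etc/modprobe.d/zfs.conf
    # Re-enable automatic pool import when the zfs module loads
    options zfs zfs_autoimport_disable=0

If the zfs module is loaded from the initramfs, you may also need to run update-initramfs -u afterwards so the new option is copied in there.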

I don't understand this decision, but now everything is working as expected...

Peter B

First back up /etc/default/zfs, then remove it.

I would recommend going with the legacy approach. This disables ZFS's automount feature and relies on /etc/fstab for mount information. I also recommend always using /dev/disk/by-id, as it is the least painful setup.
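If you created the pool with /dev/sdX names, you can switch it to by-id paths without recreating it; a sketch, using the storagepool name from the example below:

    zpool export storagepool
    zpool import -d /dev/disk/by-id storagepool   # re-import, scanning only stable by-id paths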

Since I don't know your setup (pool/datasets), I'll make up an example:

  1. First list your datasets with zfs list and pick the one you want, e.g. storagepool/backup.
  2. Unmount storagepool/backup if it is mounted. (You can check via mount | grep zfs or zfs mount.)
  3. List your mountpoints with zfs get mountpoint:

         NAME                    PROPERTY    VALUE                   SOURCE
         storagepool/backup      mountpoint  /storagepool/backup     default

  4. Change the mountpoint: zfs set mountpoint=legacy storagepool/backup
  5. Edit /etc/fstab as root (or via sudo) and add the second line below; the first line is only there to explain the fields:

         <device alias dataset>   <mountpoint>   <filesystem type>  <options> <dump> <fsckorder>
         storagepool/backup       /mnt/backup     zfs               defaults   0      0

Detailed explanation:

  • The first field (storagepool/backup) is usually the physical device or remote filesystem to be mounted; in your case it is the pool/dataset. (NOTE: there is no leading slash ('/') for a ZFS dataset! This caused me many troubles.)
  • The second field (/mnt/backup) specifies the mount point where the filesystem will be mounted.
  • The third field (zfs) is the type of filesystem on the device from the first field.
  • The fourth field (defaults) is a list of options which mount should use when mounting the filesystem.
  • The fifth field (0) decides whether a filesystem should be backed up; if zero, dump will ignore that filesystem.
  • The sixth field (0) is used by fsck (the filesystem check utility) to determine the order in which filesystems should be checked.

The defaults option means: rw, suid, dev, exec, auto, nouser, and async. If you are using an SSD, which is unlikely here, I recommend adding noatime after the defaults option.
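For example, the fstab line from above with noatime added:

    storagepool/backup       /mnt/backup     zfs               defaults,noatime   0      0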

Now, when you reboot, the ZFS filesystem should be mounted according to /etc/fstab.

Important note: to mount a legacy ZFS dataset manually, you have to use mount -t zfs <dataset> <mountpoint> instead of zfs mount.
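Putting the steps together, a minimal command sequence for the made-up storagepool/backup example:

    zfs unmount storagepool/backup                  # step 2: unmount if mounted
    zfs get mountpoint storagepool/backup           # step 3: check the current mountpoint
    zfs set mountpoint=legacy storagepool/backup    # step 4: switch to legacy mode
    mkdir -p /mnt/backup                            # create the mount point used in fstab
    mount -t zfs storagepool/backup /mnt/backup     # test the entry before rebooting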

tukan

@Peter-B Your solution solved my problem with the zpool failing to import on reboot. Many thanks. A small clarification for those unfamiliar with modprobe conf syntax: the option needs to be on a single line, i.e.

    options zfs zfs_autoimport_disable=0

A. Bee

I had the same issue, but additionally I used encryption with keys stored at the dataset's keylocation. zfs mount -l -a resolved my issue. For a more detailed description, see https://github.com/openzfs/zfs/issues/8750.
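For context, a sketch of what the -l flag does, using a hypothetical encrypted dataset:

    zfs get keylocation storagepool/secure   # where the key is read from, e.g. file:///root/pool.key
    zfs load-key -a                          # load keys for all datasets from their keylocation
    zfs mount -a                             # then mount them
    # or combine both steps:
    zfs mount -l -a                          # load keys and mount in one go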