I am running ZFS pools on an Ubuntu 24.04 LTS server. (The server does not boot from ZFS.) Earlier today I physically moved the server (a Silverstone 8-bay case with mods) from one shelf to another. Before the move there were two pools: "Main-HDDs", a 6 × 8 TB RAID-Z2 pool, and "SSDs", a 3 × 1 TB RAID-Z2 pool.

Prior to the move, this was the result of zpool list:

$> zpool list -pH -o name,size,allocated,free,checkpoint,expandsize,fragmentation,capacity,dedupratio,health,altroot
Main-HDDs       47828755808256  12714018078720  35114737729536     -       -     0%      26%     1.00x   ONLINE  -
SSDs             2989297238016     12120072192   2977177165824     -       -    10%       0%     1.00x   ONLINE  -

Along with the move, I did some zpool work on the Main-HDDs pool, and now this is what zfs list shows:

$> zfs list -pH -o name,available,filesystem_count,volsize,mountpoint,readonly,type
Main-HDDs       23134039338808  none    -       /Main-HDDs      off     filesystem
Main-HDDs/newusrlocal   23134039338808  none    -       /usr/local2     off     filesystem
Main-HDDs/usrlocalbackups       23134039338808  none    -       /usr/local/backups/     off     filesystem
Main-HDDs/www   23134039338808  none    -       /www    off     filesystem

Note that there is no mention of the SSDs pool. A list of devices (fdisk -l output, lightly massaged) shows that these /dev/sd? devices are currently available:

Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors Disk model: Samsung SSD 870
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors Disk model: Samsung SSD 850
Disk /dev/sdd: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors Disk model: Samsung SSD 870
Disk /dev/sde: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors Disk model: Samsung SSD 870
Disk /dev/sdf: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors Disk model: Samsung SSD 870
Disk /dev/sdh: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors Disk model: Samsung SSD 850
Disk /dev/sdi: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors Disk model: Samsung SSD 870
Disk /dev/sdj: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors Disk model: Samsung SSD 850
Disk /dev/sdg: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors Disk model: ST8000VN004-2M21
Disk /dev/sdk: 57.3 GiB, 61524148224 bytes, 120164352 sectors Disk model:  SanDisk 3.2Gen1
Disk /dev/sda: 119.24 GiB, 128035676160 bytes, 250069680 sectors Disk model: SAMSUNG SSD RBX

So the three 1 TB drives that made up the SSDs pool are still present in the system.

What do I need to do to regain my SSDs pool (those three 1 TB drives), and what might have caused this?
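Assuming the drives themselves are fine, a bare zpool import (no pool name) should scan the default device paths and list any pool that exists on disk but is not currently imported, which would at least show whether SSDs is merely exported rather than gone; pointing the scan at stable device names is a common fallback if the default search comes up empty:

$> # List pools that are on disk but not currently imported
$> sudo zpool import
$> # If the default scan finds nothing, search by stable device names
$> sudo zpool import -d /dev/disk/by-id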

By the way, here is what 'zpool history' shows for the past few days. Although I am sure I have "done things" with the SSDs pool, it looks as though the pool is remembered no more.

$> zpool history
2024-07-22.20:24:13 zfs create -o mountpoint=/usr/local2 Main-HDDs/newusrlocal
2024-07-22.20:32:07 zfs create Main-HDDs/newusrlocal/backups
2024-07-22.20:32:47 zfs create Main-HDDs/newusrlocal/bin
2024-07-22.20:32:47 zfs create Main-HDDs/newusrlocal/etc
2024-07-22.20:32:47 zfs create Main-HDDs/newusrlocal/games
2024-07-22.20:32:47 zfs create Main-HDDs/newusrlocal/include
2024-07-22.20:32:47 zfs create Main-HDDs/newusrlocal/lib
2024-07-22.20:32:48 zfs create Main-HDDs/newusrlocal/log
2024-07-22.20:32:48 zfs create Main-HDDs/newusrlocal/man
2024-07-22.20:32:48 zfs create Main-HDDs/newusrlocal/sbin
2024-07-22.20:32:48 zfs create Main-HDDs/newusrlocal/share
2024-07-22.20:32:49 zfs create Main-HDDs/newusrlocal/src
2024-07-24.21:00:04 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.21:15:20 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.21:17:45 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.21:30:28 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.22:23:16 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.22:26:50 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.22:38:00 zfs create -o mountpoint=/www Main-HDDs/www
2024-07-24.22:41:13 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.22:49:40 zfs create -o mountpoint=/usr/local/backups/ Main-HDDs/usrlocalbackups
2024-07-24.23:07:25 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.23:11:24 zpool import -c /etc/zfs/zpool.cache -aN
2024-07-24.23:13:01 zpool clear Main-HDDs
2024-07-24.23:13:48 zpool upgrade Main-HDDs
2024-07-24.23:40:59 zfs destroy Main-HDDs/newusrlocal/backups
2024-07-24.23:41:17 zfs destroy Main-HDDs/newusrlocal/bin
2024-07-24.23:41:32 zfs destroy Main-HDDs/newusrlocal/etc
2024-07-24.23:42:25 zfs destroy Main-HDDs/newusrlocal/games
2024-07-24.23:42:42 zfs destroy Main-HDDs/newusrlocal/include
2024-07-24.23:42:54 zfs destroy Main-HDDs/newusrlocal/lib
2024-07-24.23:43:03 zfs destroy Main-HDDs/newusrlocal/log
2024-07-24.23:43:16 zfs destroy Main-HDDs/newusrlocal/man
2024-07-24.23:43:24 zfs destroy Main-HDDs/newusrlocal/sbin
2024-07-24.23:43:46 zfs destroy Main-HDDs/newusrlocal/share
2024-07-24.23:43:58 zfs destroy Main-HDDs/newusrlocal/src
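Those repeated "zpool import -c /etc/zfs/zpool.cache -aN" entries look like the boot-time zfs-import-cache.service doing its work, so one possibility is that the cachefile was rewritten at some point while SSDs was not imported, after which boot would silently stop bringing the pool back. If so, the cachefile contents should show it; as far as I know, zdb -C with no pool name dumps the cachefile (the -U flag points it at an explicit path):

$> # Show which pool configurations the cachefile currently records;
$> # if SSDs is missing here, the boot-time import will never attempt it
$> sudo zdb -C -U /etc/zfs/zpool.cache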

EDIT: While it may not be helpful, it might be worth showing the output of zfs list from a couple of days ago. So here it is:

$> zfs list -pH -o name,available,filesystem_count,volsize,mountpoint,readonly,type
Main-HDDs       23135171480632  none             -    /Main-HDDs      off     filesystem
SSDs             1819774637872  none             -    /SSDs           off     filesystem
SSDs/hs-b44gx    1920988014176   -    107374182400    -               off     volume

And here is the same listing now:

Main-HDDs       23134039216048  none    -       /Main-HDDs      off     filesystem
Main-HDDs/newusrlocal   23134039216048  none    -       /usr/local2     off     filesystem
Main-HDDs/usrlocalbackups       23134039216048  none    -       /usr/local/backups/     off     filesystem

EDIT: I have found that the pool was easily recovered by typing

sudo zpool import SSDs

This immediately made the pool available again, in good health. But that still raises the question of why the pool disappeared in the first place.
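To reduce the chance of a repeat, it may be worth confirming that the re-imported pool is now recorded in the cachefile that the boot-time import reads. A minimal sketch, assuming the standard Ubuntu layout with zfs-import-cache.service and the default cachefile path:

$> # Record SSDs in the cachefile so the boot-time import picks it up again
$> sudo zpool set cachefile=/etc/zfs/zpool.cache SSDs
$> # Confirm the service that replays the cachefile at boot is enabled
$> systemctl is-enabled zfs-import-cache.service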
