
I would like to move an active pool's data to a new pool, retiring the old pool and making the new pool live in its place, without any downtime. I imagine it going something like:

  1. Create new pool
  2. Temporarily mirror new pool with old pool while live
  3. Remove old pool from mirror leaving new pool behind

Is there a standard workflow for this?

System OS: Linux, CentOS

"Old Pool": 5x 1 TB drives (stripe, zero redundancy, 5 TB available)

"New Pool": 4x 2 TB drives (raid 5, redundancy, 6 TB available)

For clarity:

"Old Pool" contains data.

"Old Pool" is running live in a fileserver.

"New Pool" is not live. Yet.

Objective 1: Replace "Old Pool" with "New Pool".

Objective 1a: Make new pool live.

Objective 1b: "New Pool" contains data originally on "Old Pool".

Objective 2: Retire "Old Pool".

Requirement: zero downtime.

J Collins

1 Answer


What you want is not possible as described, because top-level vdevs cannot be removed from a pool that contains a RAIDZ vdev. It becomes possible if you give up on RAIDZ and use mirrors instead, which is arguably the better choice anyway.

From the ZFS man page:

zpool remove [-np] pool device...

Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs. When the primary pool storage includes a top-level raidz vdev only hot spare, cache, and log devices can be removed.

So if you really do want RAIDZ, you cannot achieve zero downtime this way. You need to create a new pool, use zfs send and zfs receive to copy the data over, and then switch the pools.
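The send/receive workflow might look something like the sketch below. Pool names ("oldpool", "newpool") and snapshot names are placeholders; the final cutover still requires a brief pause for writers, which is why this path cannot be truly zero-downtime:

```shell
# Bulk replication while the old pool stays live (placeholder names):
zfs snapshot -r oldpool@migrate1
zfs send -R oldpool@migrate1 | zfs receive -F newpool

# Later, a much smaller incremental pass to catch up on recent writes:
zfs snapshot -r oldpool@migrate2
zfs send -R -i oldpool@migrate1 oldpool@migrate2 | zfs receive -F newpool

# Cutover: stop writers, take and send one final incremental snapshot,
# then swap mountpoints (or export both pools and re-import the new one
# under the old pool's name).
```

Repeating the incremental step shrinks each successive delta, so the final freeze can be kept to seconds or minutes rather than the hours the initial copy takes.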

If you don’t use RAIDZ, you can just add the new devices, mirrored or not, to the existing pool and then remove the old devices, resulting in no downtime.

Note, however, that you can only remove one device/vdev at a time. Removing devices may be very slow if the pool is in use, and pool performance may suffer badly while a removal is running.
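That in-place migration could be sketched roughly as follows; the pool name "tank" and the /dev/sdX device names are placeholders for your own layout:

```shell
# Add the new 2 TB drives as mirrored top-level vdevs. Mixing mirrors
# with the existing non-redundant vdevs mismatches replication levels,
# so zpool may insist on -f:
zpool add -f tank mirror /dev/sdf /dev/sdg
zpool add -f tank mirror /dev/sdh /dev/sdi

# Evacuate the old 1 TB drives one top-level vdev at a time; the data
# is copied onto the remaining vdevs in the background:
zpool remove tank /dev/sda
zpool status tank        # wait until the removal has completed
zpool remove tank /dev/sdb
# ...repeat for each remaining old drive...
```

The pool stays mounted and serving files throughout, which is what gives you the zero-downtime property, at the cost of degraded performance during each evacuation.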

user219095