
I was trying to figure out how deduplication works in ZFS, so I created a file to serve as my test "disk". I have created two datasets: one with dedup turned on, the other without. The dedup-on dataset was never used; I mention it only for completeness. Then I generated a random file in the non-deduping dataset:

dd if=/dev/urandom bs=1M count=1000 | base64 > a
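For reference, a setup like the one described can be reproduced roughly as follows. The pool name, dataset names, and backing-file path are my own placeholders; the question does not state the actual ones. Note that `base64` inflates its input by about 4/3, so the resulting file is roughly 1.3 GB:

```shell
# Placeholder names: testpool, /var/tmp/zfs-test.img -- not from the question.
truncate -s 8G /var/tmp/zfs-test.img          # sparse file acting as the "disk"
zpool create testpool /var/tmp/zfs-test.img
zfs create -o dedup=on testpool/dedup         # dedup dataset (unused in the test)
zfs create testpool/nodedup                   # the non-deduping dataset
dd if=/dev/urandom bs=1M count=1000 | base64 > /testpool/nodedup/a
```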

Then:

cp a b

It took some time, and the space consumed doubled. Everything as expected. And then…
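The space accounting before and after each copy can be checked with `zfs list` (dataset and file names here are placeholders, assuming a pool called `testpool`):

```shell
zfs list -o name,used,refer testpool/nodedup    # usage before the copy
time cp /testpool/nodedup/a /testpool/nodedup/b
sync                                            # flush async writes so the accounting settles
zfs list -o name,used,refer testpool/nodedup    # USED should roughly double
```

ZFS updates its space accounting asynchronously, so without the `sync` the second `zfs list` may briefly under-report the copy.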

cp a c

An instant copy, with no additional space used. It looks as if deduplication is in fact on, but it is not.
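To double-check that deduplication really is off, the dataset property and the pool-wide dedup ratio can be queried (`testpool/nodedup` is a placeholder name):

```shell
zfs get dedup testpool/nodedup           # should report "off"
zpool list -o name,dedupratio testpool   # 1.00x means no blocks were deduplicated
```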

While writing this question I ran another test, this time with just a single non-deduping dataset. After creating a file, a copy of it is made instantly, with no additional space consumed.

Why is that so?

Giacomo1968
sarmun
