I'm planning to build my first NAS box, and I'm currently considering FreeNAS and ZFS for it. I have read up on ZFS, and its feature set sounds interesting, although I will probably only use a fraction of it.

Most guides say that the recommended rule of thumb is 1 GB of (ECC) RAM for every TB of disk space in your pool. So my question is: what is the actual (expected) impact of ignoring this rule?

Here is a setup of someone who built a 71 TiB NAS with ZFS and 16 GB of RAM. According to him, it runs like a charm. He uses Linux, however (if that makes a difference).

So apparently you don't actually need 96 or even 64 GB of RAM to run such a large pool. But the rule must be there for a reason. So what happens if you don't have the recommended amount? Is it just a bit slower, or do you run the risk of losing data, or of accessing it only at a snail's pace?
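From what I understand, most of that RAM would simply go to the ARC (ZFS's read cache), which grows or shrinks with available memory. Here is a minimal sketch of how I would inspect and cap it on FreeBSD/FreeNAS (the sysctl names are my assumption from the FreeBSD documentation, and 12 GiB is just an example value):

    # Show how much memory the ARC currently uses and its upper bound
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.c_max

    # Cap the ARC at 12 GiB (value in bytes) so other services keep some RAM;
    # this line goes into /boot/loader.conf
    vfs.zfs.arc_max="12884901888"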


I realize that this also has a lot to do with which features will be used, so here are the parameters I'm considering:

  • It's a home system
  • 16 GB ECC RAM (the maximum supported by the setup I have in mind)
  • No deduplication, no separate ZIL device (SLOG), no L2ARC
  • Probably with compression enabled
  • Will store mostly media files of various sizes
  • Will probably run BitTorrent or similar services (frequent small reads/writes)
  • 4 disks, probably 5 TB each
  • The actual pool layout will probably be part of another question, but I'm thinking no RAIDZ (although I'd be interested to know whether it makes a difference in this context); probably two pools with two disks each (for 10 TB net storage), one acting as backup of the other. A rough command sketch follows this list.
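To make that concrete, here is a rough sketch of the pool creation I have in mind (pool names and device names are placeholders, and lz4 is just my assumption for the compression setting; none of this is a finished build):

    # Primary pool: two 5 TB disks striped for ~10 TB of space, no RAIDZ
    zpool create tank /dev/ada0 /dev/ada1

    # Backup pool: the other two disks, same layout
    zpool create backup /dev/ada2 /dev/ada3

    # Enable lightweight compression on both pools
    zfs set compression=lz4 tank
    zfs set compression=lz4 backup

    # Deduplication stays at its default of "off"
    zfs get dedup tank backup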

2 Answers

The only reason you would need that ratio of RAM to storage space is if you decide to use data deduplication. Nothing says the 1 GB per 1 TB ratio is a hard requirement otherwise.

According to a wiki:

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can either lower performance or result in complete memory starvation. Solid-state drives (SSDs) can be used to cache deduplication tables, thereby speeding up deduplication performance.

Source
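To see where numbers in that range come from, a back-of-the-envelope calculation helps (the ~320 bytes per dedup-table entry and the 64 KiB average block size are common ballpark figures, not taken from the quoted wiki):

    1 TB of unique data / 64 KiB average block size ≈ 15.3 million blocks
    15.3 million DDT entries × ~320 bytes each      ≈ 4.9 GB of RAM per TB

With the default 128 KiB recordsize the figure drops to roughly 2.4 GB per TB, which is why the recommendations span such a wide range. Without deduplication there is no DDT to hold in memory, and the ARC simply caches whatever fits.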

DrZoo

I ran a 30 TB FreeNAS box with 16 GB of DDR3 ECC RAM from 2014 to 2017. In 2018 I increased that to 32 GB, maxing out the motherboard, but 8-15 GB of it are regularly eaten up by virtual machines. No problem at all, never, nada.

As a matter of fact, I have also set sync=always on some datasets, with a pair of Samsung 970 Plus NVMe drives as mirrored log devices and around 60 GB of L2ARC cache (yes, I partitioned the same devices and used them for both log and cache, mainly for caching the virtual machines). Average write speed is 70 MB/s with sync and 200 MB/s locally. Read speed is very good, generally more than 90 MB/s. I have never had problems. Unless you have a 2.5+ Gbit network, it's very unlikely you will ever see performance issues.
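In case it helps, here is a minimal sketch of the zpool commands for that log/cache split (the pool name, the dataset name, and the Linux-style partition paths are placeholders; partition the NVMe drives beforehand with your tool of choice):

    # Mirrored SLOG on the first partition of each NVMe drive
    zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1

    # L2ARC on the second partitions (cache vdevs cannot be mirrored)
    zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2

    # Force synchronous writes on a dataset, as described above
    zfs set sync=always tank/vms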