
I am trying to make a large file system in Linux, but have run into problems with many of the common file systems.

  • JFS has a bug that does not allow expansion over 32TB.
  • XFS has a bug in its fsck that causes the machine to use all available memory and crash when checking a disk that holds a large amount of data (~20TB).
  • EXT4 is limited to 16TB due to a problem with e2fsprogs.
  • BTRFS would be nice, but it does not currently have an fsck, which I will need.

Any other ideas?

2 Answers


It may not be as fast as the others, being only a userland FUSE-based filesystem on Linux, but ZFS may fit the bill...

The name originally stood for "Zettabyte File System". The people who chose the name simply happened to like it, and a ZFS file system can store 2^58 zettabytes, where each ZB is 2^70 bytes.

ZFS is a 128-bit file system, so it can address 1.84 × 10^19 times more data than 64-bit systems such as NTFS. The limitations of ZFS are designed to be so large that they would never be encountered. Some theoretical limits in ZFS are:

  • 2^48 — Number of entries in any individual directory
  • 16 exabytes (16×10^18 bytes) — Maximum size of a single file
  • 16 exabytes — Maximum size of any attribute
  • 256 zettabytes (2^78 bytes) — Maximum size of any zpool
  • 2^56 — Number of attributes of a file (actually constrained to 2^48 for the number of files in a ZFS file system)
  • 2^64 — Number of devices in any zpool
  • 2^64 — Number of zpools in a system
  • 2^64 — Number of file systems in a zpool
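
As a quick sanity check, the figures above are internally consistent; a throwaway Python snippet (using the binary zettabyte of 2^70 bytes quoted earlier) recomputes them:

    # Recompute the size figures quoted above. "ZB" here is the binary
    # zettabyte used in this answer: 2^70 bytes.
    ZB = 2 ** 70

    # 2^58 zettabytes of 2^70 bytes each is exactly the 128-bit address space.
    assert 2 ** 58 * ZB == 2 ** 128

    # A 128-bit filesystem can address 2^64 times more than a 64-bit one.
    print(f"{2 ** 128 / 2 ** 64:.3e}")  # -> 1.845e+19, the 1.84 x 10^19 above

    # Maximum zpool size: 2^78 bytes expressed in those zettabytes.
    print(2 ** 78 // ZB)                # -> 256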

There are some who say that there aren't enough atoms in the Earth's crust to build a storage array big enough to exceed the limitations of ZFS.
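
If ZFS does fit the bill, creating a pool is fairly simple. A minimal sketch follows; the disk names, the pool name "tank" and the raidz2 layout are only placeholders, and with the FUSE port the zfs-fuse daemon must already be running:

    # Minimal sketch of building a large ZFS pool. Device names and the pool
    # name "tank" are placeholders; adjust the layout to your hardware.
    import subprocess

    devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical disks

    # Create a double-parity (raidz2) pool spanning the devices...
    subprocess.run(["zpool", "create", "tank", "raidz2", *devices], check=True)

    # ...and a filesystem inside it, mounted at /tank/data by default.
    subprocess.run(["zfs", "create", "tank/data"], check=True)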

Majenko

ZFS is a possible solution, but nowadays one could also use the filesystems that you excluded at the beginning:

  • ext4: petabyte filesystems should not be an issue with recent e2fsprogs and kernel versions (see the sketch after this list)
  • btrfs: has a working btrfsck and can also scale to large sizes.
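
For the ext4 route, a rough sketch of what creating a >16TB filesystem might look like; the device path /dev/sdX is a placeholder, and the 64bit feature needs a reasonably recent e2fsprogs and kernel:

    # Rough sketch: create an ext4 filesystem larger than 16TB by enabling
    # the 64bit feature. /dev/sdX is a placeholder for your block device.
    import subprocess

    device = "/dev/sdX"  # hypothetical device, e.g. an LVM volume or RAID array

    # -O 64bit switches ext4 to 64-bit block numbers, lifting the 16TB cap.
    subprocess.run(["mkfs.ext4", "-O", "64bit", device], check=True)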