
If I format a 1TB disk with the default parameters, what will be the maximum number of files I can store in the root directory of each filesystem?

Edit: I did search for this information, but none of the results answered this particular question. They covered the differences between the filesystems, the maximum number of files that can be stored on them overall (including files in subdirectories), maximum partition size, and so on.

1 Answer


FAT12 and FAT16 are the only filesystems here with a separate root-directory limit. In most other filesystems, the root directory is stored the same way as subdirectories and shares the same kind of limit – only minor adjustments apply if the root directory also stores "metadata" files.

  • FAT12 and FAT16: Varies (apparently 64–512 is common). From Wikipedia, "The number of root directory entries available for FAT12 and FAT16 is determined when the volume is formatted, and is stored in a 16-bit field. For a given number RDE and sector size SS, the number RDS of root directory sectors is RDS = ceil((RDE×32)/SS), and RDE is normally chosen to fill these sectors, i.e., RDE×32 = RDS×SS. FAT12 and FAT16 media typically use 512 root directory entries on non-floppy media. Some third-party tools, like mkdosfs, allow the user to set this parameter."

    Note that FAT uses a variable number of entries to store long (non-8.3) filenames – so the actual maximum number of items depends heavily on how your files are named. Each non-8.3-compliant name requires at least one LFN entry in addition to the regular DOS-compatible 8.3 entry, and each LFN entry holds 13 characters. So a name like "Hello world!.txt" (16 characters) would cost you 3 slots total.
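
Both rules above are simple arithmetic, so here is a small Python sketch (the function names are mine, not from any FAT tool):

```python
import math

def root_dir_sectors(rde, sector_size=512):
    """Root-directory sectors for RDE 32-byte entries (FAT12/16 formula
    quoted above): RDS = ceil((RDE * 32) / SS)."""
    return math.ceil((rde * 32) / sector_size)

def slots_needed(name):
    """Directory slots a non-8.3 filename consumes: one regular 8.3 entry
    plus one LFN entry per 13 characters of the long name."""
    return 1 + math.ceil(len(name) / 13)

print(root_dir_sectors(512))             # 512 entries -> 32 sectors
print(slots_needed("Hello world!.txt"))  # 16 chars -> 2 LFN entries + 1 = 3 slots
```

With 512 root entries and only LFN names like the one above, you would fit roughly 512 / 3 ≈ 170 files in the root directory – far below the nominal entry count.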

  • FAT32: Same as a regular directory. According to Wikipedia, "Microsoft's FAT32 implementation imposes an artificial limit of 65,535 entries per directory." Again, each non-8.3 name has to be stored as multiple entries.

  • exFAT: Wikipedia says "Support for up to 2,796,202 files per directory", citing the Microsoft patent. As with FAT32, there doesn't seem to be anything special about the root directory in exFAT – so it has the same limits as subdirectories do. However, exFAT has better support for long file names.
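
As a hedged sanity check, the odd-looking 2,796,202 figure is consistent with exFAT's 256 MiB maximum directory size divided by the minimum of three 32-byte entries each file needs (File, Stream Extension, and File Name entries) – my derivation, not a quote from the spec:

```python
# Assumption: 2,796,202 = 256 MiB directory cap / (3 entries x 32 bytes each).
max_dir_bytes = 256 * 1024 * 1024   # exFAT maximum directory size
min_bytes_per_file = 3 * 32         # File + Stream Extension + File Name entries
print(max_dir_bytes // min_bytes_per_file)  # 2796202
```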

  • NTFS: Another SU thread says: "There is no fixed limit. The maximum number of files is one upper limit. This limit is either 2^32−1 (according to many driver implementations) or 2^48−1 (according to the MFT_REF structure). As you will have LARGE directories, you will see non-resident $BITMAP and $INDEX_ALLOCATION streams, a large INDEX stream. The index stream is essentially a B+ tree of file names."

    Note that the NTFS root directory always contains around 15 hidden files used to store the filesystem's metadata.

  • Ext2: From Wikipedia, "The theoretical limit on the number of files in a directory is 1.3×10^20, although this is not relevant for practical situations. […] Directory indexing is not available in ext2, so there are performance issues for directories with a large number of files (>10,000)."

    Additionally, Ext2 (and Ext3, but not Ext4) has a limit on immediate subdirectories within a single directory due to a maximum of 32000 links per inode.
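
The subdirectory cap follows from how hard links are counted – each directory inode starts with two links ("." and its entry in the parent), and every subdirectory's ".." adds one more:

```python
# Assumption: the well-known ext2/ext3 EXT2_LINK_MAX constant of 32000.
EXT2_LINK_MAX = 32000
# Two links are always taken by "." and the directory's entry in its parent,
# so each remaining link can be a subdirectory's ".." back-reference.
print(EXT2_LINK_MAX - 2)  # 31998 immediate subdirectories at most
```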

  • Ext4: From Wikipedia, "To allow for larger directories and continued performance, ext4 in Linux 2.6.23 and later turns on HTree indices (a specialized version of a B-tree) by default, which allows directories up to approximately 10–12 million entries to be stored in the 2-level HTree index and 2 GB directory size limit for 4 KiB block size, depending on the filename length. In Linux 4.12 and later the largedir feature enabled a 3-level HTree and directory sizes over 2 GB, allowing approximately 6 billion entries in a single directory."

  • Ext3: Honestly I'm not sure where Ext3 ends and Ext4 begins – it may simply depend on which features are enabled. Ext3 supports HTree directory indexing (like Ext4), but not the 'largedir' feature.
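
If you want to see which of these features a given ext volume actually has, something like the following should work (sketch only; /dev/sdX1 is a placeholder, and tune2fs changes need care on mounted filesystems):

```shell
# Show the feature flags of an ext2/3/4 volume; look for dir_index and largedir.
dumpe2fs -h /dev/sdX1 | grep -i 'features'

# Enable HTree directory indexing on an older volume (requires e2fsprogs).
tune2fs -O dir_index /dev/sdX1
```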

Since FAT and Ext2 use linear file lists, you'll probably hit performance issues before you hit actual entry limits. Other filesystems use more complex data structures to achieve faster file lookup.
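
Purely as an illustration of that asymptotic difference (Python objects standing in for on-disk structures):

```python
# A linear directory (FAT, ext2) must scan every entry to find a name;
# an indexed one (NTFS B+ tree, ext4 HTree) does a tree/hash lookup.
names = [f"file{i:07}.txt" for i in range(100_000)]
linear = names                       # O(n) membership test, like a FAT scan
indexed = {n: None for n in names}   # O(1)-ish lookup, like an index

print("file0099999.txt" in linear)   # True, after scanning the whole list
print("file0099999.txt" in indexed)  # True, via a single hash lookup
```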

grawity