I'm guessing there's no hard limit, but I know that performance degrades when you put too many files into a single Windows folder. Does anybody have any good rules of thumb for when it becomes noticeably slow to open a file?
10 Answers
Assuming NTFS, the technical limit is around 4 billion files per volume, and until you go past tens of thousands of files per directory you really shouldn't worry too much.
Note, however, that programs like Explorer start to suffer much sooner than tens of thousands, because they try to access every file in the directory to get metadata, etc.
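If you want to see that difference for yourself, here is a minimal sketch (Python; the folder and file names are just placeholders) comparing a single lookup against the kind of full metadata sweep Explorer effectively does:

    import os, time

    folder = r"C:\test\many_files"                   # placeholder: a folder with many files
    known = os.path.join(folder, "file_000001.dat")  # placeholder: one file you know exists

    # Opening/statting one file by name is a single index lookup.
    t0 = time.perf_counter()
    os.stat(known)
    print(f"single stat: {time.perf_counter() - t0:.6f} s")

    # What Explorer effectively does: enumerate everything and read metadata per file.
    t0 = time.perf_counter()
    count = sum(1 for entry in os.scandir(folder) if entry.stat())
    print(f"metadata sweep of {count} entries: {time.perf_counter() - t0:.3f} s")

The single stat stays fast no matter how large the folder is; the sweep scales with the number of entries.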
- 2,535
Opening a file won't be slow regardless of how many files you have in a folder. What will kill you is enumerating the files in that folder, i.e. looking at its contents with Explorer, Far, dir, Get-ChildItem, or whatever.
That being said, I have around 2.5k files and folders in my temp folder and display is instantaneous, so that's apparently still a small number.
ETA: OK, just tried it: a folder with 10,000 files takes around one second to open in Far, and neither 10,000 nor 20,000 files make any noticeable difference in Explorer.
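For anyone who wants to repeat the experiment, a rough sketch (Python; the folder path is made up) that creates the test files and times a plain listing:

    import os, time

    folder = r"C:\test\enum_test"   # made-up test location
    n = 20_000                      # try 10_000 / 20_000 as above

    os.makedirs(folder, exist_ok=True)
    for i in range(n):
        # Empty files are enough; enumeration cost depends on the entry count.
        open(os.path.join(folder, f"f{i:06d}.tmp"), "w").close()

    t0 = time.perf_counter()
    names = os.listdir(folder)
    print(f"listed {len(names)} names in {time.perf_counter() - t0:.3f} s")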
- 41,098
On decent consumer-grade hardware, 150K files per folder is the number I have come across, on Windows 10 build 18362.356 with its native Explorer, on a WD Blue 4 TB hard disk drive formatted NTFS (partitioned 2 TB + 2 TB), with all files at a fixed size of 24 KB and filetype .7z. 150K is the number of files Explorer can display and still let me select them and do some operation; any more in the same folder and Windows Explorer starts crawling.
It's highly likely this number depends on the file explorer, filesystem, OS, drive speed, drive type (SSD/HDD/RAID HDD, etc.) and the supporting hardware itself, such as the storage controllers, the CPU, and the health of the SATA cables (or PATA or M.2 socket). For example, a drive hanging off the southbridge controller would probably be slower than one on the CPU/northbridge controllers, so connecting the drive to the SB should give slower performance overall. Also, 7-Zip's file manager is much faster than Windows File Explorer at selecting huge numbers of files, in the range of hundreds of thousands. I am not certain whether file size affects the read time, but my other folders with thousands of images each take a long time to display; whether that is because of the .jpg/.png filetypes, their larger sizes, or Explorer trying to generate thumbnails for them, I am not sure. I have seen that Windows skips generating thumbnails for images over 20 MB, so thumbnails can be a valid concern.
If you want to stay on the performant side, about 50K files per folder would, in my opinion, be a better target: you wouldn't need to worry about different explorers or OSes causing the file explorer to crash or take minutes to select or display the contents.
- 131
Update 2021: as stated above, around 4 billion according to MS. I have tested with 1.8 million files in one dir on Windows 10; scrolling in Explorer and opening a random file was as fast as with a single file in the directory.
- 41
I use Windows 10, and 25,000 images in one folder (average file size 500 KB) took more than an hour to load completely. My suggested number of files in one folder is 5,000.
- 21
Some years ago I had trouble with a directory that had about 30,000 files: new files couldn't be written (it was eMule's "temp" directory...). It was on a FAT32 partition, but it's possible that I was using Win98 at the time and that it was a limit of the OS itself.
- 71
I see this is an old question but I'll chime in anyway. I work for a hosting company with 300+ customers. Some of them have millions of files. I'm aware of at least one that has 6.6 million files in a single directory (this is on a Windows server with NTFS). It does take a while to enumerate the files if we need to, but the actual customer only reads files on an individual basis. Performance is the same for them as it is for other customers with only a handful of files.
- 2,180
I have run into this issue in several different instances. As a result I have adopted a strategy of using a hive structure, where each hive layer holds 1,000 subfolders; if more items need to be housed, another layer is added, and so on. I manage one hive that can hold 4,000,000,000 items, with each item in its own subfolder at the bottom of the hive. Currently each item has anywhere between 5 and 500 files associated with it. Because of the file sizes we deal with, the hive must be spread across multiple 32 TB volumes, allocated from a RAID 5 array built from twelve 16 TB NVMe drives. The 32 TB volume size is a good compromise between minimum allocation size and wasted space. The server is connected to the other local servers over a 10 Gbps Ethernet network and can typically hit over 1 GB/s in file transfers. It's a beast, and a quick beast to boot...
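As a rough illustration only (the fan-out of 1,000 comes from this answer; the depth, zero-padding and path layout below are my own assumptions, not the production scheme), an item ID can be mapped to its hive folder like this:

    import os

    FANOUT = 1000   # subfolders per hive layer, as described above
    DEPTH = 4       # assumed depth; 1000**4 leaves addresses well over 4 billion items

    def hive_path(root: str, item_id: int) -> str:
        """Map a numeric item ID to a nested folder path with FANOUT entries per level."""
        parts = []
        for _ in range(DEPTH):
            item_id, r = divmod(item_id, FANOUT)
            parts.append(f"{r:03d}")
        return os.path.join(root, *reversed(parts))

    # Example: item 1_234_567_890 lands in root\001\234\567\890
    print(hive_path(r"D:\hive", 1_234_567_890))

The point of the layout is that no single directory ever holds more than 1,000 entries, so enumeration stays cheap at every level.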
- 119
It depends on the file system. NTFS handles large directories far better than FAT32, which has a hard limit of about 65,534 entries per directory (fewer in practice with long file names). Either way, the rule of thumb I go by is about 500 files per directory.
- 119