
I would like to know if there are things I can do to improve the performance of NTFS when working with lots of tiny files, as is the case with Git, Subversion, NPM and other tools that create a gazillion small files. I have read other threads on SU that explain a bit of why working with small files on NTFS is so slow, but I would like to know how I can improve it, if at all.

For instance, deleting a specific large node_modules directory created by the development tool NPM takes from an instant to a few seconds at most on most Linux filesystems, but it takes 35 seconds when done using bash running in Windows Subsystem for Linux (WSL). I know it is not WSL itself that is the problem, as I remember the issue being even worse when using Explorer to delete folders. I remember firing up bash on Cygwin or MSYS2 just to save time doing this a few years ago, perhaps because the GUI operation needs to do a lot of bookkeeping to support the "Move to Recycle Bin" functionality (essentially traversing the entire file system).
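(For what it's worth, if you want to reproduce such timings on the Windows side rather than with bash's `time`, a minimal sketch using PowerShell's built-in `Measure-Command` works; the path below is a made-up placeholder, not my actual directory:)

```powershell
# Time a recursive delete on NTFS; the path is a hypothetical example.
Measure-Command {
    Remove-Item -LiteralPath 'C:\dev\myproject\node_modules' -Recurse -Force
} | Select-Object -ExpandProperty TotalSeconds
```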

I know from other situations that excluding certain directories from Windows Defender (the built-in anti-malware protection) can boost performance a lot, since it seems to intercept file system calls. The package manager Chocolatey was severely handicapped by this the last time I used it, for instance: unzipping a file took 15 minutes, compared to a few seconds with Defender disabled. Others report a 5x improvement in file creation times on WSL when excluding it from being scanned (though that might not be wise). My own experiments with disabling Defender showed a small improvement (10%) for deleting files, but a 6x improvement in file creation (40s vs 4m14s)!

So when listing possible improvements, let's just assume the WSL folder has already been added to Windows Defender's exclusion list, for instance along the lines of the sketch below.
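(For completeness, here is roughly how I'd add such an exclusion using the built-in Defender cmdlets, run from an elevated PowerShell prompt. This is just a sketch: the path is a placeholder, since the actual location of the WSL root differs per distro and WSL version.)

```powershell
# Placeholder path -- substitute your actual WSL or project directory.
$path = 'C:\dev\myproject'

# Ask Defender's real-time scanner to skip this directory tree.
Add-MpPreference -ExclusionPath $path

# Verify that the exclusion is now in place.
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath
```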
