While backing up a collection of folders containing source code (where I realized later that I could have excluded certain folders with library files, like node_modules), I noticed that the file transfer speed slows to a crawl: a few KB/s, against the usual 60 MB/s the backup drive sustains.
I'd like to understand where the bottleneck is. Is it some computation that must be performed, interleaved with the pure I/O, that slows the whole thing down? Or does the filesystem index on the target drive have some central lock that must be acquired and released between files?
I'm using NTFS on the target backup drive, and it's an HDD.
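One thing I tried to isolate the per-file overhead (directory/metadata updates per file, as opposed to raw sequential throughput) is comparing the time to write many small files versus one large file of the same total size. This is just a rough sketch, with sizes and paths chosen arbitrarily; to be meaningful it would have to be run against the actual backup drive rather than a temp directory:

```python
import os
import shutil
import tempfile
import time

def time_write(root, n_files, file_size):
    """Write n_files files of file_size bytes each under root; return seconds elapsed."""
    payload = b"x" * file_size
    start = time.perf_counter()
    for i in range(n_files):
        with open(os.path.join(root, f"f{i}.bin"), "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each file to disk, as a careful copy tool would
    return time.perf_counter() - start

tmp = tempfile.mkdtemp()  # replace with a path on the target drive for a real test
try:
    small_dir = os.path.join(tmp, "small")
    big_dir = os.path.join(tmp, "big")
    os.mkdir(small_dir)
    os.mkdir(big_dir)

    total = 8 * 1024 * 1024  # 8 MiB of data in both scenarios
    t_small = time_write(small_dir, 2048, total // 2048)  # 2048 files of 4 KiB
    t_big = time_write(big_dir, 1, total)                 # one 8 MiB file

    print(f"many small files: {t_small:.2f}s, one large file: {t_big:.2f}s")
finally:
    shutil.rmtree(tmp)
```

If the many-small-files case is dramatically slower for the same amount of data, that would point at per-file overhead (file creation, metadata writes, seeks between the data area and the filesystem index) rather than raw bandwidth.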