I have a rather large repository (11 GB, 900,000+ files) and I'm having trouble iterating over it in a reasonable time. After a bit of profiling, the real bottleneck seems to be git update-index:
$ time git update-index --replace $path > /dev/null
real    0m5.766s
user    0m1.984s
sys     0m0.391s
At roughly 5.8 seconds per file, updating 900,000 files works out to about 60 days, which is unbearable. Is there any way to speed up the update-index operation?
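For reference, my script invokes git update-index once per path, so the pattern being timed is roughly the following (the loop structure and file-list name are illustrative, not my exact script):

    # One git process is started per file, which is where
    # the per-path cost comes from.
    while IFS= read -r path; do
        git update-index --replace "$path" > /dev/null
    done < files-to-update.txt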
For what it's worth, I'm running Cygwin on Windows 7.
EDIT: To give the question some more context.
The large repository comes from an SVN import and contains a number of binaries that shouldn't be in the repository. However, I want to keep the commit history and commit logs. To do that, I'm trying to replace the contents of each binary with its file hash, which should compact the repository while letting me retain history.
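Per file, the replacement step I have in mind looks roughly like this (a sketch: using sha1sum and the $path variable are my choices, nothing git mandates):

    # Swap the binary's contents for its hash, then re-index the path.
    hash=$(sha1sum "$path" | cut -d ' ' -f 1)
    printf '%s\n' "$hash" > "$path"
    git update-index --replace "$path" > /dev/null

If most of the ~5.8 s is per-process startup overhead (which seems plausible under Cygwin), a single batched git update-index --stdin invocation reading all the paths at once might amortize it, but I haven't verified that.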