I've noticed that when I search for a file by name (in Windows or Linux), it's typically a disk-intensive process, especially on Windows. It seems that the utility (Windows Search, or `find` in Cygwin) scans the entire directory tree, considering each file one by one.
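To make concrete what I mean by "considering each file one by one," here's a rough Python sketch of what I believe `find -name` effectively does (the function name is mine, and real `find` is more sophisticated, e.g. it supports glob patterns and avoids following symlink loops):

```python
import os

def find_by_name(root, name):
    """Walk the directory tree under `root`, reading each directory's
    entries one by one -- roughly what `find root -name name` does."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Every directory visited means at least one disk read for its
        # entries; nothing here consults the filesystem's own metadata
        # structures (like the MFT) directly.
        for entry in filenames + dirnames:
            if entry == name:
                matches.append(os.path.join(dirpath, entry))
    return matches
```

The point is that the cost grows with the number of directories visited, since each one requires its entries to be read from disk (or the OS page cache).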
I'm wondering: why not load the Master File Table (or its equivalent, if the filesystem isn't NTFS) into memory and parse it purely in memory? I suppose that's similar to the indexes maintained by more modern search tools like Windows Search, Google Desktop Search, and Spotlight, but even those are indirect: they build their own separate index by crawling the tree rather than reading the filesystem's metadata structures directly. I guess filesystems don't normally expose their internal metadata to external programs?
I can't prove that these searches aren't already MFT-based, but judging by how they perform, it seems unlikely.