
I've noticed that when I search for a file by name (on Windows or Linux) it's typically a disk-intensive process, especially on Windows. It seems that the utility (Windows Search, or find under Cygwin) scans the entire directory tree, examining each file one by one.

I'm wondering: why not load the Master File Table (or its equivalent on filesystems other than NTFS) into memory and parse it purely in memory? I suppose that's similar to the indexes maintained by more modern search tools such as Windows Search, Google Desktop Search, and Spotlight, but even those are indirect: they build and maintain their own separate index rather than reading the filesystem's metadata directly. I guess filesystems don't normally make their metadata available to external programs?

I can't prove that the search isn't already based on the MFT, but the way it performs suggests otherwise.

Stephen

1 Answer


There are programs that search NTFS volumes by reading the MFT directly, for example these open-source projects:

http://sourceforge.net/projects/swiftsearch/

http://sourceforge.net/projects/ntfs-search/

They're very fast, but the problem is that once you start going straight to the MFT you bypass functionality such as security ACLs and shell extensions. Therefore most of these programs need to run with elevated permissions and don't necessarily produce the same results as an API-based search.
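
For illustration, here's a minimal sketch of the general technique (not taken from either project above): enumerating every file name on an NTFS volume through the documented FSCTL_ENUM_USN_DATA ioctl, which returns records straight out of the MFT instead of walking directories. Opening the raw volume is exactly what requires elevation, and error handling is kept to a minimum. Note that older SDKs name the MFT_ENUM_DATA_V0 structure MFT_ENUM_DATA.

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Opening the raw volume (not a directory) is what needs admin rights. */
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "Open volume failed (run elevated?): %lu\n",
                GetLastError());
        return 1;
    }

    MFT_ENUM_DATA_V0 med = {0};   /* start at the first file record   */
    med.LowUsn  = 0;
    med.HighUsn = MAXLONGLONG;

    DWORDLONG buf[8192];          /* 64 KiB, 8-byte aligned output buffer */
    DWORD bytes;

    /* Each call returns a batch of USN records read from the MFT. */
    while (DeviceIoControl(hVol, FSCTL_ENUM_USN_DATA,
                           &med, sizeof(med),
                           buf, sizeof(buf), &bytes, NULL)) {
        /* Output layout: one DWORDLONG (the next starting file
           reference), followed by packed USN_RECORD structures. */
        PUSN_RECORD rec = (PUSN_RECORD)((BYTE *)buf + sizeof(DWORDLONG));
        BYTE *end = (BYTE *)buf + bytes;
        while ((BYTE *)rec < end) {
            /* FileName is not NUL-terminated; FileNameLength is in bytes. */
            wprintf(L"%.*s\n",
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (PWCHAR)((BYTE *)rec + rec->FileNameOffset));
            rec = (PUSN_RECORD)((BYTE *)rec + rec->RecordLength);
        }
        med.StartFileReferenceNumber = buf[0];  /* resume where we left off */
    }

    CloseHandle(hVol);
    return 0;
}

Notice that the loop prints bare file names without paths and ignores ACLs entirely: every name in the MFT comes back, whether or not the calling user could open the file. Real tools resolve paths by joining each record's ParentFileReferenceNumber against the directory records, which is why they hold the whole table in memory, exactly as the question proposes.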

snowdude