I have a dictionary .txt file with probably over a thousand words and their definitions. I've already written a program to take the first word of each line from this file and check it against a string input by the user:
#include <algorithm>
#include <fstream>
#include <sstream>
#include <string>

string checkWord(string input) //returns a status string, so the return type can't be void
{
    std::ifstream inFile;
    inFile.open("Oxford.txt");
    if (inFile.is_open())
    {
        string line; //there is a "using std::string" in another file
        while (getline(inFile, line))
        {
            //read the first word from each line
            std::istringstream iss(line);
            string word;
            iss >> word;
            //make sure the strings being compared are the same case
            std::transform(word.begin(), word.end(), word.begin(), ::tolower);
            std::transform(input.begin(), input.end(), input.begin(), ::tolower);
            if (word == input)
            {
                //Do a thing with word
            }
        }
        inFile.close();
        return "End of file";
    }
    else
    {
        return "Unable to open file";
    }
}
But if I'm checking more than a sentence or so of input, the processing time becomes noticeable. I've thought about a few ways of making it shorter:
- Making a .txt file for each letter of the alphabet (pretty easy to do, but not really a long-term fix)
- Using unordered_set to compare the strings (like in this question); the only problem with this might be the initial cost of building the set from the text file (see the sketch after this list)
- Using some other data structure to compare strings? (Like std::map)
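
For the unordered_set idea, this is roughly what I have in mind (just a sketch, not tested; the file name and lowercase handling mirror my existing function): build the set once, then every lookup is O(1) on average instead of re-reading the file.

#include <algorithm>
#include <fstream>
#include <sstream>
#include <string>
#include <unordered_set>

std::unordered_set<std::string> loadWords(const std::string& path)
{
    std::unordered_set<std::string> words;
    std::ifstream inFile(path);
    std::string line;
    while (std::getline(inFile, line))
    {
        //keep only the first (lowercased) word of each line
        std::istringstream iss(line);
        std::string word;
        if (iss >> word)
        {
            std::transform(word.begin(), word.end(), word.begin(), ::tolower);
            words.insert(word);
        }
    }
    return words;
}

//after loading once, each check is just: bool found = words.count(input) > 0;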
Given that the data is already "sorted", what kind of data structure or method should I employ in order to (if possible) reduce time complexity? (A rough sketch of the binary-search idea I'm picturing is below.) Also, are there any issues with the way I am comparing the strings? For example, would string::compare() be quicker than ==?
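
Since the file is already sorted, I was also picturing something like this: load the words once into a std::vector (in file order) and use std::binary_search for O(log n) lookups. This assumes the lowercased words end up in plain lexicographic order, which I think they do for a normal dictionary file.

#include <algorithm>
#include <string>
#include <vector>

bool containsWord(const std::vector<std::string>& sortedWords, const std::string& input)
{
    //assumes sortedWords is sorted and both sides are already lowercase
    return std::binary_search(sortedWords.begin(), sortedWords.end(), input);
}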