I made this method to read a file line by line into a vector of strings:
std::vector<std::string> read_file_lines1(const char* filepath){
    std::vector<std::string> file;
    std::ifstream input(filepath);
    Timer timer;          // my own timing utility
    float time = 0;
    std::string line;
    int i = 0;
    while (std::getline(input, line)){
        timer.reset();
        file.push_back(line);
        time += timer.elapsed();   // accumulate time spent in push_back only
        if (i == 10000)
            std::cout << "10000 done" << std::endl;
        i = ((i + 1) % 10001);     // progress marker roughly every 10k lines
    }
    std::cout << time << std::endl;
    return file;
}
But the performance was really bad in my opinion: 200k lines in ~22 seconds.
With a small change, making it a std::vector<std::string*> (using file.push_back(new std::string(line))), the time spent in the push_back calls went from ~16 seconds to ~1.2 seconds, which was a huge improvement (though still behind my goals). It has one disadvantage, however: memory management. If I want to free the memory used here, I have to remember to loop over the vector and delete each std::string*.
Now the whole method takes ~6 seconds, ~5 of which are spent in getline, and I would really like to know how to optimize that part or find an alternative.
PS: I am doing this to load a 3D model. Using the same model in Java, it takes ~0.8 seconds to read everything AND filter it (putting each line into the vertex/texture... arrays and then arranging them in index order), so I'm really disappointed that just reading each line from a file takes this long in C++. (Both the Java and C++ versions were run in debug mode, which probably makes this a bad benchmark, but I'm still really disappointed.)