I was trying to measure the execution time of a particular piece of code (a loop, a function, etc.). I heard the time command or the clock() function does the job, but my requirement is accuracy in the millisecond/microsecond range. So I wrote something like this:
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval ts1, ts2;
    long long time1, time2, diff;
    int i, var;
    scanf("%d", &var);
    gettimeofday(&ts1, NULL);
    time1 = (ts1.tv_sec * 1000000LL) + ts1.tv_usec; /* widen before multiplying to avoid overflow */
    for (i = 0; i < var; i++); // <-- Trying to measure execution time for the loop
    gettimeofday(&ts2, NULL);
    time2 = (ts2.tv_sec * 1000000LL) + ts2.tv_usec;
    printf("-------------------------\n");
    diff = time2 - time1;
    printf("total %lld microseconds\n", diff);
    printf("%lld seconds\n", diff / 1000000);
    diff %= 1000000;
    printf("%lld milliseconds\n", diff / 1000);
    diff %= 1000;
    printf("%lld microseconds\n", diff);
    printf("-------------------------\n");
    return 0;
}
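In case it's relevant, I also tried a variant based on clock_gettime() with CLOCK_MONOTONIC, which I understand isn't affected by wall-clock adjustments. A minimal sketch, assuming a POSIX system (it behaves the same for me as the gettimeofday() version):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    int i, var;
    scanf("%d", &var);
    clock_gettime(CLOCK_MONOTONIC, &start); /* monotonic clock: immune to system clock jumps */
    for (i = 0; i < var; i++);
    clock_gettime(CLOCK_MONOTONIC, &end);
    /* nanosecond-resolution difference, printed in microseconds */
    long long diff_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                      + (end.tv_nsec - start.tv_nsec);
    printf("total %lld microseconds\n", diff_ns / 1000);
    return 0;
}

(On older glibc, I believe this needs linking with -lrt.)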
I have two concerns here:
- Is the above code reliable, and does it do what I intend? I'm not quite sure about it ;)
- When I compile the code with optimization level -O2, it doesn't work at all. I know -O2 applies some transformations, but how can I see what happened? If the approach in 1 is sound, can anyone suggest how to work around the -O2 issue? (I've sketched my own attempt below.)
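What I tried for the -O2 problem is declaring the loop counter volatile. This is just a sketch of my attempt; I'm assuming volatile is enough to keep the compiler from deleting the loop, since every write to the counter then counts as an observable side effect:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval ts1, ts2;
    volatile int i; /* volatile: each store to i is an observable side effect */
    int var;
    scanf("%d", &var);
    gettimeofday(&ts1, NULL);
    for (i = 0; i < var; i++); /* -O2 can no longer remove this loop */
    gettimeofday(&ts2, NULL);
    printf("total %lld microseconds\n",
           (ts2.tv_sec - ts1.tv_sec) * 1000000LL + (ts2.tv_usec - ts1.tv_usec));
    return 0;
}

With this change I do get a nonzero timing under -O2, but I'm not sure whether it still measures anything meaningful.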
Appreciate the help! Thanks.