I'm using an open-source library for I2C bus actions. This library frequently calls a function to obtain the current timestamp with millisecond resolution.
Example Call:
nowtime = timer_nowtime();
while ((i2c_CheckBit(dev) == true) && ((timer_nowtime() - nowtime) < I2C_TIMEOUT));
The application using this I2C library consumes a lot of CPU. I found that the running program spends most of its time in the function timer_nowtime().
The original function:
unsigned long timer_nowtime(void) {
    static bool usetimer = false;
    static unsigned long long inittime;
    struct tms cputime;
    if (usetimer == false)
    {
        inittime  = (unsigned long long)times(&cputime);
        usetimer = true;
    }
    return (unsigned long)((times(&cputime) - inittime)*1000UL/sysconf(_SC_CLK_TCK));
}
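For comparison, part of the overhead here may be that sysconf(_SC_CLK_TCK) is queried on every call. A variant of the same times()-based approach that caches the tick rate once might look like this (untested sketch):

#include <stdbool.h>
#include <sys/times.h>
#include <unistd.h>

unsigned long timer_nowtime(void)
{
    static bool usetimer = false;
    static unsigned long long inittime;
    static long ticks_per_sec;              /* cached clock-tick rate */
    struct tms cputime;

    if (usetimer == false)
    {
        ticks_per_sec = sysconf(_SC_CLK_TCK);   /* query only once */
        inittime = (unsigned long long)times(&cputime);
        usetimer = true;
    }
    /* elapsed ticks since the first call, scaled to milliseconds */
    return (unsigned long)((times(&cputime) - inittime) * 1000UL / ticks_per_sec);
}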
My aim now is to improve the efficiency of this function. I tried it this way:
struct timespec systemtime;
clock_gettime(CLOCK_REALTIME, &systemtime);
//convert the timestamp to milliseconds
// incorrect way, because (1 / 1000000UL) always returns 0 -> thanks Pace
//return (unsigned long) ( (systemtime.tv_sec * 1000UL) + (systemtime.tv_nsec
//              * (1 / 1000000UL)));
return (unsigned long) ((systemtime.tv_sec * 1000UL)
            + (systemtime.tv_nsec / 1000000UL));
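Since these timestamps are only used to measure elapsed time, CLOCK_MONOTONIC may be the safer clock: unlike CLOCK_REALTIME, it does not jump when the system time is adjusted. A complete replacement along those lines might look like this (sketch, not verified against the library):

#include <time.h>

unsigned long timer_nowtime(void)
{
    struct timespec systemtime;

    /* monotonic clock: unaffected by NTP or manual clock changes */
    clock_gettime(CLOCK_MONOTONIC, &systemtime);

    /* seconds and nanoseconds combined into milliseconds */
    return (unsigned long)((systemtime.tv_sec * 1000UL)
                + (systemtime.tv_nsec / 1000000UL));
}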
Unfortunately, I can't declare this function inline (no clue why).
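In case the inline problem comes from the function being defined in a .c file and called from other translation units, defining it as static inline in a shared header is a common workaround. A sketch (timer.h is a hypothetical header name; the body just reuses the clock_gettime() version above):

/* timer.h -- hypothetical header */
#include <time.h>

static inline unsigned long timer_nowtime(void)
{
    struct timespec systemtime;
    clock_gettime(CLOCK_MONOTONIC, &systemtime);
    return (unsigned long)((systemtime.tv_sec * 1000UL)
                + (systemtime.tv_nsec / 1000000UL));
}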
Which way is more efficient for obtaining the current timestamp with millisecond resolution? I'm sure there is a more performant way to do so. Any suggestions?
Thanks.