I often have to code explicit schemes, which means that I have to follow the evolution of a function by incrementing the time variable t <- t + dt. It is therefore only natural to have my loops increment by dt:
int N = 7;
double t = 0., T = 1., dt = T/N; // or I could have dt = 0.03, for example
for(; t < T; t += dt){
   if(T - t < dt){
      dt = T - t;   // last step: clamp dt so that the loop lands exactly on T
   }
   //some functions that use dt, t, T etc
}
The rationale behind this is that I increment t by a constant dt at each step, except at the last iteration: if the current time t satisfies T - dt < t < T, then I replace the increment by dt <- T - t.
What are the possible pitfalls of such a procedure, or ways I could improve it? I do realise that the last time increment might be very small.
Are there any floating-point problems that might appear (should I stick to incrementing an integer counter instead)?
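For concreteness, here is a minimal sketch of the kind of accumulation error I have in mind (the value 0.1 is purely illustrative):

#include <cstdio>

int main(){
   double dt = 0.1, t = 0.;
   for(int i = 0; i < 10; ++i){
      t += dt;                   // accumulate ten steps of 0.1
   }
   std::printf("%.17g\n", t);    // typically prints 0.99999999999999989 with IEEE-754 doubles, not 1
   return 0;
}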
In terms of optimisation, I assume this technique costs essentially nothing, since branch prediction would almost always correctly predict that the if block is skipped.
EDIT
I realise my question wasn't well posed. Usually dt is given by a CFL condition, i.e. it is prescribed so that it is small enough compared to some other parameters of the scheme.
So, from a logical point of view, dt is given first; afterwards we can define an integer N = floor(T/dt), loop with an integer counter up to N, and then deal with the leftover time interval [N*dt, T] (see the integer-indexed sketch after the code below).
The code would be:
double dt = compute_dt(); // dt is given by some function (compute_dt is just a placeholder name, e.g. for a CFL routine)
double t = 0., T = 1.;
for(; t < T; t += dt){
   if(T - t < dt){
      dt = T - t;   // last step: clamp dt so that the loop lands exactly on T
   }
   //some functions that use dt, t, T etc
}
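For comparison, here is a minimal sketch of the integer-indexed version described above; compute_dt() is only a placeholder for whatever routine provides the CFL-limited step, and the loop body is left as a comment:

#include <cmath>

// placeholder for whatever the CFL condition gives (the value is only illustrative)
double compute_dt(){ return 0.03; }

int main(){
   const double T = 1.;
   const double dt = compute_dt();
   const int N = static_cast<int>(std::floor(T / dt));  // number of full steps of size dt
   for(int i = 0; i < N; ++i){
      double t = i * dt;      // recompute t from i rather than accumulating round-off
      //some functions that use dt, t, T etc
   }
   double leftover = T - N * dt;  // remaining interval [N*dt, T]
   if(leftover > 0.){
      //final, possibly very small, step of size leftover starting at t = N*dt
   }
   return 0;
}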