In your current implementation
int a = 1, b = 2;
for (int n = 0; n < 100; n++)
{
    int c = a + b + n; // never read afterwards
    int d = a - b - n; // never read afterwards
}
you're doing nothing: both c and d are local variables that exist only
within the scope of the for loop; if the optimizer is smart enough to prove
that integer overflow is impossible here (even the extreme values
1 + 2 + 99 and 1 - 2 - 99 are well within [int.MinValue..int.MaxValue]),
it may well eliminate the entire loop, possibly with a warning to the developer.
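If you want the loop to survive optimization (e.g., for benchmarking), make
c and d observable outside the loop; a minimal sketch, assuming all you need
is to keep the computation alive:

int a = 1, b = 2;
long sink = 0; // accumulating into sink makes the results observable

for (int n = 0; n < 100; n++)
{
    int c = a + b + n;
    int d = a - b - n;
    sink += c + d; // c and d are now used, so the loop can't be removed
}

Console.WriteLine(sink); // consume sink so the whole computation stays live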
A real-world example is
for (int n = 0; n < N; n++)
{
    f(n);
    g(n);
}
versus
for (int n = 0; n < N; n++)
    f(n);

for (int n = 0; n < N; n++)
    g(n);
where neither f(n) nor g(n) has side effects and N is large enough.
So far so good: in the first case the execution time is

T = f(0) + g(0) +
    f(1) + g(1) +
    ...
    f(N - 2) + g(N - 2) +
    f(N - 1) + g(N - 1)

In the second case it is

T = f(0) + f(1) + ... + f(N - 2) + f(N - 1) +
    g(0) + g(1) + ... + g(N - 2) + g(N - 1)

As you can see, the total execution times are identical, not merely of the same O(...) order.
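Formally, it's the same sum evaluated in a different order; since addition is
commutative and associative (here f(n) and g(n) stand for the running times of
the calls, as above),

T = Σ (f(n) + g(n)) = Σ f(n) + Σ g(n), with n running from 0 to N - 1.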
In real life there can be a minuscule difference between the two implementations:
loop initialization overhead, implementation details, CPU register utilization, etc.
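If you want to measure this yourself, here is a rough benchmark sketch. F and G
below are hypothetical stand-ins for your side-effect-free f(n) and g(n), and the
sink variable keeps the results observable so the optimizer can't drop the loops:

using System;
using System.Diagnostics;

class FusedVsSplit
{
    // Hypothetical stand-ins for side-effect-free f(n) and g(n).
    static int F(int n) => 2 * n + 1;
    static int G(int n) => n - 1;

    static void Main()
    {
        const int N = 100_000_000;
        long sink = 0; // makes the results observable

        var sw = Stopwatch.StartNew();
        for (int n = 0; n < N; n++)
        {
            sink += F(n);
            sink += G(n);
        }
        sw.Stop();
        Console.WriteLine($"single loop: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int n = 0; n < N; n++)
            sink += F(n);
        for (int n = 0; n < N; n++)
            sink += G(n);
        sw.Stop();
        Console.WriteLine($"two loops:   {sw.ElapsedMilliseconds} ms (sink = {sink})");
    }
}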