In the following code, I dynamically allocate two arrays of int and initialize only one of them.
Why is assigning a value to each element of the initialized array faster than assigning to the uninitialized one? (~0.4 ns vs ~4 ns per element, compiled with g++ 7.4.0 with the -O3 flag, and n = 10000)
#include <chrono>
#include <cstdlib>   // atoi, srand, rand, RAND_MAX
#include <ctime>     // time
#include <iostream>

#define NOW std::chrono::high_resolution_clock::now
#define DURATION_CAST(x) \
    std::chrono::duration_cast<std::chrono::nanoseconds>(x).count()

int main(int argc, char* argv[])
{
    int n = atoi(argv[1]);
    srand(time(nullptr));

    int* x = new int[n];
    int* y = new int[n];

    // Initialize x only; y is left untouched.
    for (int i = 0; i < n; ++i)
        x[i] = rand() / (RAND_MAX / 2);

    auto t0 = NOW();
    for (int i = 0; i < n; ++i)
        x[i] = 1;
    auto t1 = NOW();
    for (int i = 0; i < n; ++i)
        y[i] = 1;
    auto t2 = NOW();

    double dt1 = DURATION_CAST(t1 - t0);
    double dt2 = DURATION_CAST(t2 - t1);
    std::cout << "Average time (x): " << dt1 / n << " ns" << std::endl;
    std::cout << "Average time (y): " << dt2 / n << " ns" << std::endl;

    delete[] x;
    delete[] y;
    return 0;
}
Here's a version with additional print statements in case the compiler optimizes away the important parts.