
My question is similar to some "speed vs. cores" questions that have already been asked on this site. However, I am interested in some very specific and technical points, so I hope my question qualifies for an answer rather than being closed as solely opinion-based.

At my workplace, we frequently approach certain statistical problems using computer simulation. The software we use is largely single-threaded, but we run it in parallel by launching multiple instances of the programs. The simulations are computationally intensive, and one iteration may take up to an hour to complete.

To speed up these calculations, I have been asked to propose a few machine models that would be best suited. However, I am unsure whether, at this point, the computations would benefit more from higher clock speed or from more parallel processes.

The computers we currently use are server-scale machines with multiple CPUs at relatively high clock speed (16 physical cores at 2.9GHz each, no GPU). So the decision boils down to two options:

  • investing in similar machines with slightly higher clock speed (e.g., 3.2GHz) and the same number of cores (say 16), or alternatively...
  • stepping down on the clock speed (e.g., 2.6GHz) and going for a larger number of cores (say 20 or 24).

I am unsure if increased clock speed would pay off even in computationally intensive applications because I assume that the performance does not increase linearly with clock speed. Strictly speaking, I could simply approach the problem like this:

  • 3.2GHz * 16 cores = 51.2GHz, or alternatively...
  • 2.6GHz * 24 cores = 62.4GHz
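
Expressed in code, this naive "aggregate clock" arithmetic is simply the following (a minimal sketch using the example figures above):

    # Naive "aggregate clock" comparison: throughput treated as
    # simply cores multiplied by clock speed.
    options = {
        "16 cores @ 3.2 GHz": 16 * 3.2,
        "24 cores @ 2.6 GHz": 24 * 2.6,
    }

    for name, aggregate_ghz in options.items():
        print(f"{name}: {aggregate_ghz:.1f} aggregate GHz")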

However, I am pretty sure this calculation is flawed in a number of ways. But in what way exactly? Money is not really an issue in this particular case, and computation on GPUs must be ruled out, unfortunately.

The machines will run Windows Server 2012 R2 and will be used exclusively for this kind of calculation. All the programs involved are optimized for 64-bit, but occasionally 32-bit programs may be involved as well. Memory and HDD should not be a major factor to consider.

Cheers!

SimonG

1 Answer

The calculation is not precise. It is correct from a purely mathematical point of view, but to estimate real-world computing power you need to multiply the result by a factor of roughly 0.75 to 0.9, and the more cores/processors there are, the lower that factor tends to be. This happens because some of the CPU power is spent parallelizing the tasks and then assembling the final result from the different threads.
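
A minimal sketch of that adjustment in Python, assuming purely illustrative efficiency factors (0.90 for the 16-core option, 0.80 for the 24-core option); the real factors depend on the workload and would have to be measured:

    # Effective throughput modelled as cores x clock x efficiency,
    # where the efficiency factor shrinks as the core count grows.
    # The 0.90 / 0.80 values are illustrative assumptions, not measurements.
    def effective_throughput(cores, clock_ghz, efficiency):
        return cores * clock_ghz * efficiency

    options = [
        # (description, cores, clock in GHz, assumed efficiency)
        ("16 cores @ 3.2 GHz", 16, 3.2, 0.90),
        ("24 cores @ 2.6 GHz", 24, 2.6, 0.80),
    ]

    for name, cores, clock, eff in options:
        naive = cores * clock
        adjusted = effective_throughput(cores, clock, eff)
        print(f"{name}: naive {naive:.1f} GHz, adjusted ~{adjusted:.1f} GHz")

With these assumed factors, the naive 51.2 vs. 62.4 gap shrinks to roughly 46 vs. 50 "effective GHz", and a slightly lower factor for the 24-core machine would make the two options essentially equal, so the factor is worth measuring on your actual simulations before deciding.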

Romeo Ninov