
Is there a point at which using GPUs or coprocessors (such as the Intel Xeon Phi card or the Nvidia Tesla card) can actually reduce the speed at which software processes data?

Say I had a massive cluster of external PCI-E expansion chassis (like this one: http://www.cyclone.com/products/expansion_systems/FAQ.php), all connected to the same computer. Since the data has to be distributed across the expansions and the GPUs within them, would that not, in theory, actually slow the rate at which the data gets processed?

Just wondering. If this is not the case, why not?

Ben Franchuk

1 Answer


There is a point at which you will saturate the resources of your CPU, and the GPUs will sit idle. There is also a point at which you could run out of bus resources: since the PCI-E link is a shared bus, there is a maximum amount of data it can transfer per unit time, which could again leave GPUs sitting idle.

That being said, adding GPUs should not decrease performance; it should simply fail to improve it further.
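To get a feel for where that bus ceiling sits, here is a minimal back-of-envelope sketch. The link bandwidth, GPU throughput, batch size, and arithmetic intensity below are all assumed, illustrative numbers, not measurements from any particular card or chassis:

```
// Rough host-side estimate of whether a workload is bus-bound or compute-bound.
// All constants are hypothetical placeholders; swap in your own hardware figures.
#include <cstdio>

int main() {
    const double bus_gb_per_s   = 8.0;     // assumed PCI-E link bandwidth (GB/s)
    const double gpu_gflops     = 1000.0;  // assumed GPU throughput (GFLOP/s)
    const double batch_gb       = 4.0;     // data shipped to the card per batch (GB)
    const double flops_per_byte = 2.0;     // assumed arithmetic intensity of the kernel

    const double transfer_s = batch_gb / bus_gb_per_s;
    const double compute_s  = (batch_gb * 1e9 * flops_per_byte) / (gpu_gflops * 1e9);

    printf("transfer: %.3f s, compute: %.3f s\n", transfer_s, compute_s);
    if (transfer_s > compute_s)
        printf("Bus-bound: extra GPUs behind the same link would mostly sit idle.\n");
    else
        printf("Compute-bound: more GPUs could still help.\n");
    return 0;
}
```

With numbers like these, the transfer takes far longer than the compute, so adding more GPUs behind the same link buys nothing; it is the bus, not the GPUs, that sets the ceiling.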


Computationally speaking, there are also some problems for which a GPU implementation can be slower than a CPU implementation. Algorithms like scrypt are deliberately designed to require a large amount of RAM, precisely to prevent the non-linear speed-ups that implementations on FPGAs and GPUs would otherwise achieve.

GPUs only provide speed increases when many parallel operations are taking place; calculating a single multiplication would be no quicker. GPUs are also generally not fond of branching (conditional code execution). A toy CUDA sketch of both points follows below.
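In this sketch (array size, grid shape, and the `scale` kernel are arbitrary, made-up illustration values), a million independent multiplications keep thousands of threads busy doing identical work; a single multiplication would still pay the launch and copy overhead and gain nothing, and heavy per-thread branching would serialize threads within a warp:

```
// Toy CUDA example: element-wise multiplication maps well onto a GPU because
// every thread performs the same operation on its own element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(const float *in, float *out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                    // a uniform bound check like this is cheap;
        out[i] = in[i] * factor;  // heavy data-dependent branching, by contrast,
                                  // serializes threads within a warp (divergence)
}

int main() {
    const int n = 1 << 20;        // a million elements, chosen arbitrarily
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    // 4096 blocks of 256 threads, all doing the same multiplication in parallel
    scale<<<(n + 255) / 256, 256>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();

    printf("out[123] = %f\n", out[123]);  // expect 246.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```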

Mitch