Is there a point at which using GPU processing or coprocessors (such as the Intel Xeon Phi card or the Nvidia Tesla card) can actually reduce the speed at which software processes data?
Say I had a massive cluster of external PCI-E expansion chassis (like this one: http://www.cyclone.com/products/expansion_systems/FAQ.php), all connected to the same computer. Since the data would have to be distributed across the expansions and then to the GPUs inside them, wouldn't that, at least in theory, slow the rate at which the data gets processed?
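For concreteness, here's a minimal CUDA timing sketch of the kind of measurement I have in mind (the kernel name `scale` and the buffer size are just illustrative, not from any real setup). If the kernel time comes out small next to the two `cudaMemcpy` times, then the PCI-E bus, not the GPU, is setting the throughput:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial kernel: one multiply per element, so on-GPU compute time
// should be tiny relative to moving the same buffer over PCI-E.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 26;                 // 64M floats = 256 MB
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // in over PCI-E
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(d, n);             // on-GPU compute
    cudaEventRecord(t2);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // back over PCI-E
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float up, kern, down;
    cudaEventElapsedTime(&up,   t0, t1);
    cudaEventElapsedTime(&kern, t1, t2);
    cudaEventElapsedTime(&down, t2, t3);
    printf("H2D: %.2f ms  kernel: %.2f ms  D2H: %.2f ms\n", up, kern, down);

    cudaFree(d);
    free(h);
    return 0;
}
```

My worry is that for light per-byte work like this, the transfers would dominate, and chaining external expansion chassis would only add more hops to that same path.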
Just wondering. If this is not the case, why not?