I'm doing some machine learning work that benefits tremendously from using the GPU. I'm at the limits of my current setup (a workstation with a single GTX 580), and I really don't have room for another computer at home. So I'm looking to build a GPU server (and very possibly several of them) and trying to find the most cost-effective way to do so.
Ideally I'd like to build something like NVIDIA's Tesla servers (e.g. the S2075), but with GTX 580s instead of Tesla cards. That design fits 4 cards into a 1U chassis, which is then connected via PCIe extenders to a host system. A DIY version of this doesn't seem to exist.
So my next plan is to go 4U and basically put a standard quad-SLI build in that. I'd probably use two 850 W PSUs to power the 4 cards. Cooling could also be an issue.
So my questions are specifically these:
- If I'm primarily using the GPU and only using the CPU for basic logic and orchestration, is it reasonable to use a low-end CPU like an i3? (See the sketch after this list for the kind of workload I mean.)
- If I want to co-locate, wouldn't this be fairly expensive and use a lot of power?
- Am I going about this the wrong way? Is there a much easier/more cost-effective way to build GPU number crunchers without keeping them in my apartment?
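To clarify what I mean by "basic logic": here's a minimal CUDA sketch (a toy SAXPY kernel, not my actual code) of the kind of workload I'm running. The host CPU only allocates buffers, copies data, and launches kernels; all the arithmetic runs on the GPU, which is why I'm wondering if a cheap CPU is enough.

```
#include <cuda_runtime.h>
#include <stdio.h>

// All of the actual number crunching happens in this kernel, on the GPU.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side setup: allocate and fill input data.
    float *h_x = (float *)malloc(bytes), *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // The CPU's only job: configure the launch, kick off the kernel, and wait.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it.
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);   // expect 4.0

    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}
```

In my real jobs the kernels are bigger and run for much longer, but the division of labor is the same: the CPU mostly sits idle waiting on the GPUs.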