02-20-2019, 06:03 AM
(02-20-2019, 12:18 AM)Chick3nman Wrote:
(02-19-2019, 11:16 PM)FrostByte Wrote: So the dual Xeons are pretty much necessary in order to get the most PCIe lanes possible?
Not dual Xeons specifically, just something built for real workstation use. A board with PLX chips is not the end of the world; you likely won't care about the performance difference from switching like that. But the Threadripper platforms I can find have a hard lane limit, and seemingly no real workstation boards.
What about this:
TYAN FT77CB7079 - 10x 3.5” SATA/SAS - 8x NVIDIA GPU - Dual 1-Gigabit Ethernet - 2000W Redundant (2+1)
2 x Six-Core Intel® Xeon® Processor E5-2603 v4 1.70GHz 15MB Cache (85W)
4 x 16GB PC4-19200 2400MHz DDR4 ECC Registered DIMM
480GB Intel® SSD D3-S4610 Series 2.5" SATA 6.0Gb/s Solid State Drive
No Operating System
Those CPUs carry 40 PCIe lanes apiece, so 80 lanes total across the two sockets.
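For a rough sanity check on whether 80 lanes actually covers 8 GPUs, here's a back-of-the-envelope sketch. The 2 x 40 lane figure is from the CPU specs above; the per-GPU link widths are just the standard x16/x8 cases, not anything this specific Tyan board guarantees:

```python
# Lane budget: two E5-2603 v4 CPUs at 40 PCIe 3.0 lanes each
cpu_lanes = 2 * 40
gpus = 8

# 8 GPUs at full x16 would need 128 lanes -- more than the CPUs provide,
# so a board like this relies on x8 links or PLX switches for full x16.
need_x16 = gpus * 16
need_x8 = gpus * 8

print(cpu_lanes)  # 80
print(need_x16)   # 128, over budget
print(need_x8)    # 64, fits with lanes to spare for storage/NIC
```

Which is why the PLX-vs-native-lanes question above matters: without switches, 8 GPUs on this platform run at x8, which for hash cracking workloads is usually fine since they're not bandwidth-bound.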
The only downside is that it's a 4U rackmount. It's a configuration similar to the one in this blog post -- https://www.shellntel.com/blog/2017/2/8/...rd-cracker