Building a 65 GPU distributed Rig... Open to suggestions. - hashcat Forum (https://hashcat.net/forum/thread-7657.html)
Building a 65 GPU distributed Rig... Open to suggestions. - GoBlack - 07-11-2018

I am just beginning the process of building a 65 AMD GPU hashcat pen-testing rig consisting of five 13-GPU servers. I wanted to see if anyone here has done something similar in the past and get recommendations on the best possible way to go about it. I am aware of the various options for distributed network hashcat configurations such as Hashtopolis, Cracklord, and Hashview, but I would like to hear what others recommend based on experience before I start. Also, are there any new options out there that might be better than the ones mentioned? Thanks for your replies in advance!

RE: Building a 65 GPU distributed Rig... Open to suggestions. - Chick3nman - 07-11-2018

In case you haven't read much around the forums, AMD GPUs tend to be serious fire hazards when running hashcat. Hashcat can be a very, very demanding workload, beyond almost anything else, to the point of overloading the thermal designs of many AMD GPUs and killing the card or motherboard. 65 AMD GPUs all running together is a Chernobyl disaster waiting to happen.

RE: Building a 65 GPU distributed Rig... Open to suggestions. - GoBlack - 07-11-2018

The servers I'm converting were previously used for mining ETH, and the rack is equipped with a ton of fans to get the heat out, as well as fans to move air into and out of the server room. How hot are we talking? Above 90 degrees? If so, that is ridiculous! LOL

RE: Building a 65 GPU distributed Rig... Open to suggestions. - Chick3nman - 07-11-2018

It's not just about heat on the cards; the cards also like to draw too much power through the PCIe slot, which can damage it even when the heat itself isn't the direct cause. Commercial GPU chassis, such as the ever-popular systems used as the platform for the Brutalis (https://sagitta.pw/hardware/gpu-compute-nodes/brutalis/), still can't handle it. The 290X was a nightmare in those chassis and caused all kinds of damage under extended use.

RE: Building a 65 GPU distributed Rig... Open to suggestions. - GoBlack - 07-12-2018

I'm using Asus B250 motherboards, which are used to maximum draw from these cards. They are all mounted on open mining racks instead of in a chassis to maximize airflow. The rack is about 5 feet wide and 6 feet tall. I'll try just 13 GPUs and monitor them for 12 hours before I even consider running them all... It seems to me that if hashcat causes such big problems, it needs a -tstop-style function built into it that shuts the program down if a GPU gets above a set temperature. I would still like to know the best software for distributing one hashcat operation over 5 separate servers. I've heard a lot of good things about using Hashview for this...

RE: Building a 65 GPU distributed Rig... Open to suggestions. - Chick3nman - 07-12-2018

There is a thermal cutoff in hashcat, and it's even configurable with a flag. That said, I'd be very careful with hashcat. Mining != max load. Even 100% util != 100% util; you will notice this when swapping between algorithms in hashcat. 100% utilization on bcrypt causes less power draw and heat than 100% utilization on WPA or WordPress hashes, in my experience. I believe the agreed-upon load test is usually an MD4 brute force against a single hash with a large mask.

RE: Building a 65 GPU distributed Rig... Open to suggestions. - soxrok2212 - 07-13-2018

I too would stay away from AMD cards. I repurposed an AMD mining rig for hashcat and decided to sell off my inventory for Nvidia. As for distribution software, I use Hashtopolis.
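For reference, the thermal cutoff Chick3nman mentions is the --hwmon-temp-abort flag in current hashcat versions (formerly --gpu-temp-abort), and the load test he describes might look roughly like this; hash.txt, the 80-degree threshold, and the 8-character mask are placeholder assumptions, not values from the thread:

    # worst-case load test: brute-force a single MD4 hash (-m 900) with a large
    # all-charset mask, aborting if any GPU reports 80 C or hotter
    hashcat -m 900 -a 3 -w 3 --hwmon-temp-abort=80 hash.txt '?a?a?a?a?a?a?a?a'

MD4 is fast enough that the kernel spends nearly all of its time computing rather than waiting on candidates, which is why it pushes power draw and temperatures harder than mining or slower algorithms like bcrypt.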
RE: Building a 65 GPU distributed Rig... Open to suggestions. - Flomac - 07-13-2018

Sounds like you already have the GPUs. Which type are you using? Keep in mind that power usage is a significant factor and needs to be part of your calculation.
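On the distribution question: if a full framework like Hashtopolis is more than needed, hashcat can also be split across nodes by hand with its native --keyspace, -s (--skip), and -l (--limit) options. A minimal sketch, assuming five nodes and the same hypothetical hash.txt and mask as above:

    # print the total keyspace for this attack (use the same -m/-a/mask as the real run)
    hashcat --keyspace -m 900 -a 3 '?a?a?a?a?a?a?a?a'
    # if that prints K, give node i (i = 0..4) one fifth of the work; on the last
    # node, drop -l so it runs to the end and picks up any rounding remainder
    hashcat -m 900 -a 3 -s $((i * K / 5)) -l $((K / 5)) hash.txt '?a?a?a?a?a?a?a?a'

Tools like Hashtopolis automate exactly this kind of chunking, plus agent management and result collection, which is why they are usually preferred at 5-server scale.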