hashcat Forum

Full Version: VCL network question
The VCL cluster how-to wiki specifically says that a high-speed LAN should be used and not WAN or VPN. I assumed that WAN and VPN should not be used due to bandwidth limitations, but does this mean that the master and slave computers must reside within the same subnet, or can the master communicate with the slaves across a layer 3 network?

example:
master exists in 192.168.1.0/24
slaves exist in 192.168.2.0/24
Both subnets are local and will not traverse the internet.

Also, what TCP port range does VCL use?

Thanks in advance for any help!
VCL uses TCP/IP and is therefore routable, so yes, you can have nodes on different subnets. However, your router likely does not have the bandwidth necessary to support nodes on other subnets.

The bandwidth requirements depend entirely on what you are doing. For fast hashes and wordlist support you should dedicate ~500 Mbps of bandwidth per GPU (PCIe v2.0 lane speed). For slow hashes and brute force, you can get away with less. Your broker node should have enough bandwidth for all of the GPUs in the cluster. For example, if you have 24 GPUs and are doing all fast hash or wordlist stuff, your broker will need an uplink of ~12 Gbps.
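To put that rule of thumb in concrete terms, here's a quick back-of-the-envelope sketch in Python. It only uses the ~500 Mbps-per-GPU figure quoted above, so treat the output as an estimate, not a measurement:

```python
# Rough broker-uplink estimate from the ~500 Mbps-per-GPU rule of thumb above.
# Ballpark figures for fast hashes / wordlist attacks, not measurements.
GBPS_PER_GPU = 0.5  # ~500 Mbps per GPU

def broker_uplink_gbps(total_gpus: int) -> float:
    """Approximate uplink the broker node needs, in Gbps."""
    return total_gpus * GBPS_PER_GPU

for gpus in (2, 4, 20, 24):
    print(f"{gpus:2d} GPUs -> ~{broker_uplink_gbps(gpus):.0f} Gbps broker uplink")
```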

VCL uses port tcp/255.
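If you want to confirm that routing and any firewalls between the two subnets actually let a compute node reach the broker on that port, a quick connectivity check like the one below will do. This is just a sketch; the broker address is a hypothetical host on the example master subnet from the first post:

```python
# Minimal reachability check for the VCL broker port (tcp/255) from a compute node.
# BROKER is a placeholder address taken from the example subnets in this thread.
import socket

BROKER = "192.168.1.10"  # hypothetical broker/master address
PORT = 255               # VCL broker port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    try:
        sock.connect((BROKER, PORT))
        print(f"{BROKER}:{PORT} is reachable")
    except OSError as exc:
        print(f"{BROKER}:{PORT} is not reachable: {exc}")
```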
*whistle* I see... If I'm understanding your response correctly, the broker would need at least a 10 Gbps uplink to fully utilize more than 2 GPUs. My broker would be on a 1 Gbps port, so I'm going to experience a bottleneck with anything more than 2 GPUs doing fast hashes and wordlists.

Even if the broker and slaves were on the same subnet, wouldn't the bottleneck still be there?
The bottleneck would still be there if they were on 1Gbps to the switch, yes.

No, you would only need a 10 Gbit uplink to achieve near-native performance if you had 20 GPUs. With two GPUs you could achieve near-native performance on 1 Gbps.

If you're dedicated to the cause, consider picking up some second-hand InfiniBand equipment and doing IPoIB. HCAs are very reasonably priced, and you can probably pick up a decent switch for < $2k. The cost is trivial compared to what most of us have invested in GPUs :)

Something else to consider is running the broker on a compute node. Pick whichever node has the most GPUs, and run both the broker and opencld on it. This is a bit more advanced and requires a strong working knowledge of Linux to get working properly, since there will be two OpenCL libraries installed, but it is certainly possible.
Right, I understand that. I should've been more specific: my setup would be two slaves, both with two GPUs, so four total. 10 Gbps is the most common step up from 1 Gbps, but I suppose etherchanneling a couple of 1 Gbps interfaces might be an option. Granted, I've never etherchanneled a Linux box, so that will be a new frontier for me.

I've never worked with IPoIB, but it sounds interesting. I was going to use a spare Cisco 3750X in my lab.

As for running the broker on a compute node, I'll probably steer clear of that for now. I still consider myself a bit of a *nix newb. :p

What flavor of Linux do you use for your GPU clusters? I figured something really light.

Thanks again for all the info. I really appreciate it.
Yeah, you can do Ethernet channel bonding; it's extremely easy to set up. With four GPUs you could get away with 2 Gbps to the broker and 1 Gbps to the compute nodes.

I just ordered a 4x DDR (20 Gbps) IB switch and some HCAs, so I'll be going down the IPoIB route soon. GigE is just way too slow.

Most of us here use Ubuntu, because atom develops hashcat on Ubuntu and you can avoid a lot of headaches by being on the same page as him. I think most people opt for the full-blown desktop version of Ubuntu, but I like to keep things small, so all of my systems are running a bare netinstall of Ubuntu Server 12.04 LTS. I then manually install a very minimal X11 environment and apply a standard base configuration post-installation.
Nice! What manufacturer/model are you getting? How much bandwidth are you seeing on each of your compute nodes on your current rig?
Well, I haven't got it yet; it should be here on Monday. I was actually in the process of ordering everything when I replied yesterday, and I decided against the 4x DDR setup at the last minute, going with 4x SDR instead to save a bit of money.

I have five nodes in my VCL network, so I picked up five Mellanox MHEA28-XT HCAs and a Cisco SFS7000P switch for just under $800 total. It's all 4x SDR hardware, so 10 Gbps signaling, 8 Gbps usable after 8b/10b encoding.