vclHashcat-plus64 [s]tatus
#1
I have 32 GPUs in 8 servers and I'm trying to generate a few benchmarks. I'm running a crack with 16 of the GPUs against a WPA handshake I had lying around, but it appears the GPUs are being underutilized:

Quote:./vclHashcat-plus64.bin --gpu-temp-disable -m 2500 xxxxx.hccap rockyou.txt -r rules/best64.rule
oclHashcat-plus v0.14 by atom starting...

Hashes: 1 total, 1 unique salts, 1 unique digests
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Rules: 78
Workload: 16 loops, 8 accel
Watchdog: Temperature abort trigger disabled
Watchdog: Temperature retain trigger disabled
Device #1: Tahiti, 2048MB, 0Mhz, 32MCU
Device #2: Tahiti, 2048MB, 0Mhz, 32MCU
Device #3: Tahiti, 2048MB, 0Mhz, 32MCU
Device #4: Tahiti, 2048MB, 0Mhz, 32MCU
Device #5: Tahiti, 2048MB, 0Mhz, 32MCU
Device #6: Tahiti, 2048MB, 0Mhz, 32MCU
Device #7: Tahiti, 2048MB, 0Mhz, 32MCU
Device #8: Tahiti, 2048MB, 0Mhz, 32MCU
Device #9: Tahiti, 2048MB, 0Mhz, 32MCU
Device #10: Tahiti, 2048MB, 0Mhz, 32MCU
Device #11: Tahiti, 2048MB, 0Mhz, 32MCU
Device #12: Tahiti, 2048MB, 0Mhz, 32MCU
Device #13: Tahiti, 2048MB, 0Mhz, 32MCU
Device #14: Tahiti, 2048MB, 0Mhz, 32MCU
Device #15: Tahiti, 2048MB, 0Mhz, 32MCU
Device #16: Tahiti, 2048MB, 0Mhz, 32MCU

When I type "s" to see the status I don't get any response, so I'm forced to use other means to check the speed:

Quote:$ amdconfig --adapter=all --odgt

Adapter 0 - AMD Radeon HD 7900 Series
Sensor 0: Temperature - 39.00 C

Adapter 1 - AMD Radeon HD 7900 Series
Sensor 0: Temperature - 37.00 C

Adapter 2 - AMD Radeon HD 7900 Series
Sensor 0: Temperature - 39.00 C

Adapter 3 - AMD Radeon HD 7900 Series
Sensor 0: Temperature - 36.00 C
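
To poll this repeatedly, the same query can just be wrapped in a loop (a quick sketch, assuming the standard watch utility is installed):

$ watch -n 5 'amdconfig --adapter=all --odgt'   # re-run the temperature query every 5 seconds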

Either way, I'd expect them to be running much hotter, so I hit Ctrl+C to abort:

Quote:Session.Name...: oclHashcat-plus
Status.........: Aborted
Rules.Type.....: File (rules/best64.rule)
Input.Mode.....: File (rockyou.txt)
Hash.Target....: xxxxxxx (xx:xx:xx:xx:xx:xx <-> xx:xx:xx:xx:xx:xx)
Hash.Type......: WPA/WPA2
Time.Started...: Sun May 5 13:07:31 2013 (3 mins, 20 secs)
Time.Estimated.: Sun May 5 16:03:00 2013 (2 hours, 52 mins)
Speed.GPU.#1...: 6627/s
Speed.GPU.#2...: 6598/s
Speed.GPU.#3...: 6441/s
Speed.GPU.#4...: 6628/s
Speed.GPU.#5...: 6587/s
Speed.GPU.#6...: 6708/s
Speed.GPU.#7...: 6672/s
Speed.GPU.#8...: 6573/s
Speed.GPU.#9...: 6483/s
Speed.GPU.#10...: 6540/s
Speed.GPU.#11...: 6598/s
Speed.GPU.#12...: 6723/s
Speed.GPU.#13...: 6557/s
Speed.GPU.#14...: 6355/s
Speed.GPU.#15...: 6487/s
Speed.GPU.#16...: 6479/s
Speed.GPU.#*...: 105.1k/s

So... Question #1: Why doesn't the [s]tatus command work?

Question #2: Are these speeds normal for a 7970 GPU?
#2
status doesn't work, you just have to live with it.

no, those speeds are not normal for a 7970. a single 7970 should be able to pull about 130 kh/s. but those speeds are probably to be expected with the amount of work you've given it. rockyou + best64 is hardly any work, especially for that many devices.
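
rough math, assuming rockyou.txt is about 14.3M lines: 14.3M words x 78 rules is roughly 1.1 billion candidates. at the ~105 kh/s aggregate in your paste that's done in under 3 hours, which lines up with your ETA, so there simply isn't enough work there to keep 16 tahitis fully fed over vcl.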

it would also be helpful if you told us about your cluster architecture, broker node specs, network topology, etc.
#3
(05-06-2013, 03:10 AM)epixoip Wrote: status doesn't work, you just have to live with it.

no, those speeds are not normal for a 7970. a single 7970 should be able to pull about 130 kh/s. but those speeds are probably to be expected with the amount of work you've given it. rockyou + best64 is hardly any work, especially for that many devices.

it would also be helpful if you told us about your cluster architecture, broker node specs, network topology, etc.

The broker is a dedicated server with a 3GHz E5450 Xeon, 16GB of RAM, and a quad-port Intel gigabit Ethernet card. Each port is connected to a separate gigabit switch, and two of the compute nodes (four GPUs each) are connected to each switch.

I'm hoping it's not a network bandwidth issue... InfiniBand isn't an easy option with the motherboards in the compute nodes.
#4
broker specs are solid, although more ram may be needed if you ever decide to use all 32 GPUs. you won't be able to use very high -n values with that "little" ram.

you're on ethernet, so you're guaranteed to be network-bound. but your network issues should mostly be latency related, not bandwidth related. infiniband latencies are 1/100th of ethernet latencies.

this should be easy to test: just throw a ton of work at the cards and monitor the bandwidth. if your pipes are saturated and you're not achieving full acceleration, then bandwidth is the problem. if your pipes aren't saturated but you're still not achieving full acceleration, then latency is the problem.
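
a rough sketch of that test; the mask is just a placeholder to generate far more work than rockyou, and it assumes sysstat's sar is installed on the broker:

$ ./vclHashcat-plus64.bin --gpu-temp-disable -m 2500 -a 3 xxxxx.hccap ?l?l?l?l?l?l?l?l   # brute-force mask: ~209 billion candidates
$ sar -n DEV 1   # in another terminal on the broker: watch rxkB/s and txkB/s on each gigabit port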

test with different algorithms, different attack modes, etc. you will get different results for various combinations of each.
#5
use -u 4096 for wpa/wpa2 for max speed (in case this is not a vcl issue)
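
e.g., applied to the command from the first post (just a sketch):

$ ./vclHashcat-plus64.bin --gpu-temp-disable -m 2500 -u 4096 xxxxx.hccap rockyou.txt -r rules/best64.rule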