Autotuning and -a 6 on single GPU
#1
I have an Nvidia GeForce GTS 450 (slow) and I have a question about autotuning. On IRC I asked atom why -a 6 caused autotuning to drop the -n parameter from 8 to 1, and he answered that it was because the GPU did not have enough to do. Here are comparisons between -a 0 (mask pre-applied to the dictionary) and -a 6. I added the -u 128 parameter since -n was being autotuned.

Code:
cudaHashcat-plus64.exe -m 2500 -a 0 handshake.hccap regular.dict
12404/s

cudaHashcat-plus64.exe -m 2500 -a 0 -u 128 handshake.hccap regular.dict
13239/s

cudaHashcat-plus64.exe -m 2500 -a 6 handshake.hccap nonumbers.dict ?d?d?d
9991/s

cudaHashcat-plus64.exe -m 2500 -a 6 -u 128 handshake.hccap nonumbers.dict ?d?d?d
11254/s

Does this mean atom was correct and the GPU really does not have enough work, or is the limitation intended for multi-GPU systems (which, I have read, is where high -n values become problematic)? GPU utilization also drops with -a 6 according to the built-in hardware monitor.
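
I suppose I could also pin -n at the pre-autotune value and compare that against the autotuned run (assuming a manually supplied -n is honored here rather than retuned), e.g.:

Code:
cudaHashcat-plus64.exe -m 2500 -a 6 -n 8 -u 128 handshake.hccap nonumbers.dict ?d?d?d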

As an aside, I've noticed that -a 6 causes a capacitor squeal on my card, whereas -a 0 or -a 3 do not. Folding@home does this as well, but this is the first time I've noticed it in cudaHashcat.
#2
The autotuner is always active in case you are not giving a GPU enough work. It does not matter which attack mode is used, nor does it matter whether it's a multi-GPU system. Actually, both the attack mode and the number of GPUs influence the workload distribution, which will indirectly trigger the autotuner sooner or later. Just make sure to give it enough work.
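
For example, with a hybrid attack you can enlarge the mask side, and with a straight attack you can amplify the wordlist with rules, so each GPU gets bigger chunks of work. The lines below are only a sketch reusing the files from the first post; rules/best64.rule ships in the rules/ directory, adjust the path to your install:

Code:
REM a larger mask on the right side of the hybrid attack means more candidates per dictionary word
cudaHashcat-plus64.exe -m 2500 -a 6 handshake.hccap nonumbers.dict ?d?d?d?d

REM rules amplify a straight attack, so each base word expands into many candidates
cudaHashcat-plus64.exe -m 2500 -a 0 -r rules/best64.rule handshake.hccap regular.dict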
#3
(03-30-2013, 09:54 AM)atom Wrote: The autotuner is always active in case you are not giving a GPU enough work. It does not matter which attack mode is used, nor does it matter whether it's a multi-GPU system. Actually, both the attack mode and the number of GPUs influence the workload distribution, which will indirectly trigger the autotuner sooner or later. Just make sure to give it enough work.

I understand autotune is there to avoid cases where higher values of -n would in fact decrease cracking performance.
But sometimes autotune itself decreases performance, no?
For example:

Code:
Time.Started...: Fri Apr 12 18:35:11 2013 (3 mins, 57 secs)
Time.Estimated.: Fri Apr 12 18:42:48 2013 (3 mins, 38 secs)
Speed.GPU.#1...:     1258/s
Speed.GPU.#2...:      977/s
Speed.GPU.#3...:      721/s
Speed.GPU.#4...:      576/s
Speed.GPU.#*...:     3532/s
Recovered......: 10/2934 (0.34%) Digests, 10/2934 (0.34%) Salts
Progress.......: 1546984950/2969254944 (52.10%)
Rejected.......: 3686452/1546984950 (0.24%)
HWMon.GPU.#1...:  6% Util, 35c Temp, 85% Fan
HWMon.GPU.#2...:  0% Util, 35c Temp, 85% Fan
HWMon.GPU.#3...:  4% Util, 33c Temp, 85% Fan
HWMon.GPU.#4...:  0% Util, 31c Temp, 85% Fan

I'm using a rule-based attack with gpu-loops and gpu-accel set to their maximums (autotune then brings them down).
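
In command form that looks roughly like the line below; the binary, hash file, dictionary, and rule file names are placeholders, and -n 160 / -u 1024 are just examples of high values since the real maximums depend on the version:

Code:
cudaHashcat-plus64.exe -a 0 -n 160 -u 1024 -r my.rule hashes.txt wordlist.dict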

It's just strange that I can't seem to get all four GPUs running at a relatively high load (>60%).
#4
No, there is no such case. To me it looks like you're using too few rules.
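
For example, a larger rule file, or several -r options stacked so the rule sets multiply, gives each GPU far more candidates per base word. The file names below are placeholders; best64.rule, toggles1.rule and d3ad0ne.rule ship in the rules/ directory of most releases:

Code:
REM a bigger single rule file
cudaHashcat-plus64.exe -a 0 -r rules/d3ad0ne.rule hashes.txt wordlist.dict

REM stacking two -r options combines every rule from the first file with every rule from the second
cudaHashcat-plus64.exe -a 0 -r rules/best64.rule -r rules/toggles1.rule hashes.txt wordlist.dict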