05-13-2014, 03:41 AM
Thanks to protocol for posting this question, as I was experiencing this issue as well.
(I have just one 7970 though)
my benchmark
Hashtype: MD5
Workload: 1024 loops, 256 accel
Speed.GPU.#1.: 8214.9 MH/s
In my date-based example the mask is 199?d?a?a?a?a?a?a.
So, if I am understanding it correctly:
the modifier is the portion that varies (?d?a?a?a?a?a?a)
and the base is the fixed text (199).
Were my speed issues due to the much smaller base?
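As a quick sanity check on the keyspace (just arithmetic on the mask's charset sizes, written out as a small Python illustration):

# Keyspace for the mask 199?d?a?a?a?a?a?a:
# the fixed prefix "199" contributes a factor of 1,
# ?d has 10 candidates, and each ?a has 95.
keyspace = 1 * 10 * 95 ** 6
print(keyspace)  # 7350918906250, matching the Progress total in the status output below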
1 - Further question: is there a way to speed up performance with large hashsets? They also slow MD5 performance down to similarly low levels.
16K hashset
Session.Name...: oclHashcat
Status.........: Running
Input.Mode.....: Mask (199?d?a?a?a?a?a?a) [10]
Hash.Target....: File (smallhashset)
Hash.Type......: MD5
Speed.GPU.#1...: 708.4 MH/s
Recovered......: 0/16485 (0.00%) Digests, 0/1 (0.00%) Salts
Progress.......: 6406799360/7350918906250 (0.09%)
HWMon.GPU.#1...: 75% Util, 60c Temp, 51% Fan
38 million hashset
Session.Name...: oclHashcat
Status.........: Running
Input.Mode.....: Mask (199?d?a?a?a?a?a?a) [10]
Hash.Target....: File (largedata)
Hash.Type......: MD5
Speed.GPU.#1...: 393.2 MH/s
Recovered......: 74/38970224 (0.00%) Digests, 0/1 (0.00%) Salts
HWMon.GPU.#1...: 85% Util, 58c Temp, 49% Fan
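To put the slowdown in numbers (a small illustrative snippet; the speeds are just the Speed.GPU.#1 values copied from above):

# Slowdown relative to the single-hash brute-force benchmark.
benchmark_mhs = 8214.9  # MD5 benchmark
small_set_mhs = 708.4   # 16K hashset
large_set_mhs = 393.2   # 38 million hashset
print(benchmark_mhs / small_set_mhs)  # ~11.6x slower with 16K hashes
print(benchmark_mhs / large_set_mhs)  # ~20.9x slower with 38M hashes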
2 - Can we skip the remove-duplicate-hashes stage in 1.20? All my hashsets are already pre-sorted and uniqued, and the extra pass hurts performance in my loop of 50+ different large wordlist tests. I didn't notice any option for this.
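For reference, this is roughly what I mean by pre-sorted and uniqued (a minimal Python sketch; hashes.txt and hashes_uniq.txt are placeholder filenames):

# Sort and de-duplicate a hash list before feeding it to oclHashcat.
with open("hashes.txt") as f:
    hashes = sorted(set(line.strip() for line in f if line.strip()))
with open("hashes_uniq.txt", "w") as f:
    f.write("\n".join(hashes) + "\n")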
thank you