hashcat Forum
A confusion regarding SL3 - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Deprecated; Ancient Versions (https://hashcat.net/forum/forum-46.html)
+--- Forum: Very old oclHashcat-lite Support (https://hashcat.net/forum/forum-22.html)
+--- Thread: A confusion regarding SL3 (/thread-464.html)

A confusion regarding SL3 - .::Rizwan::. - 09-05-2011

Today I calculated a code:
[s]tatus [p]ause [r]esume [q]uit =>
Status.......: Cracked
Hash.Type....: SL3
Time.Running.: 2 mins, 23 secs
Time.Left....: 22 mins, 3 secs
Plain.Text...: ********0108050609090300090609
Plain.Length.: 15
Speed........: 8041.6M/s
Progress.....: 989353176698880/1000000000000000 (98.94%)
HW.Monitor.#1: 99% GPU, 80c Temp
HW.Monitor.#2: 94% GPU, 75c Temp
HW.Monitor.#3: 94% GPU, 75c Temp
HW.Monitor.#4: 94% GPU, 75c Temp

Started: Mon Sep 05 15:15:44 2011
Stopped: Mon Sep 05 15:18:10 2011

the code calculated is "xxxxxxxxxx02080105060200090609"

According to the code, it should have been found at 96.90%, but as you can see in the log it was found at 98.94%.

Any explanation?

RE: A confusion regarding SL3 - Rolf - 09-05-2011

It's OK; it happens because of the massively parallel architecture of GPUs.

RE: A confusion regarding SL3 - .::Rizwan::. - 09-05-2011

Yes, but it doesn't happen every time, just rarely.

RE: A confusion regarding SL3 - atom - 09-05-2011

Yeah, it is correct. Massively parallel architectures require working with huge blocks of plaintext candidates on every kernel launch. It can (but need not) happen that the cracked plaintext was near the end of such a block; a fresh block is then started before the parallel thread that handles the success message has run. There are no locking mutexes to control this thread, since in single-hash cracking a hash can only crack once, so none is required.
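The effect described above can be sketched in a few lines. This is an illustrative model only, not oclHashcat-lite's actual code: the block size, function name, and percentages below are made up to show how per-block progress accounting can overshoot the exact position of the cracked candidate.

```python
# Sketch (hypothetical, illustrative only): progress is tracked per kernel
# launch, in whole blocks of candidates, not per individual candidate.

KEYSPACE = 10**15          # total candidates (15-digit plaintext)
BLOCK = 8 * 10**12         # candidates per kernel launch (made-up size)

def reported_progress(target_index):
    """Progress shown when the hash cracks: the scan only stops at a
    block boundary, so the counter may overshoot the target's position."""
    done = 0
    while done < KEYSPACE:
        done += BLOCK              # a whole block is dispatched at once
        if target_index < done:    # target was inside this block
            return done            # counter already includes the full block
    return done

target = int(0.9690 * KEYSPACE)    # candidate at 96.90% of the keyspace
shown = reported_progress(target)
print(f"exact position: {target / KEYSPACE:.2%}")  # 96.90%
print(f"reported:       {shown / KEYSPACE:.2%}")   # 97.60% -- overshoots
```

With large blocks and asynchronous result handling, the gap between the candidate's exact position and the reported progress can grow further, which matches the 96.90% vs. 98.94% discrepancy in the log.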

RE: A confusion regarding SL3 - .::Rizwan::. - 09-05-2011

Understood, but it happens every time I crack this particular hash.
I've tried many times just to check, and the same thing happens every time.