Posts: 2,267
Threads: 16
Joined: Feb 2013
05-11-2020, 11:24 AM
(This post was last modified: 05-11-2020, 12:18 PM by philsmd.)
wow, great testing/analysis again.
This just shows how important it is to double-check the changes...
this is actually an independent second problem now...
the qsort () call here: https://github.com/hashcat/hashcat/blob/...ain.c#L129
should have compared the underlying "char array"/strings, but it was instead comparing the pointers themselves (the layer above).
I think a fix like this would work:
Code: diff --git a/src/brain.c b/src/brain.c
index f659ea7d..11f21262 100644
--- a/src/brain.c
+++ b/src/brain.c
@@ -12,6 +12,7 @@
 #include "convert.h"
 #include "shared.h"
 #include "hashes.h"
+#include "folder.h"
 #include "brain.h"
 
 static bool keep_running = true;
@@ -126,7 +127,7 @@ u32 brain_compute_session (hashcat_ctx_t *hashcat_ctx)
   hcfree (out_buf);
 
-  qsort (out_bufs, out_idx, sizeof (char *), sort_by_string);
+  qsort (out_bufs, out_idx, sizeof (char *), sort_by_stringptr);
 
   for (int i = 0; i < out_idx; i++)
   {
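For illustration, here is a standalone sketch of the difference between the two comparator styles (simplified stand-ins, not hashcat's actual sort_by_string/sort_by_stringptr implementations): qsort () hands the comparator pointers to the array elements, so for a char * array the arguments are really char **, and the comparator has to dereference once before calling strcmp ().
Code: #include <stdio.h>
#include <stdlib.h>
#include <string.h>

// buggy style: treats the element address itself as the string start,
// so it ends up comparing the raw pointer bytes instead of the pointed-to text
static int cmp_string (const void *p1, const void *p2)
{
  return strcmp ((const char *) p1, (const char *) p2);
}

// correct style: p1/p2 point at the char * elements, so dereference once
static int cmp_stringptr (const void *p1, const void *p2)
{
  return strcmp (*(const char * const *) p1, *(const char * const *) p2);
}

int main (void)
{
  const char *out_bufs[] = { "ccc", "aaa", "bbb" };

  (void) cmp_string; // shown only for contrast, never use it on a char * array

  qsort (out_bufs, 3, sizeof (char *), cmp_stringptr);

  for (int i = 0; i < 3; i++) printf ("%s\n", out_bufs[i]); // aaa bbb ccc

  return 0;
}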
I'm going to suggest this improvement to atom (we will probably get rid of the "folder" include, because it doesn't make much sense in the brain code; we will probably try to refactor that part)... but the "sort_by_stringptr" fix should also resolve this "new" problem. Thanks again
We have now fixed this problem with this commit: https://github.com/hashcat/hashcat/commi...fb94aba64f
A beta version is up for testing: https://hashcat.net/beta/. Thanks a lot
And do not hesitate to ask questions or report further findings/problems... these are exactly the kinds of discoveries we need to make hashcat better and more stable.
Posts: 877
Threads: 15
Joined: Sep 2017
hashcat (v5.1.0-1797-gf9e4dc0d)
Code: Session..........: hashcat (Brain Session/Attack:0x1cb940de/0xc4361992)
Hash.Target......: .\hash-1.txt
Guess.Base.......: File (.\dict-1.txt)
Guess.Queue......: 1/1 (100.00%)
Recovered........: 1/5 (20.00%) Digests
Progress.........: 10/10 (100.00%)
Rejected.........: 0/10 (0.00%)
Code: Session..........: hashcat (Brain Session/Attack:0x1cb940de/0x7c78203e)
Hash.Target......: .\hash-1.txt
Guess.Base.......: File (.\dict-2.txt)
Recovered........: 2/5 (40.00%) Digests
Progress.........: 11/11 (100.00%)
Rejected.........: 10/11 (90.91%)
Code: Session..........: hashcat (Brain Session/Attack:0x097e96b4/0x4f82d865)
Hash.Target......: .\hash-2.txt
Guess.Base.......: File (.\dict-3.txt)
Recovered........: 5/10 (50.00%) Digests
Progress.........: 12/12 (100.00%)
Rejected.........: 0/12 (0.00%)
looks like it works as intended now
Just for my better understanding: is client-feature=2 (sending attack positions) only applicable to brute-force/mask attacks, or does it also work with dictionaries? I know hashcat uses "chunks" of work, distributing it across all available OpenCL devices.
Posts: 2,267
Threads: 16
Joined: Feb 2013
yeah, it will work for all attack types supported by --slow-candidates (-S): -a 0, -a 1 and -a 3.
The most important thing, though, is to get a clear picture of when to use brain and when to use a distributed overlay like hashtopolis... these approaches can't be compared and of course are not the same.
Distributed hash cracking is normally performed with third-party tools like hashtopolis, while brain is worth considering when you want to keep track of all the work performed on a certain hash (or hash list) that uses a very slow hashing algorithm... They are very different and you shouldn't confuse one with the other.
By the way: hashtopolis allows using brain, as far as I know (and here it gets a little bit complicated to make the distinction, but it also proves that distributing work is a layer above, or let's say a different approach).
Posts: 877
Threads: 15
Joined: Sep 2017
one last question
is it possible to "see" what file of brain
brain.38b324b0.ldmp
brain.d8e614ff.admp
is related to which kind of hash set? As I see it, .ldmp is related to the session and .admp to the different attacks.
With the new updated versions and their different attack IDs, there could be old files left over that will never be used again (trash)
Posts: 2,267
Threads: 16
Joined: Feb 2013
the naming convention is just:
- brain.[SESSION_ID].ldmp for hashed passwords
- brain.[ATTACK ID].admp for attack positions
that means you could just start a quick cracking job again and see whether the IDs are related to that list or not.
if the ID is different, it most likely is a "wrong ID", i.e. it belongs to another hash list/attack.
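The IDs in the filenames are the same 32-bit values shown in the "Brain Session/Attack:0x.../0x..." status line, rendered as 8 hex digits. A minimal sketch of the mapping (not hashcat's actual code; the IDs are just examples taken from the status output above):
Code: #include <stdio.h>

int main (void)
{
  // example IDs as printed in a "Brain Session/Attack:0x.../0x..." status line
  const unsigned int session_id = 0x1cb940de;
  const unsigned int attack_id  = 0xc4361992;

  char ldmp[32];
  char admp[32];

  snprintf (ldmp, sizeof (ldmp), "brain.%08x.ldmp", session_id);
  snprintf (admp, sizeof (admp), "brain.%08x.admp", attack_id);

  printf ("%s\n%s\n", ldmp, admp); // brain.1cb940de.ldmp / brain.c4361992.admp

  return 0;
}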
Posts: 877
Threads: 15
Joined: Sep 2017
05-11-2020, 06:20 PM
(This post was last modified: 05-11-2020, 06:28 PM by Snoopy.)
UPDATE:
a new error occurs with more complicated hash sets; all runs use the same settings (except --session)
same hash target (stats below)
Hashes: 79405 digests; 79405 unique digests, 78789 unique salts
brute force for testing purposes: length one with mask ?a
every run ends up with its own session-id and attack-id
Session..........: test-bf (Brain Session/Attack:0x59aa9845/0x9d7dad6d)
Session..........: hashcat (Brain Session/Attack:0x1b4ce4de/0xfff95e8d)
Session..........: hashcat (Brain Session/Attack:0xc6bf0189/0x7f416e07)
Session..........: hashcat (Brain Session/Attack:0x7240377d/0x4112219a)
Session..........: test-bf (Brain Session/Attack:0x7009c96f/0x787bc4dd)
Session..........: test-bf (Brain Session/Attack:0xff66cad7/0xf3a48091)
Session..........: test-bf (Brain Session/Attack:0x4f0fa7d0/0xff48a0d0)
the hash target has some malformed inputs (Token length exception), but that should not be the problem, since hashcat excludes them automatically, right?
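For reference, the runs were of roughly this shape (a reconstruction, not the exact command line; the file name is a placeholder, 127.0.0.1/<pw> assume a local brain server, and -m 2811 is only confirmed further down in the thread):
Code: hashcat -z --brain-host 127.0.0.1 --brain-password <pw> --session test-bf -m 2811 -a 3 hashes.txt ?a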
Posts: 2,267
Threads: 16
Joined: Feb 2013
could you please provide a minimal example (minimal number of hashes etc.)?
I don't think that --session is relevant at all, but please test this too.
Posts: 877
Threads: 15
Joined: Sep 2017
05-11-2020, 07:16 PM
(This post was last modified: 05-11-2020, 07:53 PM by Snoopy.)
I think I managed to get a minimal set; I will send you a link via PM. The mode is 2811, and the file includes usernames (no real persons).
While testing to get this set, I noticed something, but I don't know if it is related (I'm in a hurry):
the mask ?a should result in 95 possible passwords without the salt, but when attacking I get
Recovered........: 0/299 (0.00%) Digests, 0/299 (0.00%) Salts
Progress.........: 28405/28405 (100.00%)
Rejected.........: 0/28405 (0.00%)
28405 is 95 passwords * 299 hashes/salts (is this correct?)
UPDATE
when the input is very short (20 hashes, none malformed), the session ID and attack ID are calculated *correctly* and stay the same on each run; with the longer list of 299 entries containing malformed ones, each run results in a different session ID and attack ID.
sorry, but I'm in a hurry, so I cannot test further whether it is the input length (hash set) or the malformed entries.
cya tomorrow
Posts: 2,267
Threads: 16
Joined: Feb 2013
okay, I have now discovered what the new bug is (I'm pretty confident):
we can see here: https://github.com/hashcat/hashcat/blob/...ain.c#L129
that our list of hashes (out_bufs[]) is sorted, but we also stored the lengths of those buffers in out_lens[]...
but since your hashes have different lengths (some shorter and some longer salts), the relationship between out_bufs[] (which gets sorted) and out_lens[] (which currently is not reordered with the same sorting mechanism) goes out of sync, and therefore we get random results (the checksum runs over buffers that do NOT contain valid data when the string was shorter than the recorded length).
This is a new bug; the relationship between these two buffers will need to be guaranteed and kept in sync somehow. thx
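One common way to keep such parallel arrays in sync (a sketch of the general technique, not necessarily the fix that will land in hashcat) is to sort a single array of struct entries, so buffer and length always travel together:
Code: #include <stdio.h>
#include <stdlib.h>
#include <string.h>

// pair each hash buffer with its length so qsort () moves them together,
// instead of sorting out_bufs[] while out_lens[] keeps the old order
typedef struct
{
  const char *buf;
  int         len;
} entry_t;

static int sort_by_entry (const void *p1, const void *p2)
{
  const entry_t *e1 = (const entry_t *) p1;
  const entry_t *e2 = (const entry_t *) p2;

  return strcmp (e1->buf, e2->buf);
}

int main (void)
{
  entry_t entries[] =
  {
    { "deadbeef:longer_salt_value", 26 },
    { "c0ffee:salt",                11 },
  };

  qsort (entries, 2, sizeof (entry_t), sort_by_entry);

  // buf and len stay paired, so a later checksum over entries[i].len bytes
  // always covers exactly the matching string
  for (int i = 0; i < 2; i++) printf ("%s (%d)\n", entries[i].buf, entries[i].len);

  return 0;
}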
Posts: 877
Threads: 15
Joined: Sep 2017
heyho
sorry, but I was very busy today, so I could not do any testing, just a quick look:
some of the lines seem to be really malformed (wrong/mixed ASCII/UTF-8, special UTF characters converted to placeholder encodings, and so on); longer salts seem to be escaped with " (dunno, maybe some special characters in them?)
do you think there will be a fast update for this, like merging the two arrays or using something like a dictionary/hash map (I don't know what it is called in C/C++)?