Attack planning: How to avoid doubles?
#1
When I use Hashcat I use various types of attack.
I start with a mask attack for a small keyspace.
If that does not work, I run a hybrid attack.
After that, another hybrid attack using different .rule files or a combination of .rule files.
When that still fails, I may try a PRINCE attack.

How do I avoid duplicate trials or overlap?

I noticed that when I do a mask attack for a small keyspace, a later hybrid attack sometimes repeats candidates I already tried with the mask attack.

What is the best way to keep track of the work already done and exclude it from future attacks?

I also could not find a way to store ALL the hashes generated by Hashcat, including the ones that did not match, for later re-use. How can I store each generated hash?

I'd like to store these hashes in a file for later use with Hashcat against a different hash list, and this way speed up the next attack. But as far as I can see, the .pot file only stores cracked hashes, not all of them.

These questions are more about planning, but I feel my current work method is not the most efficient, and I am looking for ways to improve it.
Reply
#2
First, on storing generated hashes:

This would be the old-school technique known as rainbow tables. First problem: it doesn't work well with salted hashes. Second problem: storage. See https://www.freerainbowtables.com/ to get an idea, and then consider that you would have to store this for every hash type you will ever attack, and newer hashes are longer than MD5 or NTLM. If you want to do it anyway, you can use maskprocessor, feed the output to a tool like md5sum or shasum, and build your dictionaries for unsalted hashes. But don't forget to triple-backup: one crashed HDD or RAID and all the work is gone.
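That maskprocessor-to-md5sum pipeline can be sketched like this (a tiny printf loop stands in for real maskprocessor output, so the resulting hash:plain format is visible):

```shell
# stand-in for maskprocessor output: three one-character candidates
printf '%s\n' a b c |
while IFS= read -r word; do
    # hash each candidate without a trailing newline, emit "md5:plain"
    printf '%s:%s\n' "$(printf %s "$word" | md5sum | cut -d ' ' -f1)" "$word"
done
# -> 0cc175b9c0f1b6a831c399e269772661:a   (and likewise for b and c)
```

With real maskprocessor you would replace the printf with something like `mp64.bin ?l?l?l?l`.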

Second, on planning attacks.
I use two approaches.
First: I made batch files for starting attacks, where I parameterize things like the hash list, wordlist, and mask files with variables, and echo these variables into a file named after the attack target, e.g. hashlist.info. That gives me a basic overview of the work already done. This is good enough for fast hashes and things like length-8 masks, maybe more (runs that take up to 2-3 hours max).
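A minimal sketch of such a batch wrapper (the file names and attack flags are placeholders, not the actual setup described above):

```shell
#!/bin/bash
# parameters of this run -- adjust per attack (all names here are examples)
HASHLIST="target.hashlist"
WORDLIST="rockyou.txt"
RULES="best64.rule"
ATTACK="-a 0 -m 0"

# log what was tried into <hashlist>.info before starting,
# so the file accumulates a history of attacks per target
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) $ATTACK wordlist=$WORDLIST rules=$RULES" >> "$HASHLIST.info"

# run the attack (guarded so the sketch is harmless where hashcat is absent)
if command -v hashcat >/dev/null; then
    hashcat $ATTACK "$HASHLIST" "$WORDLIST" -r "$RULES"
fi
```

A quick `cat target.hashlist.info` then shows every attack already run against that target.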
Second: you can use the hashcat brain to keep track of already-tested candidates (brain client features 1 or 3), but the brain comes at the cost of losing some attack speed (it implies -S), so it depends on your attack and hash mode. There are some things to know when using the brain: never rename or alter the attack target (--remove is forbidden for the same reason), because the brain uses this data to identify the attack target (at least it worked this way when it was introduced). You can test this yourself: take a short hash list, copy it under another name, and start the same fast attack on both files, e.g. --increment ?l?l?l. The copied/renamed file should reject no candidates.
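As an illustration of a brain session (host, port, password, and file names are placeholders; the flags are the brain options introduced in hashcat 5.0):

```shell
# terminal 1: start the brain server (it persists its state to .ldmp files)
hashcat --brain-server --brain-password mysecret

# terminal 2: run the attack with the brain client enabled (-z).
# --brain-client-features 1 = track hashed candidates,
#                         3 = candidates plus attack positions
hashcat -z -m 0 -a 3 target.hashlist ?l?l?l?l?l?l \
    --brain-host 127.0.0.1 --brain-port 6863 --brain-password mysecret \
    --brain-client-features 3
```

Candidates already recorded by the server are rejected on later runs, even across different attack modes.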
Reply
#3
Thanks. I read about a new feature called Hashcat 'brain'.

The thing is, I had already done a bunch of tests before I learned about the brain.
Is there a way to manually load these into the brain database so I do not have to repeat those long runs?

I could not find any documentation about where the information in the Hashcat brain is stored.
Reply
#4
(02-01-2022, 01:26 AM)Snoopy Wrote: you can use maskprocessor feed the output to any tool like md5sum, shasum whatever and build your dicts for

What would be the hashcat command line string for storing this output?
Reply
#5
(02-04-2022, 02:49 AM)hamano_clevage Wrote:
(02-01-2022, 01:26 AM)Snoopy Wrote: you can use maskprocessor feed the output to any tool like md5sum, shasum whatever and build your dicts for

What would be the hashcat command line string for storing this output?

There is no option for this. You can download pregenerated files here:

https://www.freerainbowtables.com/

To generate your own from a text file, you can use this little bash script:

Code:
#!/bin/bash

# read plaintext candidates line by line and append "md5hash:plaintext" pairs
cat "$@" | while IFS= read -r line; do
    MD5_PW=$(printf %s "$line" | md5sum | cut -d ' ' -f1)
    echo "$MD5_PW:$line"
done >> md5_pw.txt

Save this as gen-md5.sh, make it executable with chmod +x gen-md5.sh, and run it like:

./gen-md5.sh input.txt

This generates md5_pw.txt. Sample output with input.txt containing just the lowercase letters a-z, one per line:

Code:
0cc175b9c0f1b6a831c399e269772661:a
92eb5ffee6ae2fec3ad71c777531578f:b
4a8a08f09d37b73795649038408b5f33:c
8277e0910d750195b448797616e091ad:d
e1671797c52e15f763380b45e841ec32:e
8fa14cdd754f91cc6554c9e71929cce7:f
b2f5ff47436671b6e533d8dc3614845d:g
2510c39011c5be704182423e3a695e91:h
865c0c0b4ab0e063e5caa3387c1a8741:i
363b122c528f54df4a0446b6bab05515:j
8ce4b16b22b58894aa86c421e8759df3:k
2db95e8e1a9267b7a1188556b2013b33:l
6f8f57715090da2632453988d9a1501b:m
7b8b965ad4bca0e41ab51de7b31363a1:n
d95679752134a2d9eb61dbd7b91c4bcc:o
83878c91171338902e0fe0fb97a8c47a:p
7694f4a66316e53c8cdd9d9954bd611d:q
4b43b0aee35624cd95b910189b3dc231:r
03c7c0ace395d80182db07ae2c30f034:s
e358efa489f58062f10dd7316b65649e:t
7b774effe4a349c6dd82ad4f4f21d34c:u
9e3669d19b675bd57058fd4664205d2a:v
f1290186a5d0b1ceab27f4e77c0c5d68:w
9dd4e461268c8034f5c8564e155c67a6:x
415290769594460e2e485922904f345d:y
fbade9e36a3f36d3d676c1b808451dd7:z

Then copy this into, or just rename it to, md5.potfile.
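With the renamed potfile in place, a later lookup against a new hash list could look like this (the file names are illustrative):

```shell
# mark any hashes already present in the precomputed potfile as cracked,
# without running an actual attack
hashcat -m 0 --potfile-path md5.potfile --show new_hashes.txt
```

--show prints the hash:plain pairs for every hash in new_hashes.txt that the potfile already covers.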

BUT trust me, you don't want to do this on a bigger scale: generating these lists is much, much slower than just testing the input wordlist directly (at least with fast hashes like MD5).

Second, regarding the brain: no, there is no option to tell the brain what you have already tried. The brain was developed mainly with slow hashes and very specific cracking tasks in mind.
Reply
#6
(02-04-2022, 02:26 PM)Snoopy Wrote: second, regarding the brain: no, there is no option to tell the brain what you have already tried. The brain was developed mainly with slow hashes and very specific cracking tasks in mind.

Awesome! Thanks for the info.

Do you know what documentation has been published about the brain? I'd like to learn more about its options, file locations, etc.
Reply
#7
The brain was introduced with hashcat 5.0.

See the release notes here:

https://hashcat.net/forum/thread-7903.html

Also see the README on GitHub (I think it's quite similar to the release notes):
https://github.com/hashcat/hashcat/blob/...t-brain.md

The brain works on a client/server principle, so you need one instance of hashcat providing the server-side part (it can run on the same machine if needed).
Reply