Posts: 5
Threads: 1
Joined: Dec 2022
12-20-2022, 12:09 PM
I know this has been addressed before, but the threads I've seen mentioned that there have been improvements since, so I'm not sure whether --show combined with --username is still as infeasible as it used to be.
I know that the sorting involved with --username is the culprit, but I don't have an alternative.
Other ideas I've seen revolve around running --show without --username and doing the sorting afterwards.
To that:
1. How could I associate the username:hash output with hashcat's hash:password output?
2. I can't even run --show without --username, because hashcat (rightly) complains with a token length exception.
I have 34,236,607 lines in my potfile.
Help would be greatly appreciated. Thank you in advance.
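For context, the combination in question looks like this (the hash mode and file name here are placeholders):
Code:
hashcat -m 1000 --username --show hashes_with_users.txt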
Posts: 889
Threads: 15
Joined: Sep 2017
It may be too late for this, but you could try splitting your potfile according to the hash modes used.
I know some basic ones like MD5 and NTLM have the same length, but I hope you also have enough salted hash modes; that would reduce hashcat's overhead when checking whether a searched hash matches or not.
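If you go that route, a minimal sketch in Python (filenames made up) that splits the potfile by hash length, as a rough stand-in for hash mode, could look like this:
Code:
# split a potfile into one file per hash length; length is only a
# proxy for hash mode (MD5 and NTLM are both 32 hex chars), but
# salted and longer modes end up in their own buckets
out_files = {}
with open("hashcat.potfile", encoding="utf-8", errors="replace") as f:
    for line in f:
        hash_part = line.split(":", 1)[0]
        fh = out_files.get(len(hash_part))
        if fh is None:
            fh = open(f"potfile.len{len(hash_part)}.txt", "w", encoding="utf-8")
            out_files[len(hash_part)] = fh
        fh.write(line)  # line still carries its trailing newline
for fh in out_files.values():
    fh.close()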
Posts: 5
Threads: 1
Joined: Dec 2022
I've gotten further, but still have a problem.
I've used my source as user:hash
I've used my potfile as hash:password
I've sorted both by the hash column. I could theoretically join them, but my problem is that the potfile has fewer lines than the source file, because all the hashes in the potfile are unique. So whilst the source file might contain two lines for two users who use the same password, the potfile will only have one line (because it's only one hash).
This means that currently I can't join the user:hash file and the hash:password file properly :-(
Posts: 889
Threads: 15
Joined: Sep 2017
Sorry, that was a bit hasty. You could certainly use a simple Python script for that; it should be fast enough even at this size (I hope).
How big are your files (I assume something around 2.5 GB each)?
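A minimal sketch of such a script (filenames are placeholders, and it assumes the potfile fits into memory): load the potfile into a dict, then stream the source file through it. Since the lookup happens once per source line, two users sharing the same hash both get the password from the single matching potfile entry, which solves the join problem above.
Code:
# dict-based join of user:hash lines against hash:password lines
pot = {}
with open("potfile.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        line = line.rstrip("\n")
        if ":" not in line:
            continue
        h, password = line.split(":", 1)
        pot[h] = password

with open("source.txt", encoding="utf-8", errors="replace") as src, \
     open("joined.txt", "w", encoding="utf-8") as out:
    for line in src:
        line = line.rstrip("\n")
        # the hash is taken as the last colon-separated field
        h = line.rsplit(":", 1)[-1]
        if h in pot:
            # duplicate hashes in the source are fine: each user line
            # gets the password of the one matching potfile entry
            out.write(f"{line}:{pot[h]}\n")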
Posts: 5
Threads: 1
Joined: Dec 2022
hash_password_sorted.txt is 1.7 GB
user_hash_sorted.txt is 3.4 GB
Posts: 5
Threads: 1
Joined: Dec 2022
How is this problem dealt with normally?
Posts: 119
Threads: 1
Joined: Apr 2022
12-22-2022, 03:39 PM
(This post was last modified: 12-22-2022, 03:44 PM by b8vr.)
(12-21-2022, 05:21 PM)dikembe Wrote: How is this problem dealt with normally?
As Snoopy wrote, you should make some sort of script.
You could possibly use something like this Linux bash script, but Python may be a faster choice:
Code:
#!/bin/bash
# usage: ./concat.sh sourcefile potfile > concat.txt
# sourcefile holds user:hash lines, potfile holds hash:password lines
mapfile -t source < "$1"
mapfile -t potfile < "$2"
for i in "${!source[@]}"
do
    # the hash is the second field of the source line
    hash1=$(echo "${source[$i]}" | cut -d":" -f2)
    [ -z "$hash1" ] && continue
    for x in "${!potfile[@]}"
    do
        hash2=$(echo "${potfile[$x]}" | cut -d":" -f1)
        if [ "$hash1" = "$hash2" ]
        then
            # everything after the first colon is the password
            plain=$(echo "${potfile[$x]}" | cut -d":" -f2-)
            printf '%s:%s\n' "${source[$i]}" "$plain"
            break
        fi
    done
done
The script would be executed like this:
Quote:./concat.sh sourcefile potfile > concat.txt
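One note on the design: the nested loops compare every source line against every potfile line, which for files of this size means an astronomical number of comparisons. Since both files are already sorted by the hash column, a single-pass merge join needs just one sequential read of each file and no memory to speak of. A sketch in Python, assuming both files were sorted with the same byte order (e.g. LC_ALL=C sort) and matching hash case, and that the hash is the last colon-separated field of the source lines:
Code:
# single-pass merge join of two files sorted by their hash column
def merge_join(source_path, pot_path, out_path):
    with open(source_path, encoding="utf-8", errors="replace") as src, \
         open(pot_path, encoding="utf-8", errors="replace") as pot, \
         open(out_path, "w", encoding="utf-8") as out:
        pot_line = pot.readline()
        pot_hash, _, password = pot_line.rstrip("\n").partition(":")
        for line in src:
            line = line.rstrip("\n")
            h = line.rsplit(":", 1)[-1]
            # advance the potfile until its hash catches up
            while pot_line and pot_hash < h:
                pot_line = pot.readline()
                pot_hash, _, password = pot_line.rstrip("\n").partition(":")
            # do not advance past a match: several users may share a hash
            if pot_line and pot_hash == h:
                out.write(f"{line}:{password}\n")

merge_join("user_hash_sorted.txt", "hash_password_sorted.txt", "concat.txt")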