06-10-2012, 12:59 AM
Hello, first post on these forums! Thanks for the General Talk section; I think it's a great idea, especially for shy people like me.
Have you tried l517? It's for Windows but the author states it should work fine under Wine (performance might take a hit working with huge lists though). It doesn't seem as complete as ULM but it's worth a try.
I've been out of the loop for a couple of years and I'm getting back up to date with the new wordlists and all (will be getting a new computer soon). Back then I always used Unix commands since they're simply the fastest and most reliable. This all comes from Reusable Security, an awesome blog I used to follow. Worth a read even though it's inactive (though you all probably know it).
All super basic stuff taken mostly from reusec and around the web. This is my little help file I use when working with dicts.
Occurrence sorting:
sort file.txt | uniq -c | sort -nr > sorted.txt
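One small addition I find handy: uniq -c leaves the count glued to the front of every line, so if you need a plain wordlist again (still sorted most-frequent-first) you can strip the count column with sed. Just a sketch, file names are made up:

```shell
# occurrence sort, then drop the leading count column to get a plain list back
sort file.txt | uniq -c | sort -nr | sed -E 's/^ *[0-9]+ //' > sorted_words.txt
```

The sed version keeps working even when entries contain spaces, which awk '{print $2}' would mangle.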
Merge files:
aspell dump master > custom-wordlist
cat /usr/share/john/password.lst >> custom-wordlist
cat /usr/share/dict/american-english* >> custom-wordlist
Count words:
wc -l custom-wordlist
Lower case everything:
tr A-Z a-z < custom-wordlist > custom-wordlist_lowercase
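One caveat with that: under some locales the A-Z/a-z ranges don't behave as you'd expect, so the POSIX character-class spelling is the safer form (same effect on plain ASCII):

```shell
# portable lowercasing regardless of the current locale
tr '[:upper:]' '[:lower:]' < custom-wordlist > custom-wordlist_lowercase
```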
Remove duplicates:
sort -u custom-wordlist_lowercase > custom-wordlist_lowercase_nodups
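By the way, sort -u merges the sort and uniq steps into one. If you'd rather dedupe without re-sorting (say the list is already in frequency order), awk can drop the repeats while keeping the original order. Sketch only:

```shell
# keep the first occurrence of each line, preserving the original ordering
awk '!seen[$0]++' custom-wordlist_lowercase > custom-wordlist_lowercase_nodups
```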
Making a dictionary from a text:
cat KJbible/* | tr -cs A-Za-z '\012' | tr A-Z a-z | sort | uniq > dictionary.txt
Remove any line whose length is less than or equal to N characters, and write the result to a file:
awk 'length > N' dictionary.txt > new_dictionary.txt
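The same awk test extends to a range, which is handy when a target only accepts passwords between a minimum and maximum length (the 8 and 16 here are just example values):

```shell
# keep only candidates from 8 to 16 characters inclusive
awk 'length >= 8 && length <= 16' dictionary.txt > new_dictionary.txt
```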
Feel free to laugh, or suggest better/more ways to work with the dict files using Unix commands (I'm sure you guys do great stuff with grep). Have a good day!