Keyspace List for WPA on Default Routers
(06-27-2020, 08:53 PM)fart-box Wrote:
Quote: A book can't possibly be 37^11 (times 13 characters), that's probably more storage than atoms in the universe!

That's why we make a word-list (or a key-gen).

And I misspoke a couple of posts ago, referring to "books" containing 37^11 passwords. Please replace the word "books" with the word "chapters" in that post.

Books contain "chapters" (one chapter for each leading character), and each chapter contains 37^11 passwords, so the NVG589 book, for instance, contains six chapters of 37^11 passwords each. And don't forget, we have to stack those books until we have 1e19 lines (or passwords), even though we're not going to count every single line because we have to stop somewhere.
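To put those counts in perspective, here is that arithmetic as a quick Python sketch; the 37-character alphabet size, the six leading characters, and the 1e19 target are taken straight from the description above, nothing else is assumed.

Code:
CHARSET_SIZE = 37                    # size of the password alphabet described above
CHAPTER = CHARSET_SIZE ** 11         # passwords in one chapter (one chapter per leading character)
NVG589_BOOK = 6 * CHAPTER            # the NVG589 book has six chapters

print(f"{CHAPTER:.3e} passwords per chapter")                      # ~1.78e17
print(f"{NVG589_BOOK:.3e} passwords in the NVG589 book")           # ~1.07e18
print(f"{1e19 / NVG589_BOOK:.1f} NVG589-sized books to reach 1e19 lines")  # ~9.4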

That's why we find a "seed", and why that seed must be eight to ten digits in length. The seed allows us to skip over all the stuff we don't want (useless passwords consuming massive amounts of space) and keep just the good stuff.
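For anyone who hasn't seen a seeded key-gen before, the general idea looks something like the sketch below. To be clear, the alphabet and the generator here are only placeholders (Python's random module standing in for whatever the gateway firmware actually uses); it shows how one small seed can stand in for billions of candidates, not the real algorithm.

Code:
import random
import string

# Hypothetical 37-character alphabet, purely for illustration; the real
# gateway alphabet is not spelled out in this post.
ALPHABET = string.ascii_lowercase + string.digits + "#"
assert len(ALPHABET) == 37

def keygen(seed, count, length=12):
    """Yield `count` twelve-character candidates derived from one seed."""
    rng = random.Random(seed)        # stand-in for the real generator
    for _ in range(count):
        yield "".join(rng.choice(ALPHABET) for _ in range(length))

# A made-up eight-digit seed; pipe the output into a file or a cracker.
for pw in keygen(seed=12345678, count=10):
    print(pw)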

As I stated back when Royce re-opened this thread, the proper seeds will create word-lists that each contain just over twelve billion passwords. Each password contains twelve characters plus a newline byte, so thirteen bytes times twelve billion makes a word-list around 165 GB in size. If stored as files, you'll need 500 GB of storage space for all three word-lists. (The math using these figures comes to around 145 GB per word-list, but these figures are not exact. The actual size on disk is right around 165 GB per word-list.)
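The size figures are easy to reproduce; this little sketch only uses the round numbers from the paragraph above (just over twelve billion lines, thirteen bytes per line).

Code:
passwords = 12_000_000_000        # "just over twelve billion" per word-list
bytes_per_line = 13               # twelve characters plus the newline byte

total_bytes = passwords * bytes_per_line
print(total_bytes / 1e9)          # ~156 decimal gigabytes
print(total_bytes / 2**30)        # ~145 binary gigabytes, the "145" figure above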

Twelve billion passwords sounds like a lot, but I use one particular computer with a single GPU card to test everything because it tests about 1,000 hashes per second, which makes doing the math pretty simple. Cracking a single four-part handshake using any one of those 165 GB word-lists can be done in under 24 hours on that machine alone. Naturally, that time is substantially reduced when I fire up the other machines, but 1,000 hashes per second makes it easier for you to calculate the speeds your rig will attain.
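If you want to translate that into your own hardware, the estimate is just candidates divided by hash rate, something like the sketch below; the benchmark figure is whatever `hashcat -b -m 22000` reports on your rig, and the rate shown here is only a placeholder.

Code:
def runtime_hours(candidates, hashes_per_second):
    """Rough time to exhaust one word-list at a given benchmark speed."""
    return candidates / hashes_per_second / 3600

# Placeholder rate; substitute your own benchmark number.
print(runtime_hours(12_000_000_000, 150_000))   # ~22 hours at 150 kH/s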

And one more thing... I don't know if you've read this entire thread, or if you've paid attention, but Mr. Fancypants is responsible for all of the original work, which was done in Python. Even though he made some mistakes, Soxrok simply took on the task of converting the Python code to C, mistakes included. In his own words, Mr. Fancypants "just got lucky" in finding a seed. I've always put my faith in a more mathematical solution.

The point being: you haven't simply generated "the wrong" dictionaries. They will work, sometimes, if you "just get lucky". You've got 2,147,483,647 chances to get lucky, or you can opt for the mathematical solution and have twelve billion chances to get it right every time.

Just curious, have you tried compressing those wordlists into a zip or gzip file and then loading/using them with Hashcat 6? I haven't tested this latest Hashcat 6 feature, the ability to use wordlists from a zip or gzip file, but maybe with it the amount of required disk space can be reduced by quite a bit.

Plain text files usually compress well, so this might be worth checking.
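If you do try it, a sketch like the one below would show how much space gzip actually saves (the file names are just placeholders); hashcat 6 can then be pointed at the .gz file directly.

Code:
import gzip
import os
import shutil

src = "wordlist.txt"              # placeholder name for one of the word-lists
dst = src + ".gz"

# Compress the plain-text list; hashcat 6 can read the .gz directly,
# e.g. hashcat -m 22000 -a 0 capture.22000 wordlist.txt.gz
with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

ratio = os.path.getsize(dst) / os.path.getsize(src)
print(f"compressed copy is {ratio:.0%} of the original size")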

