Cloud GPU price(estimated) for NTLM hash type
#4
Greetings

A most intriguing thread, I must say, and one that has aged gracefully. Concerning the capabilities of this most fascinating tool, we, the ever-curious human element, are destined to approach the matter of password recovery with a certain unshaken optimism. I confess my reading upon this pursuit, the endeavour of breaching what is deemed unbreachable, remains incomplete. Yet with each page turned, the mystery only deepens.

Allow me to elaborate. Regarding hash mode 11300 (Bitcoin/Litecoin wallet.dat), the contemporaries, notably BtcRecover, labour considerably longer under equivalent conditions: no less than six hours for the same task, tested across four distinct examples with four unique wallets, I assure you. Admittedly, Hashcat approaches the problem from a different vantage, attacking the hash itself, and with elegance, I might add, particularly through its use of Markov chains to order candidate generation. A most graceful principle.

However, its counterpart, though decidedly more cumbersome (three times slower, to be precise), allows for a slightly more nuanced methodology: by curating several tens of millions of unique passwords and subjecting them to positional frequency analysis, I obtained the sixteen most probable characters for each position. Limited though I was to nine relative positions, the results, I must confess, were quite satisfactory, as demonstrated by my btcrecover-tokens-auto.txt example:

%1[sam1#%$cbdptlkr0]%1[aeoiur$#%lhn129s]%1[arnelsitomcb1$#%]%1[aeitnobr1sldABkm]%1[ae1inorstlb209mh]%1[1eainr9o0s2lt364]%1[0ae91in2orstl837]%1[0e1a2ino9r3s8457]%1[0e1a2nir3s9o87t5]%1[0e.a1nirs2ot39l8]%1[e0nas1ir.to23l98]%1[ena.srito12l309d]%1[enasri.to1l20d-c]%1[ensaroit1mcl20dg]%1[ensraotmi12ldg03]%1[nsemratg13dl24o0]

In essence, this was as close as I could come to emulating a probabilistic model akin to Markov chains — outside of Hashcat’s structure.
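For what it is worth, the positional analysis described above can be sketched as follows, assuming a plain list of passwords as input; `positional_tokens` is merely an illustrative name, not a BtcRecover function:

```python
from collections import Counter

def positional_tokens(passwords, positions=16, top_n=16):
    """Count character frequencies at each position across a password
    corpus and emit one btcrecover-style %1[...] wildcard per position,
    keeping the top_n most frequent characters."""
    counters = [Counter() for _ in range(positions)]
    for pw in passwords:
        for i, ch in enumerate(pw[:positions]):
            counters[i][ch] += 1
    return "".join(
        "%1[" + "".join(ch for ch, _ in c.most_common(top_n)) + "]"
        for c in counters
    )

# Toy corpus; in practice one would feed tens of millions of leaked passwords.
print(positional_tokens(["password1", "sunshine1", "letmein99"],
                        positions=4, top_n=3))
```

Run over a large enough corpus, the output is a tokens line of exactly the shape shown above.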

Within Hashcat, custom charsets can of course be defined, yet unless I am mistaken we remain constrained to four (?1 through ?4, set with -1 through -4). One may blend the character occurrences judiciously, yet therein lies the dilemma: should a character dominate position three but not position nine, and should optimization demand a limit of sixteen characters per set, we find ourselves forced into either an overly narrow cluster or a generalized, inefficient alphabet.
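A minimal sketch of that compromise: merge the sixteen per-position sets into Hashcat's four custom charsets by grouping consecutive positions and taking the union of each group. Both `merge_into_four` and its grouping rule are my own illustrative choices, not anything Hashcat provides.

```python
def merge_into_four(per_position_sets):
    """Merge N per-position charsets into four charsets (?1..?4) by
    assigning consecutive positions to the same group and taking the
    union of each group. Returns the four charsets and a mask string."""
    n = len(per_position_sets)
    groups = [set() for _ in range(4)]
    assignment = [min(i * 4 // n, 3) for i in range(n)]
    for i, charset in enumerate(per_position_sets):
        groups[assignment[i]].update(charset)
    charsets = ["".join(sorted(g)) for g in groups]
    mask = "".join(f"?{g + 1}" for g in assignment)
    return charsets, mask
```

The resulting charsets and mask would then be handed to something like `hashcat -a 3 -m 11300 hash.txt -1 <cs1> -2 <cs2> -3 <cs3> -4 <cs4> '?1?1...?4'` (file name illustrative). Note how each merged charset is the union of its positions' sets, which is precisely the generalized, inefficient alphabet lamented above.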

Might I inquire: am I in error? Or has anyone amongst you found a more refined approach within Hashcat's parameters?

One further curiosity lingers: is it possible to combine more than two dictionaries in a singular, sophisticated attack? Ideally, employing custom charsets and rule sets, in such a manner:

custom_char + word from dictionary 1 + custom_char + word from dictionary 2 + custom_char + word from dictionary 3, all while applying substitution rules, or capitalization of selective characters via a dedicated rules file.
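Since Hashcat's combinator attack (-a 1) natively accepts only two dictionaries (hashcat-utils offers a combinator3 tool for three, though it cannot interleave custom characters), one way to approximate this choreography is to generate candidates externally and pipe them into Hashcat in straight mode, where a rules file still applies to each candidate. A sketch, with placeholder inputs:

```python
import itertools
import sys

def generate(chars1, dict1, chars2, dict2, chars3, dict3):
    """Yield every candidate of the form
    char + word1 + char + word2 + char + word3."""
    for a, w1, b, w2, c, w3 in itertools.product(
        chars1, dict1, chars2, dict2, chars3, dict3
    ):
        yield a + w1 + b + w2 + c + w3

if __name__ == "__main__":
    # Stream to stdout for piping into hashcat, e.g.:
    #   python3 gen3.py | hashcat -a 0 -m 11300 -r my.rule hash.txt
    # (gen3.py, my.rule, and hash.txt are hypothetical names)
    for cand in generate("!$", ["sun", "moon"], "._", ["star"], "-", ["fall"]):
        sys.stdout.write(cand + "\n")
```

The keyspace is the product of all six inputs, so this only remains practical while the charsets and dictionaries stay modest.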

A complex choreography, I admit — but a most interesting one.

In conclusion, I find this exploration rather… pleasurable. Should any of you possess insight — and the inclination to share — I remain most appreciative.

My thanks in advance.
Messages In This Thread
RE: Cloud GPU price(estimated) for NTLM hash type - by GuyWinston48 - 07-22-2025, 10:02 AM