Cloud GPU price (estimated) for NTLM hash type
#1
Hello forum

Since I do not own the hardware required to test even this relatively simple NTLM hash type, I am wondering what the estimated cost of a cloud GPU might be.



- the password consists of a-zA-Z0-9 and the following 3 chars: - _ &
- max password length: 10 chars


-> 1,346,274,334,462,890,625 candidates / 117,400,000,000 H/s ≈ 11.5 million seconds, i.e. about 133 days of 24/7 cracking to go through them all.
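For reference, the arithmetic can be checked with a short script; the 65-character set is a-zA-Z0-9 plus the three specials, and the 117.4 GH/s rate is the figure assumed above:

```python
# Keyspace for passwords of exactly length 10 over a 65-character set:
# a-z (26) + A-Z (26) + 0-9 (10) + "-", "_", "&" (3) = 65 characters.
charset_size = 26 + 26 + 10 + 3          # 65
keyspace = charset_size ** 10            # 1,346,274,334,462,890,625 candidates

rate = 117_400_000_000                   # assumed NTLM speed in hashes/second
seconds = keyspace / rate
days = seconds / 86_400                  # 60 * 60 * 24
print(f"{keyspace:,} candidates -> {days:.0f} days at {rate / 1e9:.1f} GH/s")
```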


Any ideas?


Thank you very much for your feedback!


Joe
#2
(08-11-2022, 04:17 PM)joe123 Wrote: Hello forum

Since I do not own the hardware required to test even this relatively simple NTLM hash type, I am wondering what the estimated cost of a cloud GPU might be.



- the password consists of a-zA-Z0-9 and the following 3 chars: - _ &
- max password length: 10 chars


-> 1,346,274,334,462,890,625 candidates / 117,400,000,000 H/s ≈ 11.5 million seconds, i.e. about 133 days of 24/7 cracking to go through them all.


Any ideas?


Thank you very much for your feedback!


Joe

Let's say $0.40 USD/hour gets you 117.4 GH/s NTLM; then your 133 days works out to $0.40 * 24 hours * 133 days = $1,276.80 USD (adjust accordingly if you get a better or worse deal).

That's the *worst* case, where the password is completely random *and* happens to be the last combination in the entire keyspace. If the password is human-generated, it'll likely fall in minutes.
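Spelled out as a quick sketch (rate and price are the assumed figures above):

```python
# Worst-case rental cost: pay-by-the-hour cloud GPU, full keyspace exhausted.
hourly_rate = 0.40        # assumed USD per hour for ~117.4 GH/s of NTLM
days = 133                # from the keyspace estimate above
cost = hourly_rate * 24 * days
print(f"${cost:,.2f}")    # $1,276.80
```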
#3
This is your third or fourth thread about the same topic.

How or why do you know that the password includes only these 3 special chars?

Brute-forcing is the last resort when trying to crack a hash; any password beyond length 9-10 is way too much to brute-force in full*, regardless of which algorithm you choose (yeah yeah, MD4 maybe).

*Given that only plain ASCII was used. Take German umlauts or French accents like äöü ê into account: you will never crack the hash of a simple word like "öl", even with a mask of ?a?a?a?a?a?a, because such chars don't belong to the standard charset.

I believe you will waste money and time on this topic.
#4
Greetings

A most intriguing thread, I must say — one that appears to have aged gracefully with time. Concerning the capabilities of this most fascinating tool, we — the ever-curious human element — are destined to approach the matter of password recovery with a certain unshaken optimism. I confess, my reading upon this pursuit — the endeavour of breaching what is deemed unbreachable — remains incomplete. Yet, with each page turned, the mystery only deepens.

Allow me to elaborate: regarding hash mode 11300, the contemporaries — notably BtcRecover — labour considerably longer under equivalent conditions, taking no less than six hours to complete an equivalent task. This, tested across four distinct examples, with four unique wallets, I assure you. Admittedly, Hashcat approaches the problem from a different vantage — attacking the hash itself — and with elegance, I might add, particularly through its application of Markov Chains. A most graceful principle.

However, its counterpart, though decidedly more cumbersome — three times slower, to be precise — allows for a slightly more nuanced methodology: by curating several tens of millions of unique passwords and submitting them to analytical scrutiny, I obtained the sixteen most probable characters for each position. Limited though I was to nine relative positions, the results, I must confess, were quite satisfactory, as demonstrated by my btcrecover-tokens-auto.txt example:

%1[sam1#%$cbdptlkr0]%1[aeoiur$#%lhn129s]%1[arnelsitomcb1$#%]%1[aeitnobr1sldABkm]%1[ae1inorstlb209mh]%1[1eainr9o0s2lt364]%1[0ae91in2orstl837]%1[0e1a2ino9r3s8457]%1[0e1a2nir3s9o87t5]%1[0e.a1nirs2ot39l8]%1[e0nas1ir.to23l98]%1[ena.srito12l309d]%1[enasri.to1l20d-c]%1[ensaroit1mcl20dg]%1[ensraotmi12ldg03]%1[nsemratg13dl24o0]

In essence, this was as close as I could come to emulating a probabilistic model akin to Markov chains — outside of Hashcat’s structure.
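The per-position frequency analysis described above (top-N characters at each position, extracted from a large wordlist) can be sketched as follows, with a tiny hypothetical sample in place of the tens of millions of passwords:

```python
from collections import Counter

def top_chars_per_position(words, positions=9, k=16):
    """For each of the first `positions` positions, return the k most
    frequent characters, most common first."""
    counters = [Counter() for _ in range(positions)]
    for w in words:
        for i, ch in enumerate(w[:positions]):
            counters[i][ch] += 1
    return ["".join(ch for ch, _ in c.most_common(k)) for c in counters]

# Tiny hypothetical sample; a real run would use tens of millions of passwords.
sample = ["sunshine1", "password", "summer99", "starwars"]
for i, chars in enumerate(top_chars_per_position(sample, positions=3, k=4), 1):
    print(f"position {i}: {chars}")
```

Each returned string maps directly onto one `%1[...]` token of the btcrecover-tokens file shown above.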

Within Hashcat, custom charsets can, of course, be defined, yet unless I'm mistaken we remain constrained to four of them. One may blend the character occurrences judiciously, yet therein lies the dilemma: should a character dominate position three but not position nine, and should optimization necessitate a limit of sixteen characters per set, we find ourselves forced into either an overly narrow cluster or a generalized, inefficient alphabet.

Might I inquire: am I in error? Or have any amongst you found a more refined approach within Hashcat's parameters?

One further curiosity lingers: is it possible to combine more than two dictionaries in a singular, sophisticated attack? Ideally, employing custom charsets and rule sets, in such a manner:

custom_char + word from dictionary 1 + custom_char + word from dictionary 2 + custom_char + word from dictionary 3, all while applying substitution rules, or capitalization of selective characters via a dedicated rules file.

A complex choreography, I admit — but a most interesting one.
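On the multi-dictionary question: hashcat's combinator attack (-a 1) takes exactly two dictionaries (hashcat-utils also ships a combinator3 for three), so a pattern like the one above is typically generated externally and piped to hashcat on stdin, with rules applied via -r. A minimal sketch of the generation step, using tiny hypothetical wordlists:

```python
import itertools

# Hypothetical stand-ins for the charset and the three dictionaries.
charset = "-_&"
dict1, dict2, dict3 = ["red"], ["fox", "dog"], ["42"]

# Emit: char + word1 + char + word2 + char + word3
candidates = [
    f"{c1}{w1}{c2}{w2}{c3}{w3}"
    for c1, w1, c2, w2, c3, w3 in itertools.product(
        charset, dict1, charset, dict2, charset, dict3)
]
print(len(candidates), "candidates, e.g.", candidates[0])
```

With 3 separator characters and word counts 1, 2, and 1, this yields 3 * 1 * 3 * 2 * 3 * 1 = 54 candidates; real dictionaries multiply out far faster, which is why such patterns are streamed rather than written to disk.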

In conclusion, I find this exploration rather… pleasurable. Should any of you possess insight — and the inclination to share — I remain most appreciative.

My thanks in advance.
>
#5
can you post a TL;DR please? or was it just a gimmick? your AI-generated post is giving me dizziness
why did you hijack this old post from three years ago, instead of creating a new one?

meh
#6
(07-22-2025, 11:31 AM)Banaanhangwagen Wrote: can you post a TL;DR please? or was it just a gimmick? your AI-generated post is giving me dizziness
why did you hijack this old post from three years ago, instead of creating a new one?

meh

I apologize, there must be some confusion - AI?

I published a comparative analysis between Hashcat and Btcrecover, focusing on performance in 11300 hash mode and the potential of Markov chain-based approaches using character occurrence probabilities. I also explored combinations of dictionaries with custom character sets and rule application strategies for password recovery. Perhaps a bit involved, but out of curiosity.

As for reviving this old post, I thought it was relevant to the topic. If I disturbed the peace, I assure you that was not my intention. It was simply an attempt to provide a detailed contribution that had previously sparked interest.

Continue if you'd like. Or scroll down to the next level.

Sincerely,
>
#7
I have no idea what the point of the abstruse writing style is, but what you're describing, or attempting to describe, is somewhat how hashcat's Markov chains already work. Our Markov chains are "per position", meaning that the model and character sets likely do not behave the way you think they do.

See here for a potentially useful explanation of a related flag, -t. https://hashcat.net/wiki/doku.php?id=fre...sholds_for

In the examples, you will notice that if you were to set a threshold value with -t, you could nearly emulate the behavior you are describing, only it _also_ takes into account the order of the preceding characters and not just the frequency in its position, making it even more powerful imo. In English, h often follows t, but that doesn't necessarily make h the most likely second character overall, so ordering each position not only on frequency but also on the preceding position is a significant boon.

>"I obtained the sixteen most probable characters for each position"

Great, that sounds like a slightly more limited version of -t 16.
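The preceding-character point can be shown with a toy sketch (an illustration of the idea only, not hashcat's actual implementation):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for real password data (hypothetical).
words = ["the", "that", "cat", "can", "car", "ant"]

# Plain per-position frequency: rank the 2nd character in isolation.
pos1 = Counter(w[1] for w in words if len(w) > 1)
print("2nd-position ranking:", [c for c, _ in pos1.most_common()])

# Conditioned on the preceding character: 'h' dominates *after* 't',
# even though it is not the most common 2nd character overall.
after = defaultdict(Counter)
for w in words:
    for a, b in zip(w, w[1:]):
        after[a][b] += 1
print("ranking after 't':", [c for c, _ in after["t"].most_common()])
```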
#8
Hi, I'm running Hashcat in hash mode 22000 (WPA-PMKID+EAPOL) and achieving 3,328 kH/s on a single NVIDIA GeForce RTX 5090 GPU. Would I see improved performance on an NVIDIA DGX Spark system or an NVIDIA RTX PRO 6000 Blackwell GPU? If anyone has benchmarks or direct experience comparing these, I'd love to hear about it!
#9
(07-24-2025, 09:01 PM)TxSniper Wrote: Hi, I'm running Hashcat in hash mode 22000 (WPA-PMKID+EAPOL) and achieving 3,328 kH/s on a single NVIDIA GeForce RTX 5090 GPU. Would I see improved performance on an NVIDIA DGX Spark system or an NVIDIA RTX PRO 6000 Blackwell GPU? If anyone has benchmarks or direct experience comparing these, I'd love to hear about it!

Probably best not to hijack an existing thread but I can answer this one quickly enough: The 5090 will be the best card for what you are doing. The other cards will not be faster, at least not consistently so, and will cost multiple times as much for effectively no reason when it comes to hashcat. Stick to the 5090.
#10
Thank you for the reply. Brand new to this forum. I will not hijack threads in the future.