After playing around with different vast.ai GPUs for a while, I don't think it is that easy anymore.
I would be glad if someone could help me answer these questions:
- I have one instance with 4x RTX 3080, which is shown as 117.0 TFLOPS and runs my bcrypt hashes at 840 H/s per GPU = 3360 H/s in total.
Another instance with 8x RTX 2080 Ti is also shown as 117.0 TFLOPS, but there each GPU only reaches 343 H/s = 2744 H/s in total.
For me it looks like it is not just a matter of raw "power" but also of some other, more specific criteria. Does someone know more about this?
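A quick sanity check of those numbers. Both instances advertise the same FP32 TFLOPS, but bcrypt is built around Blowfish key setup with pseudo-random accesses into 4 KB S-boxes, so it is bound by on-chip memory (cache/shared memory) behavior rather than raw floating-point throughput. The per-GPU H/s values below are the ones from my tests:

```python
# Compare the two vast.ai instances described above.
# Per-GPU H/s figures are measured; the point is that equal
# FP32 TFLOPS does not imply equal bcrypt (-m 3200) speed.

rtx3080_total = 4 * 840    # 4x RTX 3080 at 840 H/s each
rtx2080ti_total = 8 * 343  # 8x RTX 2080 Ti at 343 H/s each

print(rtx3080_total)    # 3360 H/s in total
print(rtx2080ti_total)  # 2744 H/s in total

# Both instances are listed at ~117.0 TFLOPS, so the
# "H/s per TFLOPS" efficiency clearly differs:
print(round(rtx3080_total / 117.0, 1))
print(round(rtx2080ti_total / 117.0, 1))
```

So a TFLOPS number alone is not enough to predict bcrypt throughput between GPU generations.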
- I ran an instance with an RTX 4090, which was very fast (7k H/s) for a single hash and a big wordlist.
But the same instance/hardware only reached 2k H/s when I used it with 90 hashes and the same wordlist.
Why doesn't hashcat just go through the hashes one by one, if that would be almost 3 times faster judging by the H/s?
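To make that "almost 3 times faster" reasoning explicit: which strategy wins depends on what the reported H/s actually counts in the multi-hash run. The 7000 and 2000 H/s figures are from my runs; the wordlist size is a made-up example, and both interpretations of the 2000 H/s are assumptions, not confirmed hashcat behavior:

```python
# Back-of-envelope: 90 separate single-hash runs vs. one combined run.
# Measured speeds: 7000 H/s with 1 hash, 2000 H/s with 90 hashes.
# Wordlist size is an example value (rockyou.txt line count).

wordlist = 14_344_385
hashes = 90

# 90 sequential runs, each at the single-hash speed:
sequential_s = hashes * wordlist / 7000

# One combined run, under two possible readings of "2000 H/s":
combined_if_candidates_s = wordlist / 2000           # counts candidates/s
combined_if_per_salt_s = wordlist * hashes / 2000    # counts candidate*salt computations/s

print(round(sequential_s / 3600, 1))             # hours, one by one
print(round(combined_if_candidates_s / 3600, 1)) # hours, combined (reading 1)
print(round(combined_if_per_salt_s / 3600, 1))   # hours, combined (reading 2)
```

Under the first reading the combined run is actually much faster overall (every candidate is checked against all 90 salts at once); under the second, going one by one really would win. Knowing which reading is correct would answer my question.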
- And as far as I know, bcrypt hashing has some specific hardware requirements (it needs a lot of VRAM?). So I would ask the same question again: is it better to use one powerful GPU with a lot of VRAM, or is it more efficient to use a lot of less powerful GPUs because they have the better memory-to-compute ratio?
- Does anyone have a similar scenario or knowledge and can recommend hardware for this use case (a lot of hashes, wordlist attacks, and work factors of 10 and 12)?
What about the datacenter GPUs A100 and H100: are they better/more efficient than RTX GPUs?
And for the RTX cards: are the higher generations more efficient than a bunch of cards from lower generations?
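One thing that is fixed regardless of hardware: the bcrypt work factor is the base-2 exponent of the iteration count, so going from 10 to 12 quadruples the work per hash. A small sketch (the 7000 H/s baseline below is a hypothetical example speed, not a measured cost-10 figure):

```python
# bcrypt cost c means 2**c Blowfish key-setup rounds,
# so every +1 on the work factor halves the cracking speed.

def relative_work(cost, baseline=10):
    """Work per hash at `cost`, relative to a baseline cost."""
    return 2 ** cost / 2 ** baseline

print(relative_work(12))  # cost 12 = 4x the work of cost 10
print(relative_work(10))  # 1.0

# Example scaling: a card doing a hypothetical 7000 H/s at cost 10
# should manage roughly 7000 / 4 = 1750 H/s at cost 12.
print(7000 / relative_work(12))
```

So whatever hardware gets recommended, the cost-12 hashes will crack at about a quarter of the cost-10 rate.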
Thanks for sharing your experience and best regards