Intermittent slowness with Hashtopolis
#1
I have a list of 61,500 hashes for the AuthMe algorithm which I'm trying to recover. I'm using Hashtopolis to distribute the work across multiple machines. The attack command I use is "-a 3 #HL# -3 ?l?d ?l?l?l?l?3?3?3?3 -O -w4", and I have a few 4090s tasked with this project.
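To put the workload in perspective, here's a back-of-the-envelope sketch of the mask keyspace and the rough single-GPU runtime at the speeds I'm seeing (the speeds are just my observed figures, nothing guaranteed):

```python
# Keyspace for the mask ?l?l?l?l?3?3?3?3 with the custom
# charset -3 = ?l?d (26 lowercase letters + 10 digits = 36 chars).
lower = 26           # ?l positions
custom3 = 26 + 10    # -3 ?l?d positions

keyspace = lower**4 * custom3**4
print(keyspace)  # 767544201216 candidates

# Rough single-GPU runtime at the two speeds from my runs:
for mhs in (3188, 1800):
    seconds = keyspace / (mhs * 10**6)
    print(f"{mhs} MH/s -> ~{seconds:.0f} s")
```

So even at full speed a single card needs a few minutes per pass, which is why the slowdown is so noticeable.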

I've noticed that under certain circumstances the hashing speed almost reaches the benchmark, but more often than not it is significantly slower. It makes sense to me that the speed would be somewhat lower, since the GPUs have to check every candidate against all the hashes at once, and that would definitely incur some penalty.

What is strange is that, right now for example, one of the GPUs is hashing at 3188 MH/s, while the other identical ones in identical machines are only doing around 1800 MH/s. I understand that using Hashtopolis may not be particularly supported or endorsed, but it generally scales well with raw brute-force attacks.

I have found a way to work around this behavior by marking the hash as a slow hash, which makes the password candidates be generated on the CPU. While this evens out the speeds across the machines, it is slower overall, so it doesn't make sense to leave it like this.

The machines are running Arch Linux and are up to date. I have not made any changes to system settings that I think would affect the GPUs.

What could be the reason behind this behavior?
#2
Solved! The problem was specific to Hashtopolis: the benchmark caused the chunks to be too small, so not enough work was handed out to the devices. Sometimes the benchmark returned correct results, which is when the speed increased. If the chunk size is manually set large enough, the speed is even greater and almost reaches the theoretical benchmark, regardless of the large number of hashes loaded at the same time.
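For anyone hitting the same thing: Hashtopolis sizes each chunk as roughly benchmark speed × chunk time, so a benchmark that undershoots translates directly into tiny chunks, and the GPU spends most of its wall-clock time on per-chunk startup overhead instead of hashing. A quick sketch of the arithmetic (the 600 s chunk time and the speed numbers are from my setup, not universal defaults):

```python
# Hashtopolis dispatches chunks sized roughly benchmark_speed * chunk_time.
# A benchmark that undershoots the real speed produces tiny chunks, so the
# fixed per-chunk overhead (dispatch, hashcat startup, hash load) dominates.

def chunk_keyspace(bench_mhs, chunk_time_s):
    """Candidates handed out per chunk for a given benchmark result."""
    return int(bench_mhs * 10**6 * chunk_time_s)

chunk_time = 600  # seconds per chunk, as configured on my task

good = chunk_keyspace(3188, chunk_time)  # benchmark close to real speed
bad = chunk_keyspace(180, chunk_time)    # badly undershooting benchmark
print(good, bad, good // bad)            # bad chunks are ~18x smaller
```

Pinning the chunk size manually sidesteps the bad benchmark entirely, which is why the manual setting gets so close to the theoretical numbers.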