custom workload
#1
Hi.

I want to have the same GPU running multiple instances of hashcat, allocating custom workloads for each instance.

For example:
  • instance 1: 20% of the workload
  • instance 2: 30% of the workload
  • instance 3: 50% of the workload
Is that possible? If so, how?
I suspect the --kernel-accel and --kernel-loops options might be useful for this, but I don't fully understand these options so I need some help.
#2
What is the use case? If you're trying to make a set of attacks more efficient for a given period of time, you'd be better off running the first attack for 20% of your timeframe, the second for 30%, and the third for 50%.
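As a rough sketch of that time split, assuming a hypothetical 10-hour budget (the hash mode, wordlists, rule file, and mask below are placeholders, not anything from this thread), you could cap each attack with --runtime:

    # 20% of 36000 s
    hashcat -m 0 -a 0 --runtime=7200 hashes.txt wordlist1.txt
    # 30% of 36000 s
    hashcat -m 0 -a 0 --runtime=10800 hashes.txt wordlist2.txt -r rules/best64.rule
    # 50% of 36000 s
    hashcat -m 0 -a 3 --runtime=18000 hashes.txt ?a?a?a?a?a?a?a?a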
#3
Thanks for your reply, royce.
Your idea isn't bad, but ideally I'd like to have multiple instances running in parallel; I don't want to fix a timeframe a priori.

I have come up with a metric that gives me the likelihood of an attack mode cracking a hash given some training samples (I don't really want to get into the details of it because it could easily hijack the thread's main topic). My idea is to have one instance of hashcat for each attack mode, and to balance the workload proportionally to each likelihood.
#4
Interesting. I'm not familiar with a way to divide up hashcat's resources in this way.

The only workaround that I know of would be to use --session and --restore for each of the three instances, and rotate among them (say, with a "slice" of one hour for each). This wouldn't bind you to a specific timeframe, and you could adjust the size of the slice depending on your use case.

[Edit: and --runtime to limit the runtime to the desired slice]
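A rough sketch of that rotation, assuming three placeholder attacks and a one-hour base slice split 20/30/50 (720 s, 1080 s, 1800 s). Whether --runtime is re-applied from the saved command line on each --restore is worth verifying on your hashcat version:

    # start each named session once, with its weighted slice
    hashcat --session inst1 --runtime=720  -m 0 -a 0 hashes.txt wordlist1.txt
    hashcat --session inst2 --runtime=1080 -m 0 -a 0 hashes.txt wordlist2.txt
    hashcat --session inst3 --runtime=1800 -m 0 -a 3 hashes.txt ?a?a?a?a?a?a?a
    # on each later rotation, resume every session where it left off
    hashcat --session inst1 --restore
    hashcat --session inst2 --restore
    hashcat --session inst3 --restore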
#5
GPUs do not have complex schedulers like CPUs do. There is no context switching on a GPU. Trying to run multiple compute jobs on a GPU usually results in an ASIC hang, but if it doesn't, it will take much, much longer to run everything than just running the batches serially.

http://stackoverflow.com/questions/66055...ism-in-gpu
#6
I guess I'll have to stick to royce's suggestion then.

Thanks a lot guys!
#7
If your goal is a priority-based setup, it's much easier to handle if you create small work packages using -s and -l, so that you can switch to a different active package within a short timeframe.
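For example (the hash mode, mask, and chunk size below are made up for illustration):

    # first ask hashcat for the attack's total keyspace (counted in base-loop steps)
    hashcat --keyspace -a 3 ?a?a?a?a?a?a?a
    # then carve that total into packages with -s (skip) and -l (limit), e.g. chunks of 1000000
    hashcat -m 0 -a 3 -s 0       -l 1000000 hashes.txt ?a?a?a?a?a?a?a
    hashcat -m 0 -a 3 -s 1000000 -l 1000000 hashes.txt ?a?a?a?a?a?a?a
    # ...continue at further offsets, interleaving packages from the other attacks by priority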