Maybe you are confusing the hashcat brain with a hashcat wrapper like hashtopolis that tries to distribute work across multiple rigs/systems?
The brain and distributed cracking wrappers/systems have very different goals and use cases. I'm not sure why, but several people seem to confuse one with the other. They are not the same.
The hashcat brain should (in general) only be used with slow hashes and is mainly meant to avoid duplicated work, while systems like hashtopolis simply take, for instance, a mask and split the work across all the systems you have configured.
This means the brain makes sense, for instance, when you run very different attacks, wordlists, rules etc. against a very slow hash like bcrypt, where testing the same password candidate multiple times is a real problem (because the duplicated work hurts a lot).
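The dedup idea can be sketched in a few lines. This is only an illustration of the concept, not the brain's actual protocol or storage format: each candidate is reduced to a short hash, and a candidate whose hash the server has already seen is skipped, even if it comes from a completely different attack.

```python
# Conceptual sketch of brain-style candidate deduplication.
# NOT the real hashcat brain protocol -- just the core idea:
# remember a compact hash of every tested candidate and skip repeats.
import hashlib

seen = set()

def should_test(candidate: str) -> bool:
    """Return True only if this candidate has not been attempted before."""
    digest = hashlib.blake2b(candidate.encode(), digest_size=8).digest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

# First attack: a wordlist
first = [c for c in ["password", "letmein", "password"] if should_test(c)]
# Second attack: a different wordlist that overlaps the first
second = [c for c in ["password", "hunter2"] if should_test(c)]
```

With a slow hash like bcrypt, every skipped duplicate saves real compute time, which is why the per-candidate bookkeeping overhead is worth it there and usually not for fast hashes.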
BTW: for the brain we introduced a new command line parameter, "--brain-server-timer 0", in the latest development version (currently only in beta: https://hashcat.net/beta/) that avoids writing to disk too frequently and therefore reduces disk I/O. With the timer set to 0, the state is written only at the very end; this is of course quite risky, because you have no backups in between.
The problem with bitcoin hashes is that they have very different cost factors/iteration counts and can therefore be quite fast too, maybe too fast for a good rig together with the hashcat brain if you do not use *at least* --brain-client-features 2.
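Putting those two flags together might look like this. The hash mode, file names, host and port below are placeholders I made up for illustration, not taken from your setup:

```shell
# Hypothetical example -- file names and host/port are placeholders.

# On the brain server (the --brain-server-timer parameter is only in the
# current development/beta version; 0 = write state to disk only at the end):
hashcat --brain-server --brain-server-timer 0

# On each client: for a potentially fast hash like a bitcoin wallet
# (-m 11300), use --brain-client-features 2 so the client tracks attack
# positions instead of sending a lookup for every single candidate:
hashcat -m 11300 -a 0 wallet.hash wordlist.txt \
    --brain-client --brain-client-features 2 \
    --brain-host 127.0.0.1 --brain-port 13743
```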
I would suggest you do some research on hashcat overlays/wrappers like hashtopolis and see if that is actually what you are trying to do (i.e. distribute work, instead of avoiding duplicated work across different attacks), especially if you only run a single mask etc.