Safe power draw
#11
Nope, not bad for the cards at all. It's simply a matter of efficiency. 80Plus Platinum is pretty good, and at that load your PSU is probably around 90% efficient (not sure where you got 86% from). 1200W from the wall * 0.9 = 1080W delivered to the components, which is 120W below what your PSU is rated to deliver. This is perfectly fine.

If you want to save on your electric bill, you could go with a larger, 80Plus Titanium PSU (like the EVGA SuperNOVA 1600 T2), which would likely draw only around 1148W from the wall under the same load (saving you roughly 52W, i.e. about 52Wh for every hour of runtime).

This may also be important if there are other things on the circuit you are plugged into. I'm assuming this is a standard 20A circuit, and you need to stay below 80% load on the circuit for safety. Further assuming you're on 120V power, this means you should draw no more than 20A * 120V * 0.8 = 1920W total on this circuit. If the computer alone is drawing 1200W, that leaves you with only 720W for other things. So like, don't turn on the microwave ;) A larger, more efficient PSU would give you a bit more headroom on the circuit for other devices.
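The math above can be sketched quickly. A minimal script, assuming the typical 80Plus efficiency figures mentioned (90% Platinum, ~94% Titanium) and a 20A/120V circuit:

```python
# Sketch of the PSU / circuit math from this post. The efficiency figures
# are assumptions (typical 80Plus Platinum vs. Titanium at this load level).

def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """Watts pulled from the wall to deliver dc_load_w to the components."""
    return dc_load_w / efficiency

DC_LOAD = 1080            # watts actually delivered to the hardware
CIRCUIT = 20 * 120 * 0.8  # 20A @ 120V, kept at 80% load -> 1920W budget

platinum = wall_draw(DC_LOAD, 0.90)  # ~1200W from the wall
titanium = wall_draw(DC_LOAD, 0.94)  # ~1149W from the wall

print(f"Platinum wall draw: {platinum:.0f}W, circuit headroom: {CIRCUIT - platinum:.0f}W")
print(f"Titanium wall draw: {titanium:.0f}W, circuit headroom: {CIRCUIT - titanium:.0f}W")
print(f"Savings at the wall: {platinum - titanium:.0f}W")
```

Running it reproduces the 1200W wall draw and the 720W of remaining circuit budget from above.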
#12
(10-07-2018, 07:35 PM)undeath Wrote:
(10-07-2018, 06:21 PM)elidell Wrote: Why would I need any more than 8GB of RAM on the motherboard? It's literally just running hashcat on Ubuntu in headless mode.

AFAIK the Nvidia drivers have some weird behaviour where they map all the VRAM used by hashcat into your system RAM.

BTW, the true power draw test is probably running -m1000 -a3 ?a?a?a?a?a?a?a?a?a?a -w4 -O

So, come to find out that due to the limitations of the motherboard I'm using, I can only put up to 8GB of RAM in this rig. Obviously the board is designed for crypto mining, not hashcat, so for mining that limitation would be fine. Before I ditch this board, would adding a 56GB swap file help this situation, or does it have to be actual RAM? I could try it, but I'm away from home at the moment.
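For what it's worth, creating a swap file on Ubuntu is quick to try. A hedged sketch (the 56G size comes from the post above; the /swapfile path is just a common convention, and all of this needs root):

```shell
# Create a 56GB swap file (adjust size/path as needed).
sudo fallocate -l 56G /swapfile
sudo chmod 600 /swapfile     # swap files must not be readable by other users
sudo mkswap /swapfile        # format it as swap space
sudo swapon /swapfile        # enable it immediately
# Optional: make it persist across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Verify with `swapon --show` or `free -h`.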
#13
Adding a large swap file will prevent Hashcat from crashing, yes. But it will be slooooooooowwwwwww.
#14
(10-07-2018, 08:43 PM)epixoip Wrote: It's not weird behavior, and it's not specific to Nvidia by any means. You need host buffers for the device buffers: when you allocate a buffer on the device, you have to allocate a buffer on the host as well. Otherwise, how else would you get the results back?

If you have 8GB of VRAM per device, you could conceivably allocate 100% of it (yes, a misinterpretation of the OpenCL spec leads most vendors to limit each allocation to 25% of VRAM, but you can in fact allocate multiple buffers). If you're allocating 8GB of buffers on each device, and you have eight devices, you could conceivably need to allocate 64GB of buffers on the host as well. If the host cannot allocate 64GB of RAM for hashcat, it will crash with CL_OUT_OF_HOST_MEMORY.

At an absolute minimum, available host memory should never be less than VRAM * 0.25, but the rule of thumb is RAM >= VRAM.

Keep in mind that password cracking is not cryptocurrency mining, and most mining rig setups suck for password cracking.
What would be the benefit of allocating multiple buffers?
#15
As epixoip mentioned, several OpenCL drivers impose a limit where each single memory allocation can't be more than 1/4 of the available VRAM. If you, in theory, allocate 4 times (multiple buffers), you have 4 * 1/4 = 1 and could therefore, in theory, use the full VRAM available. The problem is that there is extra cost (and a speed drop) in maintaining more than one separate buffer/allocation, so that is not really a good solution except for a few special cases like scrypt etc. (where we already have some workarounds in place).
The benefit of large GPU memory buffers is of course to load more digests, have more room for longer rules (with more rule functions), allow different TMTO settings for algos like scrypt, etc.
The problem often is that the GPU has e.g. 8GB of RAM, but you can only allocate 1/4 * 8 = 2GB of the 8GB (with one single allocation/buffer) because of that silly restriction (a misinterpretation of the spec).
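The arithmetic behind that restriction can be sketched in a few lines. All figures below are illustrative assumptions; the real per-allocation limit comes from the driver (reported via CL_DEVICE_MAX_MEM_ALLOC_SIZE in OpenCL):

```python
# Model the 1/4-of-VRAM-per-allocation restriction described above.

VRAM_GB = 8
MAX_SINGLE_ALLOC_GB = VRAM_GB / 4  # 2GB per buffer under the restriction

# With one buffer you reach only a quarter of VRAM...
print(f"single buffer: {MAX_SINGLE_ALLOC_GB:.0f}GB of {VRAM_GB}GB usable")

# ...with four buffers you can, in theory, cover all of it,
# at the cost of managing the extra separate allocations.
buffers = 4
print(f"{buffers} buffers: {buffers * MAX_SINGLE_ALLOC_GB:.0f}GB usable")

# And the host-RAM rule of thumb from epixoip's post: RAM >= total VRAM.
devices = 8
print(f"host RAM rule of thumb: >= {devices * VRAM_GB}GB for {devices} devices")
```

This also makes the 64GB host-RAM figure from the eight-GPU example above easy to see.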
#16
(10-17-2018, 08:59 AM)philsmd Wrote: As epixoip mentioned, several OpenCL drivers impose a limit where each single memory allocation can't be more than 1/4 of the available VRAM. If you, in theory, allocate 4 times (multiple buffers), you have 4 * 1/4 = 1 and could therefore, in theory, use the full VRAM available. The problem is that there is extra cost (and a speed drop) in maintaining more than one separate buffer/allocation, so that is not really a good solution except for a few special cases like scrypt etc. (where we already have some workarounds in place).
The benefit of large GPU memory buffers is of course to load more digests, have more room for longer rules (with more rule functions), allow different TMTO settings for algos like scrypt, etc.
The problem often is that the GPU has e.g. 8GB of RAM, but you can only allocate 1/4 * 8 = 2GB of the 8GB (with one single allocation/buffer) because of that silly restriction (a misinterpretation of the spec).
Thanks for explaining.