hashcat Forum

Full Version: What is the storage requirement for OS to run hashcat?
I'm looking to test a rig with 4-6 GPUs and I'm wondering if I can use a persistent Linux build on USB as the OS.

Does hashcat use a lot of I/O on the local OS?

Can I offload storage data such as dictionary lists to an iSCSI/AoE server? Would that be a bottleneck due to the slower speed compared to a local SSD?

I'm also reading about a distributed setup called Hashtopus, and I'm not clear about the storage for an agent either. Don't they share the same data from a SAN, or does each one have its own copy of the data to work on?

I'm sorry if these questions have already been answered. I tried the forum's search function with no success.

Best regards,
I've never actually tried running hashcat from a live USB - interesting.

hashcat is dependent on I/O for some kinds of attacks, but not others. If it is a fast hash and a straight dictionary, you can bottleneck on I/O. If it's a slow hash, this is less likely. The best way to measure the break-even point for your setup is to test it directly.
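One rough way to run that test, sketched below with assumed paths and hash mode (the wordlist location and `hashes.txt` are placeholders): compare the raw read throughput of the storage device against the speed hashcat actually reports during a fast-hash dictionary run.

```shell
# Assumed paths. First, raw sequential read speed of the wordlist's device:
dd if=/mnt/usb/wordlists/big.txt of=/dev/null bs=1M status=progress

# Then the cracking run itself. For a fast hash like MD5 (-m 0), if the H/s
# shown tracks the dd throughput instead of the GPU's benchmark speed,
# storage I/O is the cap:
hashcat -m 0 -a 0 hashes.txt /mnt/usb/wordlists/big.txt

# GPU-only baseline for comparison:
hashcat -b -m 0
```

For slow hashes (bcrypt, scrypt, etc.) the per-candidate cost is so high that even a USB stick can usually keep the GPUs fed.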

Distributed platforms like Hashtopus, Hashtopussy, Hashview, etc. vary. Some of them will distribute a wordlist for you - one copy to each node. Others leave this up to you.
Don't forget that writing to the potfile, rewriting the hashlist when using --remove, etc. can all become bottlenecks on a storage device with low throughput.
(02-11-2016, 07:26 AM)mamexp Wrote: [ -> ]

I successfully did this close to 4 years ago; it wasn't pretty, but it worked.

You will need to avoid --remove and put the potfile either on a ramdisk or on high-speed NFS/SAN. Loading wordlists will be your biggest problem, but you can run off a USB stick or even PXE boot and keep all the storage on the network or a ramdisk. Using rules as amplifiers, or running masks, becomes your friend in this scenario.
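A minimal sketch of that layout, with assumed paths (the NFS mount point, ramdisk size, and hash mode are placeholders, and the tmpfs mount needs root):

```shell
# Mount a small ramdisk so potfile writes never touch the USB stick:
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=256M tmpfs /mnt/ramdisk

# Read the wordlist from an NFS mount, amplify it with rules so the GPUs
# do more work per byte read, and point the potfile at the ramdisk.
# Note: no --remove, so the hashlist is never rewritten on slow storage.
hashcat -m 1000 -a 0 hashes.txt /mnt/nfs/wordlists/rockyou.txt \
        -r rules/best64.rule \
        --potfile-path /mnt/ramdisk/hashcat.potfile
```

One caveat with a ramdisk potfile: it vanishes on reboot, so copy it off to durable storage when the run finishes.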

10 Gbit networking or InfiniBand takes a lot of the sting out, though.

If you are running Hashtopus or Hashtopussy, it downloads all its wordlists and puts its potfile and zap files in the directory you download it to. If you were running that, I'd put your files, hashes, and zaps directories on extremely high-speed NFS, or just get a local caching hard drive. If you really want to run distributed in that setup, modify my skip and limit calculator, or roll your own based on --skip, --limit, and --keyspace, or set up a symlink so all the Hashtopus nodes already have the wordlists mounted via NFS into the files directory, to cut down on re-downloading the lists.
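A roll-your-own split along the lines evilmog mentions can be sketched in a few lines of shell. The keyspace value below is a made-up example (in practice you'd take it from `hashcat --keyspace` for your attack); each node then runs the same command with its own --skip/--limit pair:

```shell
# Hypothetical example: split a keyspace of 1,000,000 across 3 nodes.
keyspace=1000000
nodes=3
# Ceiling division so no candidates are dropped at the end:
chunk=$(( (keyspace + nodes - 1) / nodes ))

for i in $(seq 0 $(( nodes - 1 ))); do
    skip=$(( i * chunk ))
    # Clamp the last node's limit to whatever keyspace remains:
    remaining=$(( keyspace - skip ))
    limit=$(( remaining < chunk ? remaining : chunk ))
    echo "node $i: hashcat --skip $skip --limit $limit ..."
done
```

The three chunks cover the keyspace exactly (333334 + 333334 + 333332 = 1000000) with no overlap, which is all --skip/--limit distribution really requires.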
(11-27-2017, 04:40 PM)evilmog Wrote: [ -> ]

evilmog, that was just an epic post. Reminds me of the BBS era, pre-internet.