hashcat - Is there a limit on wordlist file size?
#1
So, I'm using hashcat-0.39.

I was wondering if there was a file size limit on the wordlist.

For example, I have a wordlist of 10GB. Is that too large?

I mean, it worked, but I was wondering: does it perhaps cut off part of the way through because of some limit, or not?
#2
No, there is no file size limit. Dictionaries are split into segments and only those chunks are loaded, not the whole dictionary at once.
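To illustrate the idea, here is a rough Python sketch of segment-wise loading. This is not hashcat's actual code, and the 32 MB segment size is made up; it only shows why the wordlist never has to fit in memory as a whole.

Code:
# Rough illustration (not hashcat's actual code) of reading a huge
# wordlist in fixed-size segments instead of loading it all at once.
SEGMENT_SIZE = 32 * 1024 * 1024  # hypothetical 32 MB segment

def read_segments(path, segment_size=SEGMENT_SIZE):
    with open(path, "rb") as f:
        leftover = b""
        while True:
            chunk = f.read(segment_size)
            if not chunk:
                break
            chunk = leftover + chunk
            # only complete lines go out; keep the partial last line for the next segment
            lines, _, leftover = chunk.rpartition(b"\n")
            if lines:
                for word in lines.split(b"\n"):
                    yield word
        if leftover:
            yield leftover

for candidate in read_segments("10gb_wordlist.txt"):
    pass  # hand each candidate to the cracking engine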
#3
(06-11-2012, 09:39 AM)atom Wrote: No, there is no file size limit. Dictionaries are split into segments and only those chunks are loaded, not the whole dictionary at once.

By the way, which segment size do you recommend for machines with an SSD and 8 GB of RAM?
#4
I just leave mine at the default. Experiment and see if your speed changes.

If you want to slightly improve performance, you could sort your lists into separate ones by word length.
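For example, a quick Python sketch of splitting one big list into per-length files (the file names here are just placeholders):

Code:
# Rough sketch: split one big wordlist into separate files by word length,
# so each run only feeds candidates of a single length.
out_files = {}
with open("wordlist.txt", "rb") as src:
    for line in src:
        word = line.rstrip(b"\r\n")
        if not word:
            continue  # skip blank lines
        n = len(word)
        if n not in out_files:
            out_files[n] = open("wordlist_len%02d.txt" % n, "wb")
        out_files[n].write(word + b"\n")

for f in out_files.values():
    f.close()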
#5
I am experiencing a large lag while oclHashcat is performing disk write operations with the default settings. I will try other configurations by trial and error, thanks.
#6
(06-11-2012, 02:04 PM)fizikalac Wrote: I am experiencing a large lag while oclHashcat is performing disk write operations with the default settings. I will try other configurations by trial and error, thanks.

Do you mean when you first start oclHashcat? If so, that might be the filtering atom does before it starts up. You can avoid that by using stdin, but atom made that filter for a reason!

If you mean something more technical than that, I'm sorry, I wouldn't know.

You can actually improve performance by using rules. That doesn't need such a fast supply of words from your hard drive, since more of the work is done on the GPU.

If that last sentence doesn't mean anything to you, let me know and I will try to explain it... well, I will try!
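To make the rules point concrete, here is a toy Python sketch (not hashcat's real rule engine, which runs the rules on the GPU) showing how a handful of rules fan one word read from disk out into many candidates:

Code:
# Toy illustration of why rules help: one word read from disk fans out into
# many candidates, so the GPU stays busy without the disk having to keep up.
# The comments name the hashcat-style rule each line mimics.
def apply_rules(word):
    yield word                       # :  (do nothing)
    yield word.lower()               # l  (lowercase)
    yield word.capitalize()          # c  (capitalize first letter, lower the rest)
    yield word[::-1]                 # r  (reverse)
    for d in "0123456789":
        yield word + d               # $0 .. $9 (append a digit)

# one disk read -> 14 candidates
print(list(apply_rules("Password")))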
#7
I have the same problem as this guy: http://erratasec.blogspot.com/2012/06/li...cking.html

Quote:Use --remove. Oh, I found the source of the problem: I was using the original file on the faster processor, which is twice as large as the one that was cut down by removing the "zeroed" hashes. Memory lookups on GPUs are slower with a larger amount of memory. Thus, shrinking the file makes a big difference in speed. I wonder if splitting the file into small chunks that fit better within the GPU cache might work better.

I get it, yes, but when you crack a large amount of hashes like the LinkedIn leak, you can experience such lag... Using more rules could help, yes, I will try that.
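As a rough sketch of the "split the hash file into smaller chunks" idea from the quote (the 100,000-lines-per-file figure is arbitrary, not a recommendation):

Code:
# Sketch: split a big hash list into smaller chunk files so each run works
# on a smaller lookup set; combined with --remove, the working set shrinks
# as hashes get cracked.
CHUNK = 100000  # arbitrary chunk size for illustration

with open("hashes.txt", "r") as src:
    part, count, out = 0, 0, None
    for line in src:
        if out is None or count == CHUNK:
            if out:
                out.close()
            out = open("hashes_part%03d.txt" % part, "w")
            part += 1
            count = 0
        out.write(line)
        count += 1
    if out:
        out.close()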
#8
(06-11-2012, 02:18 PM)fizikalac Wrote: I have the same problem as this guy: http://erratasec.blogspot.com/2012/06/li...cking.html

Quote:Use --remove. Oh, I found the source of the problem: I was using the original file on the faster processor, which is twice as large as the one that was cut down by removing the "zeroed" hashes. Memory lookups on GPUs are slower with a larger amount of memory. Thus, shrinking the file makes a big difference in speed. I wonder if splitting the file into small chunks that fit better within the GPU cache might work better.

I get it, yes, but when you crack a large amount of hashes like the LinkedIn leak, you can experience such lag... Using more rules could help, yes, I will try that.

Thank you for the link, you have answered your own question!

Are you working on the LinkedIn list? So am I, I'll PM you.
#9
The biggest I ever ran was 14 GB... a waste of space, but it worked, so hashcat is going to eat all the big stuff ;-)
#10
(06-11-2012, 03:45 PM)ati6990 Wrote: The biggest I ever ran was 14 GB

Makes my lists look tiny...