Posts: 1
Threads: 1
Joined: Jun 2012
So, I'm using hashcat-0.39.
I was wondering if there was a file size limit on the wordlist.
For example, I have a wordlist of 10GB. Is that too large?
I mean, it worked, but I was wondering: does it perhaps cut off part of the way through due to some limit, or not?
Posts: 5,185
Threads: 230
Joined: Apr 2010
No, there is no file size limit. Dictionaries are split into segments and only those chunks are loaded, not the whole dictionary at once.
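Just to illustrate the idea (a rough Python sketch only, not hashcat's actual code; the 32 MB segment size is an assumption picked for the example): the wordlist is read one segment at a time, so even a 10 GB file never has to fit in RAM.
Code:
SEGMENT_SIZE = 32 * 1024 * 1024  # assumed 32 MB segment, for illustration only

def iter_segments(path, segment_size=SEGMENT_SIZE):
    """Yield lists of candidate words, one segment at a time."""
    leftover = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(segment_size)
            if not chunk:
                break
            chunk = leftover + chunk
            lines = chunk.split(b"\n")
            leftover = lines.pop()  # keep the partial last line for the next segment
            yield [line.rstrip(b"\r") for line in lines if line]
    if leftover:
        yield [leftover.rstrip(b"\r")]

# Usage: a 10 GB wordlist is processed segment by segment.
# for words in iter_segments("wordlist.txt"):
#     process(words)  # hypothetical downstream step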
Posts: 21
Threads: 5
Joined: Jun 2012
(06-11-2012, 09:39 AM)atom Wrote: No, there is no file size limit. Dictionaries are split into segments and only those chunks are loaded, not the whole dictionary at once.
By the way, what segment size do you recommend for machines with an SSD and 8 GB of RAM?
Posts: 723
Threads: 85
Joined: Apr 2011
I just leave mine at the default. Experiment and see if your speed changes.
If you want to slightly improve performance/speed, you could sort your lists into separate ones by word length.
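If you want to try the split-by-length idea, here is a minimal Python sketch (the file names are just placeholders): it writes each word into a file named after its length.
Code:
def split_by_length(src="wordlist.txt", prefix="wordlist_len"):
    """Write every word from src into wordlist_lenN.txt, where N is its length."""
    handles = {}
    try:
        with open(src, "r", encoding="utf-8", errors="ignore") as f:
            for line in f:
                word = line.rstrip("\r\n")
                if not word:
                    continue
                n = len(word)
                if n not in handles:
                    handles[n] = open(f"{prefix}{n}.txt", "w", encoding="utf-8")
                handles[n].write(word + "\n")
    finally:
        for h in handles.values():
            h.close()

# split_by_length()  # produces wordlist_len6.txt, wordlist_len8.txt, ...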
Posts: 21
Threads: 5
Joined: Jun 2012
I am experiencing a large lag while oclHashcat is performing disk write operations with the default settings. I will try other configurations by trial and error, thanks.
Posts: 723
Threads: 85
Joined: Apr 2011
(06-11-2012, 02:04 PM)fizikalac Wrote: I am experiencing a large lag while oclHashcat is performing disk write operations with the default settings. I will try other configurations by trial and error, thanks.
Do you mean when you first start oclHashcat? If so, that might be the filtering atom does before it starts up. You can avoid that by using stdin, but atom added that filter for a reason!
If you mean something more technical than that, I'm sorry, I wouldn't know.
You can actually improve performance by using rules. Rules don't need such a fast supply from your hard drive, because more of the work is done on the GPU (see the sketch below).
If that last sentence doesn't mean anything to you, let me know and I will try to explain it... well, I will try!
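Roughly what I mean by that (a toy Python sketch, not hashcat's real rule engine; the comments show the roughly equivalent hashcat rule characters): each base word read from disk is expanded into several candidates on the GPU side, so the drive has to supply far fewer lines per second.
Code:
def toy_rules(word):
    """Expand one base word into several candidates (illustrative rules only)."""
    yield word               # ':'  pass the word through unchanged
    yield word.capitalize()  # 'c'  capitalize the first letter
    yield word + "1"         # '$1' append the digit 1
    yield word[::-1]         # 'r'  reverse the word

base_words = ["password", "linkedin"]
candidates = [c for w in base_words for c in toy_rules(w)]
print(len(candidates))  # 2 words read from disk -> 8 candidates tested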
Posts: 21
Threads: 5
Joined: Jun 2012
I have the same problem as this guy:
http://erratasec.blogspot.com/2012/06/li...cking.html
Quote: Use --remove. Oh, I found the source of the problem: I was using the original file on the faster processor, which is twice as large as the cut-down one that removed the "zeroed" hashes. Memory lookups on GPUs are slower with the larger amount of memory. Thus, shrinking the file makes a big difference in speed. I wonder if splitting the file into small chunks that fit better within the GPU cache might work better.
I get it, yes, but when you crack a large number of hashes like the LinkedIn leak, you can experience such lag... Using more rules could help, yes, I will try that.
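For what it's worth, the shrinking the blog describes can be sketched like this in Python (not the --remove implementation itself; the file names are assumptions): drop the hashes that are already cracked or "zeroed" so the list the GPU has to search stays small.
Code:
def shrink_hashlist(src="hashes.txt", cracked="cracked.txt", dst="hashes_left.txt"):
    """Write only the not-yet-cracked hashes from src to dst."""
    with open(cracked, "r") as f:
        done = {line.split(":", 1)[0].strip() for line in f if line.strip()}
    with open(src, "r") as fin, open(dst, "w") as fout:
        for line in fin:
            h = line.strip()
            if h and h not in done:
                fout.write(h + "\n")

# shrink_hashlist()  # a smaller hash file means faster lookups on the GPU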
Posts: 723
Threads: 85
Joined: Apr 2011
(06-11-2012, 02:18 PM)fizikalac Wrote: I have the same problem as this guy: http://erratasec.blogspot.com/2012/06/li...cking.html
Quote: Use --remove. Oh, I found the source of the problem: I was using the original file on the faster processor, which is twice as large as the cut-down one that removed the "zeroed" hashes. Memory lookups on GPUs are slower with the larger amount of memory. Thus, shrinking the file makes a big difference in speed. I wonder if splitting the file into small chunks that fit better within the GPU cache might work better.
I get it, yes, but when you crack a large number of hashes like the LinkedIn leak, you can experience such lag... Using more rules could help, yes, I will try that.
Thank you for the link, you have answered your own question!
Are you working on the LinkedIn list? So am I, I'll PM you.
Posts: 117
Threads: 6
Joined: Aug 2011
The biggest I ever ran was 14 GB... a waste of space, but it worked, so hashcat is gonna eat all the big stuff ;-)
Posts: 723
Threads: 85
Joined: Apr 2011
(06-11-2012, 03:45 PM)ati6990 Wrote: The biggest I ever ran was 14 GB
Makes my lists look tiny
...