Posts: 127
	Threads: 16
	Joined: Sep 2011
		07-03-2012, 02:05 AM 
(This post was last modified: 07-03-2012, 02:14 AM by chort.)
On -plus-0.09 the examples run fine, but when I try to load a 103 MB SHA1 hash file (2.6M hashes) I get clCreateBuffer() -61, no matter whether I use rules, -n, --gpu-loops, etc.
I can run the exact same arguments on -plus-0.08 and it works fine.
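For what it's worth, -61 in the OpenCL headers is CL_INVALID_BUFFER_SIZE, which clCreateBuffer() returns when the requested buffer is zero bytes or exceeds the device's CL_DEVICE_MAX_MEM_ALLOC_SIZE. A tiny lookup sketch for decoding such failures (codes taken from CL/cl.h; the decode_cl_error helper is just for illustration):

```python
# A few OpenCL error codes from CL/cl.h, useful when clCreateBuffer() fails.
CL_ERRORS = {
    -4: "CL_MEM_OBJECT_ALLOCATION_FAILURE",
    -5: "CL_OUT_OF_RESOURCES",
    -6: "CL_OUT_OF_HOST_MEMORY",
    -61: "CL_INVALID_BUFFER_SIZE",
}

def decode_cl_error(code):
    """Map a negative OpenCL status code to its symbolic name."""
    return CL_ERRORS.get(code, f"unknown error {code}")

print(decode_cl_error(-61))  # prints CL_INVALID_BUFFER_SIZE
```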
	Posts: 5,232
	Threads: 233
	Joined: Apr 2010
		Yeah, oclHashcat-plus 0.09 requires more RAM because of SHA512.
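Illustrative arithmetic only, not oclHashcat's actual allocation scheme: if per-hash buffers are sized for the largest supported digest (SHA512 produces 64-byte digests versus SHA1's 20 bytes), the same 2.6M-hash list occupies roughly three times more memory:

```python
# Back-of-envelope sizing, assuming buffers are sized for the largest
# supported digest rather than the hash type actually loaded.
SHA1_DIGEST = 20    # bytes per SHA1 digest
SHA512_DIGEST = 64  # bytes per SHA512 digest
num_hashes = 2_600_000  # the 2.6M-hash list from the report

sha1_sized = num_hashes * SHA1_DIGEST
sha512_sized = num_hashes * SHA512_DIGEST
print(f"sized for SHA1:   {sha1_sized / 2**20:.1f} MiB")
print(f"sized for SHA512: {sha512_sized / 2**20:.1f} MiB")
```

That difference alone would not fill a GPU, but combined with per-hash bookkeeping structures it can push a large list over the device's maximum single-allocation size.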
	Posts: 723
	Threads: 85
	Joined: Apr 2011
Ahh, I have experienced this!
So that's why! Ha ha! Can I ask a really n00b question please, someone has to!!
Do you mean RAM on the GPU, or normal computer RAM?
Thanks.
	Posts: 47
	Threads: 2
	Joined: Dec 2010
07-03-2012, 01:28 PM

GPU memory.
	Posts: 723
	Threads: 85
	Joined: Apr 2011
(07-03-2012, 01:28 PM)gat3way Wrote: GPU memory.

Thank you!
I am glad no one else noticed I asked that n00b question.
	Posts: 127
	Threads: 16
	Joined: Sep 2011
Hmm, so basically the only way around this is to pre-split hash files, right? There's no way oclHashcat can parse the -m argument prior to allocating memory (it has to do with the way the kernels are built?).
In that case, any thoughts on adding auto-splitting of hash files? One way would be to create a temp dir and split the hash file into it. Maybe it would be possible to read the file in chunks and handle them in memory?
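The pre-split workaround is easy to script yourself in the meantime. A sketch (not part of oclHashcat; the split_hashfile helper and chunk filenames are made up here) that breaks a large hash list into smaller files in a temp dir, one oclHashcat run per chunk:

```python
import os
import tempfile

def split_hashfile(path, hashes_per_chunk=500_000, out_dir=None):
    """Split a large hash list into smaller files, one hash per line.

    Returns the list of chunk file paths; out_dir defaults to a fresh
    temporary directory.
    """
    out_dir = out_dir or tempfile.mkdtemp(prefix="hashchunks_")
    chunk_paths = []
    with open(path) as src:
        chunk, idx = [], 0
        for line in src:
            chunk.append(line)
            if len(chunk) == hashes_per_chunk:
                chunk_paths.append(_write_chunk(out_dir, idx, chunk))
                chunk, idx = [], idx + 1
        if chunk:  # flush the final partial chunk
            chunk_paths.append(_write_chunk(out_dir, idx, chunk))
    return chunk_paths

def _write_chunk(out_dir, idx, lines):
    p = os.path.join(out_dir, f"chunk_{idx:04d}.txt")
    with open(p, "w") as dst:
        dst.writelines(lines)
    return p
```

You would then loop over the returned paths, invoking oclHashcat once per chunk with the same attack arguments.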
	Posts: 5,232
	Threads: 233
	Joined: Apr 2010
It would be easier to build hash-type-specific memory segments than to load chunks, but that doesn't have very high priority on my todo list yet.