Migrating Brain Server Data
10-11-2019, 05:27 PM
How do I migrate the saved brain server data to newer versions as they come out, or is it even necessary?
10-12-2019, 08:44 AM
Fair question. So far, I don't think that any interface-breaking changes have happened, so no migration steps have been necessary.
If/when it becomes necessary, the migration steps will vary based on the nature of the changes (not yet known). I'm honestly not sure whether there is any version/format information in the brain state data to make such transitions easy in the future.
~
It will probably perform some checks based on the version number (in the future, if truly incompatible data is found, though the goal is of course to keep it backwards compatible). See src/brain.c and this:
https://github.com/hashcat/hashcat/blob/....h#L51-L52
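For illustration only, here is a minimal sketch of what such a version gate could look like, assuming a pair of version constants along the lines of those in the linked header. This is a made-up example, not hashcat's actual code:
Code:
// Hypothetical sketch of a version gate for saved brain data -- not hashcat's
// actual code. Assumes constants like those defined in the linked brain.h.
#include <stdbool.h>
#include <stdio.h>

#define BRAIN_LINK_VERSION_CUR 5   // version this build writes (assumed value)
#define BRAIN_LINK_VERSION_MIN 5   // oldest version this build still accepts (assumed value)

static bool brain_dump_version_ok (const int dump_version)
{
  // accept anything between the oldest supported and the current version
  return (dump_version >= BRAIN_LINK_VERSION_MIN)
      && (dump_version <= BRAIN_LINK_VERSION_CUR);
}

int main (void)
{
  const int dump_version = 5; // would come from the loaded dump's header

  if (brain_dump_version_ok (dump_version) == false)
  {
    fprintf (stderr, "incompatible brain dump version %d\n", dump_version);
    return 1;
  }

  printf ("brain dump version %d accepted\n", dump_version);

  return 0;
}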
10-15-2019, 03:46 PM
Thanks for the replies. Where is the brain data actually stored?
10-15-2019, 04:31 PM
in the current directory from which you start the brain server:
1. brain.$session.ldmp for the hashed password dumps (hash database)
2. brain.$session.admp for the attack position dumps (attack database)

see
Code:
hashcat --help | grep brain
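Just to make the naming concrete, here is a small hypothetical sketch of how those two filenames could be built from the session name. This is illustrative only, not the code from src/brain.c:
Code:
// Hypothetical illustration of the dump file naming scheme described above
// (brain.$session.ldmp / brain.$session.admp) -- not hashcat's actual code.
#include <stdio.h>

int main (void)
{
  const char *session = "hashcat"; // default session name unless --session is set

  char ldmp_file[256];
  char admp_file[256];

  // hash database dump (hashed passwords)
  snprintf (ldmp_file, sizeof (ldmp_file), "brain.%s.ldmp", session);

  // attack database dump (attack positions)
  snprintf (admp_file, sizeof (admp_file), "brain.%s.admp", session);

  printf ("%s\n%s\n", ldmp_file, admp_file);

  return 0;
}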
10-15-2019, 04:57 PM
Please correct me if I am wrong. I don't have any .ldmp files in my directory, only .admp. I assume this is because I didn't specify any client features, therefore it only stored attack positions.
One of my main goals for using brain was to eliminate duplicate rule calculations. Since I didn't specify client feature 3, I also assume it wasn't eliminating duplicates. Correct?
10-15-2019, 05:14 PM
the files are only stored on the brain server (not the clients).
there is a default --brain-client-features, namely --brain-client-features 2. That means that if you do not specify anything, it defaults to 2.

Yeah, in some cases it makes more sense to use --brain-client-features 3, but there is a disadvantage to this: the server will be "slower" and needs to process and store more data. It all depends on a couple of factors, for instance which hash type you use.

btw: there could be some exception to the cwd (current working directory) path, i.e. if you "install" hashcat with "make install" or with some Linux distribution hashcat packages, the default path will probably be $HOME/.hashcat/ or similar.
10-15-2019, 05:22 PM
In my case the client and the server are the same PC. I am cracking 16800 hashes. I am getting around 400K h/s, so I can run through a dictionary file pretty quickly, but when I add a rule to the dictionary the run takes much longer, and I was trying to eliminate duplicate calculations to reduce that time.
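For a rough sense of scale, here is a back-of-the-envelope estimate of why rules blow the runtime up. The dictionary and rule counts below are made-up example numbers; only the ~400K h/s speed comes from above:
Code:
// Back-of-the-envelope runtime estimate for dictionary x rules at a given
// speed. Word and rule counts are made-up example values; only the
// ~400 kH/s speed is taken from the post above.
#include <stdio.h>

int main (void)
{
  const double words = 10000000.0;  // example: 10M-word dictionary (assumption)
  const double rules = 1000.0;      // example: 1,000 rules (assumption)
  const double speed = 400000.0;    // ~400 kH/s as reported for -m 16800

  const double candidates = words * rules;
  const double seconds    = candidates / speed;

  printf ("candidates: %.0f\n", candidates);
  printf ("runtime   : %.1f hours\n", seconds / 3600.0);

  return 0;
}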
Am I wasting my time?