Migrating Brain Server Data
#1
Is it necessary to migrate the saved brain server data to newer versions as they come out, and if so, how do I do it?
#2
Fair question. So far, I don't think that any interface-breaking changes have happened, so no migration steps have been necessary.

If/when it becomes necessary, the migration steps will vary based on the nature of the changes (not yet known).

I'm honestly not sure whether there is any version/format information in the brain state data to make such transitions easy in the future.
#3
It will probably perform some checks based on the version number (in the future, if truly incompatible data is found; but of course the goal is to keep it backwards compatible). See src/brain.c and this:
https://github.com/hashcat/hashcat/blob/....h#L51-L52
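
For reference, a quick way to locate those version constants in a hashcat source checkout (the exact paths and constant names here are an assumption based on the link above):

Code:
# grep for version-related constants in the brain sources
# (paths assumed from the hashcat repository layout)
grep -n "VERSION" src/brain.c include/brain.h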
#4
Thanks for the replies. Where is the brain data actually stored?
#5
In the current directory from which you start the brain server:
1. brain.$session.ldmp for the hashed password dumps (hash database)
2. brain.$session.admp for the attack position dumps (attack database)

see
Code:
hashcat --help | grep brain
to find out all available options for the session and client features (the difference between --brain-client-features 1, 2, and 3)
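
For example, a minimal sketch of starting a server and seeing where the dumps land (directory and password are placeholders):

Code:
# start the brain server in a dedicated directory; dump files are written there
cd ~/brain-server
hashcat --brain-server --brain-password mysecret

# once a client has connected and done some work, from another shell:
ls -l brain.*.ldmp brain.*.admp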
#6
Please correct me if I am wrong: I don't have any .ldmp files in my directory, only .admp. I assume this is because I didn't specify any client features, so it only stored attack positions.

One of my main goals for using the brain was to eliminate duplicate rule calculations. Since I didn't specify --brain-client-features 3, I also assume it wasn't eliminating duplicates. Correct?
#7
The files are only stored on the brain server (not on the clients).

There is a default for --brain-client-features, namely --brain-client-features 2. That means that if you do not specify any, it defaults to 2.

Yeah, in some cases it makes more sense to use --brain-client-features 3, but there is a disadvantage to this... the server will be "slower" and needs to process and store more data.
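
A rough sketch of the difference (hash mode, host, password, and filenames are placeholders):

Code:
# default: send attack positions only (equivalent to --brain-client-features 2)
hashcat -m 0 -a 0 --brain-client --brain-host 127.0.0.1 --brain-password mysecret hashes.txt wordlist.txt

# send hashed candidates AND attack positions (1 + 2 = 3); heavier load on the server
hashcat -m 0 -a 0 --brain-client --brain-client-features 3 --brain-host 127.0.0.1 --brain-password mysecret hashes.txt wordlist.txt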

It all depends on a couple of factors... for instance, which hash type do you use?

BTW: there could be an exception to the cwd (current working directory) path... i.e. if you "install" hashcat with "make install" or with some Linux distribution's hashcat packages, the default path will probably be in $HOME/.hashcat/ or similar.
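
If in doubt, a quick check of both candidate locations (paths are assumptions):

Code:
# the directory the server was started from, and the install-style location
ls -l ./brain.*.admp ./brain.*.ldmp 2>/dev/null
ls -l "$HOME/.hashcat/"brain.* 2>/dev/null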
#8
In my case the client and the server are the same PC. I am cracking 16800 (WPA-PMKID-PBKDF2) hashes at around 400K H/s, so I can run through a dictionary file pretty quickly, but when I add a rule to the dictionary I am trying to eliminate duplicate calculations to reduce the time.
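
For reference, roughly what I think I would need to run based on the above (filenames and rule file are placeholders):

Code:
# dictionary + rules against 16800, with the brain deduplicating candidates
hashcat -m 16800 -a 0 --brain-client --brain-client-features 3 --brain-host 127.0.0.1 --brain-password mysecret capture.16800 wordlist.txt -r rules/best64.rule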

Am I wasting my time?
#9
Is there a way to clean up brain sessions for which the passwords have already been found? Or does the hashcat brain server automatically skip those?