Brain server can't write above 4GB - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Support (https://hashcat.net/forum/forum-3.html)
+--- Forum: hashcat (https://hashcat.net/forum/forum-45.html)
+--- Thread: Brain server can't write above 4GB (/thread-9269.html)
Brain server can't write above 4GB - illyria - 05-29-2020

OS: Win10
Hashcat: found on beta 1774, tested and verified on beta 1807

Brain server:
Code: hc --brain-server

Client:
Code: hc -m 0 -a 0 -O -z --brain-client-features=1 --brain-password=0000000000000000 0123456789abcdef0123456789abcdef dict.dic -r rules\dive.rule

Quote: 1590552137.126935 | 67.14s | 0 | Wrote 4274278992 bytes from session 0x54d586c0 in 3950.13 ms

At this point nothing more happens and no errors are shown, but the brain.54d586c0.ldmp file is now only 2.5 MB. It seems that the first time the server tries to write the ldmp file with a write size above 4 GB, it writes only the part above 4 GB and then gets stuck. At that point the brain server can only be killed from Task Manager.

RE: Brain server can't write above 4GB - philsmd - 05-29-2020

Are you saying that the release version is not affected, or do you mention the beta only to emphasize that the problem isn't fixed yet? What are your system specs? Enough RAM / disk space etc.?

RE: Brain server can't write above 4GB - illyria - 05-29-2020

I haven't tested on anything other than 1774 and 1807, but I will run the test on the release version asap.

Specs: Ryzen 3700X, 2080 Ti, 64 GB RAM, 800 GB free disk space, 64-bit Windows, NTFS file system. Towards the end of the test the hashcat server uses about 11 GB of RAM, and my system still has 40 GB to spare.

RE: Brain server can't write above 4GB - illyria - 05-30-2020

Finally got done testing on 5.1.0, and exactly the same thing happens: only the part above 4 GB is written to the file.

RE: Brain server can't write above 4GB - philsmd - 05-30-2020

I guess a developer, or at least somebody able to debug the src/brain.c code, would need to try to troubleshoot, reproduce, and fix this problem. Are you sure the disk isn't busy writing? Why should it be stuck? Is it not reacting/writing anymore after that last write operation?
Maybe somebody could also try to debug it, or use some tools to see what the hashcat server is trying to do and whether the disk is busy (for instance with Process Explorer on Windows: https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer).

It would also be great if somebody double-checked on Linux to see whether this is a Windows-only problem. Could you try to test that?

BTW: I know that you are just testing something and trying to prove that something is not working perfectly fine... but I just want to emphasize for the random readers here that the brain feature was never designed to be used with fast hashes that can generate gigabytes of data very quickly; it is much better suited for hash types like bcrypt/scrypt (I know you are aware of this, but other users might not understand this immediately).

RE: Brain server can't write above 4GB - illyria - 05-30-2020

(05-30-2020, 05:28 PM)philsmd Wrote: I guess a developer or at least somebody able to debug the src/brain.c code would need to try to troubleshoot, reproduce and try to fix this problem.

I had it sitting for almost 12 hours before I noticed it wasn't running. I could also see that the file size of the ldmp dropped from almost 4 GB to a few MBs and stayed at that size.

(05-30-2020, 05:28 PM)philsmd Wrote: Why should it be stuck ? Is it not reacting/writing anymore after that last write operation ?

It's doing something: it is taking up one full core on my CPU, but there is no disk activity. Ctrl-C does print "Brain server stopping", but it never exits, and CPU activity stays at 100% on one core. I guess you could say it never completes the write command, as the on-screen log never makes note of it.

(05-30-2020, 05:28 PM)philsmd Wrote: [...]

I have a Linux box. I will see if I can figure out how to get the brain server running on that. If someone else has a working setup and the time/inclination to run a test, it would be appreciated.
(05-30-2020, 05:28 PM)philsmd Wrote: BTW: I know that you are just testing something and trying to prove that something is not working perfectly fine... but I just want to emphasize for the random readers here that the brain feature was never designed to be used with fast hashes that can generate gigabytes of data very quickly; it is much better suited for hash types like bcrypt/scrypt (I know you are aware of this, but other users might not understand this immediately)

I see what you are saying, and I will note that I use it only for slower hashes and/or large lists of salted hashes. The -m 0 was just to speed things up while trying to reach a 4 GB database file. Which reminds me: 5.1.0 took about 6 hours to reach that goal, while the beta versions got there in less than 20 minutes, so future testing on my part will be done on the beta versions.

RE: Brain server can't write above 4GB - illyria - 05-30-2020

Running the brain server on Ubuntu 18.04 with hashcat beta 1807, there is no failure when the ldmp file passes the 4 GB mark, so it appears to be a Windows-only issue.

Quote: 1590868857.076713 | 0.57s | 2 | L | 2490.98 ms | Long: 603860975, Inc: 2228224, New: 1685930

RE: Brain server can't write above 4GB - philsmd - 05-31-2020

Very good test. Somehow I had a feeling this could be the case, because these things were stress-tested already (but, as we can now guess, mainly on Linux systems). The root cause could be a number of things, including MinGW fwrite() / Windows-specific problems or file system limitations, but as far as I understand, NTFS does not have any such limit (at least not one that small; its maximum file size is in the terabytes).

This is already a good approach to narrowing down the causes of this issue (which now seems Windows-specific), but I guess this needs to be debugged with some minor source code changes in src/brain.c to see where hashcat gets stuck and why it stops writing and accepting client requests.
RE: Brain server can't write above 4GB - philsmd - 05-31-2020

I've now tested on a Win10 machine with 8 GB RAM (I know it probably should be more for a good test), but the results are similar to yours. My changes:

Code: diff --git a/src/brain.c b/src/brain.c

i.e. I print the Unix timestamp when the backup starts and a "done" when the fwrite () is done. My Win10 results (stuck even after waiting more than 15 minutes... all the previous writes took just a couple of seconds, though increasing over time):

This was before the 4 GB were reached; the "fwrite 1 started xxx" just mentions the time when fwrite () was started (it's not a file size or similar, just a timestamp). And this happens afterwards (stuck):

Not sure what we can do now... but maybe test whether something is wrong with fwrite () / hc_fwrite (), or whether there are some limitations. Maybe a standalone program that just tries to write THAT much data would be enough to troubleshoot it. It could also be that MinGW adds some further restriction. I don't think there is any obvious hashcat bug/problem here... but maybe some OS/Windows-specific cause of this strange problem.

RE: Brain server can't write above 4GB - illyria - 05-31-2020

Did a little digging, and I may be way off base, seeing as I don't really code much and have never done anything in C/C++. But this thread on MSDN suggests that on Windows fwrite cannot write out more than 4 GB in a single call - MSDN - and if this is attempted, it exhibits exactly the behaviour I (we) are seeing here.

Quote: So no matter what the size is if it's above 4GB (i've tried to write 10GB originally) it writes out whatever is above closest multiple of 4GB (in the case of 10GB - it writes what is above 8GB hence 2GB) then it gets count as something in multiple of 4GB units (say 8GB) and does