I've now tested on a Windows 10 machine with 8 GB RAM (I know it should probably be more for a proper test), and the results are similar to yours. My changes:
Code:
diff --git a/src/brain.c b/src/brain.c
index b39c4f16..88593cf2 100644
--- a/src/brain.c
+++ b/src/brain.c
@@ -1644,7 +1644,9 @@ bool brain_server_write_hash_dump (brain_server_db_hash_t *brain_server_db_hash,
return false;
}
+ printf ("\n\n\nfwrite 1 started %lu\n", time (NULL));
const size_t nwrite = hc_fwrite (brain_server_db_hash->long_buf, sizeof (brain_server_hash_long_t), brain_server_db_hash->long_cnt, &fp);
+ printf ("fwrite 1 done\n\n\n");
if (nwrite != (size_t) brain_server_db_hash->long_cnt)
{
@@ -1843,7 +1845,9 @@ bool brain_server_write_attack_dump (brain_server_db_attack_t *brain_server_db_a
// storing should not include reserved attacks only finished
+ printf ("\n\n\nfwrite 2 started %lu\n", time (NULL));
const size_t nwrite = hc_fwrite (brain_server_db_attack->long_buf, sizeof (brain_server_attack_long_t), brain_server_db_attack->long_cnt, &fp);
+ printf ("fwrite 2 done\n\n\n");
if (nwrite != (size_t) brain_server_db_attack->long_cnt)
{
I.e. I print the Unix timestamp when the backup starts and a "done" message when the fwrite () has finished.
My Windows 10 results (stuck even after waiting more than 15 minutes... all the previous writes only took a couple of seconds, though of course the times increase as the file grows):
![[Image: 1.jpg]](https://i.ibb.co/mzXwgCL/1.jpg)
This was before the 4 GB were reached; the "fwrite 1 started xxx" just shows the time at which fwrite () was started (it's not a file size or anything similar, just a timestamp).
And this is what happens afterwards (stuck):
![[Image: 2-1.jpg]](https://i.ibb.co/BTHNMj9/2-1.jpg)
![[Image: 2-2.jpg]](https://i.ibb.co/CVDPtJ7/2-2.jpg)
![[Image: 2-3.jpg]](https://i.ibb.co/BZcRKd5/2-3.jpg)
![[Image: 2-4.jpg]](https://i.ibb.co/PzCjF1r/2-4.jpg)
![[Image: 2-5.jpg]](https://i.ibb.co/TK9YT8m/2-5.jpg)
I'm not sure what we can do now... but maybe we should test whether something is wrong with fwrite () / hc_fwrite () or whether there are some limitations. A standalone test program that just tries to write THAT much data in a single call might be enough to troubleshoot it (see the sketch below). It could also be that MinGW adds some further restriction.
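For such a standalone test, a minimal sketch could look like the following (the file name bigwrite.bin, the buffer size and the 0xaa fill pattern are just placeholders I made up; it assumes a 64-bit build so that size_t can hold more than 4 GiB):
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main (void)
{
  // a bit more than 4 GiB, to cross the suspected boundary

  const size_t total = ((size_t) 4 << 30) + ((size_t) 64 << 20);

  unsigned char *buf = (unsigned char *) malloc (total);

  if (buf == NULL)
  {
    fprintf (stderr, "malloc of %llu bytes failed\n", (unsigned long long) total);

    return 1;
  }

  memset (buf, 0xaa, total);

  FILE *fp = fopen ("bigwrite.bin", "wb"); // placeholder file name

  if (fp == NULL)
  {
    fprintf (stderr, "fopen failed\n");

    return 1;
  }

  // same kind of before/after timestamps as in the brain.c patch above

  printf ("fwrite started %llu\n", (unsigned long long) time (NULL));

  const size_t nwrite = fwrite (buf, 1, total, fp);

  printf ("fwrite done    %llu (wrote %llu of %llu bytes)\n",
          (unsigned long long) time (NULL),
          (unsigned long long) nwrite,
          (unsigned long long) total);

  fclose (fp);

  free (buf);

  return 0;
}
If this also hangs on Windows when compiled with MinGW, it would point to the runtime/OS rather than to hashcat; if it completes, the problem is more likely somewhere in hc_fwrite () or in how the brain server calls it.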
I don't think there is any obvious hashcat bug/problem here... more likely some OS/Windows-specific cause of this strange behavior.