Brain server can't write above 4GB
#11
Yeah this seems to be exactly the case. I also did some further checks.

What is weird is that this limit is neither documented nor mentioned much anywhere.

I also verified that the type of the passed values (size_t) can't really be the culprit: the 4 GB limit is far below what size_t can represent, and since size_t is unsigned it even covers twice the range of a signed type of the same width.

I've come up with this diff that works for me perfectly fine:
Code:
diff --git a/src/filehandling.c b/src/filehandling.c
index 22a9e2aa..86a448f1 100644
--- a/src/filehandling.c
+++ b/src/filehandling.c
@@ -72,7 +72,7 @@ bool hc_fopen (HCFILE *fp, const char *path, char *mode)
   {
     lseek (fd_tmp, 0, SEEK_SET);

-    if (read (fd_tmp, check, sizeof(check)) > 0)
+    if (read (fd_tmp, check, sizeof (check)) > 0)
     {
       if (check[0] == 0x1f && check[1] == 0x8b && check[2] == 0x08 && check[3] == 0x08) fp->is_gzip = true;
       if (check[0] == 0x50 && check[1] == 0x4b && check[2] == 0x03 && check[3] == 0x04) fp->is_zip = true;
@@ -131,7 +131,47 @@ size_t hc_fread (void *ptr, size_t size, size_t nmemb, HCFILE *fp)
   }
   else
   {
+    #if defined (_WIN)
+
+    // 4 GB fread () limit for windows systems ?
+    // see: https://social.msdn.microsoft.com/Forums/vstudio/en-US/7c913001-227e-439b-bf07-54369ba07994/fwrite-issues-with-large-data-write
+
+    #define GIGABYTE (1024u * 1024u * 1024u)
+
+    if (((size * nmemb) / GIGABYTE) < 4)
+    {
+      n = fread (ptr, size, nmemb, fp->pfp);
+    }
+    else
+    {
+      if ((size / GIGABYTE) > 3) return -1;
+
+      size_t elements_max  = (3u * GIGABYTE) / size;
+      size_t elements_left = nmemb;
+
+      size_t off = 0;
+
+      n = 0;
+
+      while (elements_left > 0)
+      {
+        size_t elements_cur = elements_max;
+
+        if (elements_left < elements_max) elements_cur = elements_left;
+
+        size_t ret = fread (ptr + off, size, elements_cur, fp->pfp);
+
+        if (ret != elements_cur) return -1;
+
+        n   += ret;
+        off += ret * size;
+
+        elements_left -= ret;
+      }
+    }
+    #else
     n = fread (ptr, size, nmemb, fp->pfp);
+    #endif
   }

   return n;
@@ -152,7 +192,47 @@ size_t hc_fwrite (const void *ptr, size_t size, size_t nmemb, HCFILE *fp)
   }
   else
   {
+    #if defined (_WIN)
+
+    // 4 GB fwrite () limit for windows systems ?
+    // see: https://social.msdn.microsoft.com/Forums/vstudio/en-US/7c913001-227e-439b-bf07-54369ba07994/fwrite-issues-with-large-data-write
+
+    #define GIGABYTE (1024u * 1024u * 1024u)
+
+    if (((size * nmemb) / GIGABYTE) < 4)
+    {
+      n = fwrite (ptr, size, nmemb, fp->pfp);
+    }
+    else
+    {
+      if ((size / GIGABYTE) > 3) return -1;
+
+      size_t elements_max  = (3u * GIGABYTE) / size;
+      size_t elements_left = nmemb;
+
+      size_t off = 0;
+
+      n = 0;
+
+      while (elements_left > 0)
+      {
+        size_t elements_cur = elements_max;
+
+        if (elements_left < elements_max) elements_cur = elements_left;
+
+        size_t ret = fwrite (ptr + off, size, elements_cur, fp->pfp);
+
+        if (ret != elements_cur) return -1;
+
+        n   += ret;
+        off += ret * size;
+
+        elements_left -= ret;
+      }
+    }
+    #else
     n = fwrite (ptr, size, nmemb, fp->pfp);
+    #endif
   }

   if (n != nmemb) return -1;

The chunked fread ()/fwrite () loop is only compiled in on Windows builds (the .exe executable), and it is only taken when the total size of the request is 4 GB or more; smaller requests, and all non-Windows systems, go through the normal single-call path.

I also tested restarting the server and re-running the same attack: the backups are loaded correctly and the server detects all previous work on my Windows 10 test system.

Do you have any means to test this diff? You would compile the modified source code following the instructions on GitHub (BUILD_MSYS2.md) after running git apply this_patch_file.diff
#12
(05-31-2020, 09:15 PM)philsmd Wrote: Do you have any means to test this diff? You would compile the modified source code following the instructions on GitHub (BUILD_MSYS2.md) after running git apply this_patch_file.diff

I might be out of my depth, but I will give it a try tomorrow morning.
#13
It's actually not a problem if you can't manage to test this patch... it would just make sense to test it further before we include it in the beta versions.

I will probably do some more tests, and if everything seems correct I will create a pull request (PR) on GitHub (https://github.com/hashcat/hashcat). After the changes are merged, a beta will be available at https://hashcat.net/beta/ (not merged yet, of course).



Update: I've opened a pull request on GitHub: https://github.com/hashcat/hashcat/pull/2427. Let's see if it gets accepted and merged (still no beta available).



Update 2: a new beta that includes the fix is available at https://hashcat.net/beta/. Please test. Thanks!
#14
Read and write were successful on a 7+ GB ldmp.
Created a new database from scratch and wrote 4+ GB to it.

Test from my POV was successful.

Thanks a lot for the quick response and turnaround on the fix.