Question about --force option and CUDA Toolkit SDK
Hi all on the community!
First, I want to say thanks to the hashcat team for this great tool!

This is my first post; I usually try to solve problems on my own, but at this point I think I need a bit of help!

I haven't been a hashcat user for long, and I have been reading the forum about the topics I'm interested in.
I have successfully developed a kernel and used it against hashes I generated myself, and it works well (thanks to the new developer guide, which pointed me in the right direction after some days of tests and frustration).

With hashcat 5.1.0 I have been using the --force switch because hashcat shows me the following error:

* Device #X: Outdated or broken NVIDIA driver '442.23' detected!

I have read that using the --force option is not good, but I don't understand exactly why (is it because we are forcing hashcat to ignore some important warnings?). I have tested my kernel against some hashes (always using hashcat 5.1.0 and the --force switch) and I know it works properly, so is there any issue with using the --force option?

Just to clarify, the mentioned error happens with all modes, not only with the kernel I created.

As you can see, my NVIDIA driver version is 442.23.

I have also tried my kernel with the new hashcat 6.0.0 and it works fine, with no need to use the --force option.

As my system runs on Windows 10 and my NVIDIA driver is up to date (I have the latest one released by the manufacturer), I don't want to risk installing a new driver and breaking it.

So, my first question is: if I continue using hashcat 5.1.0, do I have to fix the driver issue, or can I keep running it
with the --force switch without any harm?

I have read the wrong-driver guide, but I'm not sure what to do because, as I said before, I have the latest driver and I'm not sure whether installing another one would fix the problem or cause a worse one...

My second question is about the CUDA Toolkit SDK.

I have installed several CUDA Toolkit versions to test with (versions 10.0, 10.2, and the latest 11.0 RC), but I can't make hashcat work with the SDK (I have always installed only the SDK and tools, never the driver).

I have tried hashcat 5.1.0 and also the latest 6.0.0 sources, but with no luck; I always receive the message:

* Device #X: CUDA SDK Toolkit installation NOT detected.
                   CUDA SDK Toolkit installation required for proper device support and utilization
                   Falling back to OpenCL Runtime

I would like to test my kernel using CUDA instead of OpenCL.

In my debugging (on hashcat 6.0.0) I have seen that the hc_dlopen() function is not able to load nvrtc.dll in nvrtc_init(), nor nvcuda.dll in the cuda_init() function.
To be precise, the nvrtc.dll in my case is nvrtc64_%d%d_0.dll (the name depends on the CUDA version; for instance, for version 10.0 the name is nvrtc64_100_0.dll).
To be sure, I have tried pointing directly at the DLLs on my system, but hc_dlopen() is not able to load any of them.

I compiled the sources using Cygwin64 and ran it under VS Code to debug it.

On the other hand, I have run the "deviceQuery" program (a CUDA SDK sample app) and it works fine, so at first sight there seems to be no issue with the SDK (I have also tested other CUDA Toolkit example programs and all of them work fine).

Does anyone have an idea what the issue could be and how to solve it?
Has anyone hit this kind of error under Windows and managed to fix it?

Many thanks and BR.
Did you try to follow these steps: ?

You are also wrong about the NVIDIA version numbers; just have a look at the page, e.g. here

At the time of this writing, the driver is already beyond version 451 (so 442 is very outdated, several months old indeed). Just download it from the vendor's page if your operating system is not able to fetch the latest one automatically.
Thanks for your prompt reply, Phill

Yes, I had read the /faq/wrongdriver page, but I wasn't sure if I should install the driver because I had read on the manufacturer's forum that updating it on my notebook could cause problems. Today I finally updated it, and now I have the latest version installed on my system and it is working with no issues (at least so far).

After doing that, I cloned the latest repository, compiled the sources under Cygwin64, and ran hashcat, but unfortunately I'm still getting the same "CUDA SDK not detected" message...
For the record, I have now installed the latest CUDA Toolkit SDK, 11.0 RC.

Another test I did was to download the hashcat beta version and run the binary, and to my surprise it correctly detected the CUDA SDK!

Is it possible that the beta version has some changes/improvements regarding the backends?

I have also tested copying my module DLL into the modules folder and the OpenCL kernel into the kernels folder of the hashcat beta, and tried to run a benchmark of my kernel, but something is not working: hashcat exits right after the hash-mode message is displayed on the screen.

Could it be that there is some difference between the hashcat beta and the repository sources? Or is it something related to the build on my system?

Any idea/suggestions?

Hi again!

I finally managed to make hashcat work with the CUDA SDK!

The problem was that nvrtc.dll was not loading, and therefore the if ((rc_cuda_init == 0) && (rc_nvrtc_init == 0)) check in backend_ctx_init() never succeeded!

Note: the cuda_init() function works fine now; nvcuda.dll is loaded properly after the driver update.

The DLL was not loaded because in:

#elif defined (__CYGWIN__)
  nvrtc->lib = hc_dlopen ("nvrtc.dll");

an incorrect name was being used.
So I just changed the wrong name to the name of the DLL that is actually on my system:

#elif defined (__CYGWIN__)
  //nvrtc->lib = hc_dlopen ("nvrtc.dll");
  nvrtc->lib = hc_dlopen ("nvrtc64_110_0.dll");

and after rebuilding the application the issue was fixed!

Of course, the modification must be done in a way that works properly with any CUDA Toolkit SDK. To do that, we can reuse the same code that is used in:

#if defined (_WIN)

where a loop tries the different possible names of the DLL. Alternatively, we could remove the #elif defined (__CYGWIN__) branch entirely and just use

#if defined (_WIN) || defined (__CYGWIN__)

instead of #if defined (_WIN), which is what I finally did in the source.

To summarize:

1) I installed the latest NVIDIA driver, 451.48
2) I installed the latest CUDA Toolkit SDK (11.0 RC)
3) I modified the backend.c source file to use the correct nvrtc DLL name on my system

and now it is finally working!

Just for the record: with CUDA there is a 9.01% speed increase for my kernel compared to the OpenCL backend.

Many thanks

Your change doesn't really make sense to me, because hashcat already tries to find and load the library with the nvrtc64_*_0.dll file names:

Maybe your testing/debugging was done incorrectly or is flawed... you would need to debug that major/minor loop and revert your changes before testing which libraries the inner loop tries to load. Thx

Oh, maybe now I get it... we probably need to run the same loop when using Cygwin (not only if Windows/cmd was detected)... that could indeed be the case, because the library files on the system are the same across Cygwin and Windows.
Therefore, instead of only
#if   defined (_WIN)

we would need to use:
#if defined (_WIN) || defined (__CYGWIN__)

and remove the other branch:
#elif defined (__CYGWIN__)
below the branch/loop.
Could you test this?
Yes, in fact that is what I did.
The first test was with the hardcoded DLL name, just to verify it was working.

In the end I removed the branch

#elif defined (__CYGWIN__)

and added the condition || defined (__CYGWIN__)

to the #if defined (_WIN)

so the same loop is also used for Cygwin, and for me it is working fine.

Thanks and BR!