hashcat Forum
clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE - Printable Version

+- hashcat Forum (https://hashcat.net/forum)
+-- Forum: Support (https://hashcat.net/forum/forum-3.html)
+--- Forum: hashcat (https://hashcat.net/forum/forum-45.html)
+--- Thread: clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE (/thread-8078.html)



clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE - tecxx - 01-18-2019

Hello,
I am running hashcat on a The-Distribution-Which-Does-Not-Handle-OpenCL-Well (Kali) Linux AWS EC2 instance (P3.x8 with 4x NVIDIA Tesla V100).

Cracking SHA1 hashes works fine with small test files, but when I try to crack a SHA1 hash file over 20 GB in size, the result is:

Code:
hashcat (v5.1.0) starting...

* Device #5: Not a native Intel OpenCL runtime. Expect massive speed loss.
            You can use --force to override, but do not report related errors.
nvmlDeviceGetFanSpeed(): Not Supported

nvmlDeviceGetFanSpeed(): Not Supported

nvmlDeviceGetFanSpeed(): Not Supported

nvmlDeviceGetFanSpeed(): Not Supported

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: Tesla V100-SXM2-16GB, 4040/16160 MB allocatable, 80MCU
* Device #2: Tesla V100-SXM2-16GB, 4040/16160 MB allocatable, 80MCU
* Device #3: Tesla V100-SXM2-16GB, 4040/16160 MB allocatable, 80MCU
* Device #4: Tesla V100-SXM2-16GB, 4040/16160 MB allocatable, 80MCU

OpenCL Platform #2: The pocl project
====================================
* Device #5: pthread-Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, skipped.

Bitmap table overflowed at 18 bits.
This typically happens with too many hashes and reduces your performance.
You can increase the bitmap table size with --bitmap-max, but
this creates a trade-off between L2-cache and bitmap efficiency.
It is therefore not guaranteed to restore full performance.

Hashes: 551509767 digests; 551509767 unique digests, 1 unique salts
Bitmaps: 18 bits, 262144 entries, 0x0003ffff mask, 1048576 bytes, 5/13 rotates

Applicable optimizers:
* Zero-Byte
* Early-Skip
* Not-Salted
* Not-Iterated
* Single-Salt
* Brute-Force
* Raw-Hash

Minimum password length supported by kernel: 0
Maximum password length supported by kernel: 256

ATTENTION! Pure (unoptimized) OpenCL kernels selected.
This enables cracking passwords and salts > length 32 but for the price of drastically reduced performance.
If you want to switch to optimized OpenCL kernels, append -O to your commandline.

Watchdog: Temperature abort trigger set to 90c

* Device #1: build_opts '-cl-std=CL1.2 -I OpenCL -I /usr/share/hashcat/OpenCL -D LOCAL_MEM_TYPE=1 -D VENDOR_ID=32 -D CUDA_ARCH=700 -D AMD_ROCM=0 -D VECT_SIZE=1 -D DEVICE_TYPE=4 -D DGST_R0=3 -D DGST_R1=4 -D DGST_R2=2 -D DGST_R3=1 -D DGST_ELEM=5 -D KERN_TYPE=100 -D _unroll'
clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE

Started: Thu Jan 17 23:24:44 2019
Stopped: Thu Jan 17 23:57:31 2019

It naturally takes a long time to load the hash file because it is massive, but can I avoid the crash somehow, or is my only option to split the file into smaller chunks?


RE: clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE - undeath - 01-18-2019

You need to split your hash file into smaller chunks.
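A minimal sketch of the splitting step using GNU `split` (the file names, the chunk size, and the commented-out attack options are illustrative placeholders, not part of this thread; mode 100 is hashcat's SHA1 mode, as the `KERN_TYPE=100` build option in the log above confirms):

```shell
# Tiny two-line stand-in for the real 20 GB hash list (values are SHA1 of "1" and "2"):
printf '356a192b7913b04c54574d18c28d46e6395428ab\nda4b9237bacccdf19c0760cab7aec4a8359010b0\n' > hashes.txt

# GNU split: break the list into fixed-size chunks by line count.
# For the real file you would use something like -l 100000000; pick a
# size whose in-memory footprint fits the per-device "allocatable"
# figure hashcat reports (4040 MB in the log above).
split -l 1 -d hashes.txt hashes.part.

# Then run hashcat against each chunk in turn (wordlist is a placeholder):
# for part in hashes.part.*; do hashcat -m 100 -a 0 "$part" wordlist.txt; done
```

Cracked results from each chunk can be collected afterwards with `hashcat --show` or from the potfile.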


RE: clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE - tecxx - 01-19-2019

Thank you! Done, and it's working now.