Hashcat benchmark comparator
I recently developed a mini-suite of tools to process and compare hashcat benchmarks. The original intent was to compare performance between hashcat's CUDA backend and its OpenCL backend.

The code is available on GitHub if anyone wants to use it, review it, or laugh at my code. It's written mostly in Python, which handles parsing, processing, and comparing the hashcat results, plus a single shell/Bash script to generate the benchmarks.
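To give a feel for the parsing side, here is a minimal sketch of reading hashcat's default `-b` benchmark output into a dict of speeds. This is my own illustration, not the repo's actual code; the regexes and unit table are assumptions based on the usual "* Hash-Mode N (Name)" / "Speed.#1.........:" output format.

```python
import re

# Multipliers for hashcat's speed units, normalized to H/s.
UNITS = {"H/s": 1, "kH/s": 1e3, "MH/s": 1e6, "GH/s": 1e9, "TH/s": 1e12}

MODE_RE = re.compile(r"^\* Hash-Mode (\d+) \((.+)\)")
SPEED_RE = re.compile(r"^Speed\.#\d+\.+:\s+([\d.]+)\s+(\S+)")

def parse_benchmark(text):
    """Map each hash-mode name to its reported speed in H/s."""
    results, name = {}, None
    for line in text.splitlines():
        m = MODE_RE.match(line)
        if m:
            name = m.group(2)  # remember the current hash mode
            continue
        m = SPEED_RE.match(line)
        if m and name:
            value, unit = float(m.group(1)), m.group(2)
            results[name] = value * UNITS[unit]
    return results
```

Real output has extra lines (device info, accel/loops details) that the regexes simply skip; hashcat's `--machine-readable` flag would make this even simpler.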

While running the tests, I found some interesting performance differences between the git version (v6.2.6-1320-g4a6b538b4+) and the release version in the Arch Linux repository (v6.2.6).

Comparing performance on the CUDA backend of the two versions, showing only the hash modes where the difference exceeds 50%, grouped under whichever version was faster:

Code:
{
  "old": [
    "LastPass + LastPass sniffed -> 19167.27%",
    "PKZIP (Compressed) -> 374.74%"
  ],
  "new": [
    "AIX {ssha1} -> 103.98%",
    "Cisco-IOS $9$ (scrypt) -> 60.05%",
    "PDF 1.4 - 1.6 (Acrobat 5 - 8) -> 57.46%",
    "Blockchain, My Wallet -> 192.90%",
    "DPAPI masterkey file v2 (context 3) -> 90.91%",
    "QNX /etc/shadow (MD5) -> 57.53%",
    "WPA-PMK-PMKID+EAPOL -> 464.19%",
    "Mozilla key3.db -> 287.52%",
    "NetNTLMv1 / NetNTLMv1+ESS (NT) -> 580.83%",
    "NetNTLMv2 (NT) -> 410.81%",
    "Flask Session Cookie ($salt.$salt.$pass) -> 263.46%"
  ]
}

This may be subject to quirks of my GPU (NVIDIA GTX 1660), my CPU (AMD Ryzen 7 3700X), and temperature fluctuations, since the tests were run on my personal desktop. To smooth out run-to-run variance, I ran 5 sequential benchmarks for each hash type, using the "--benchmark-all" flag.
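Averaging the 5 sequential runs per mode before comparing is straightforward; a small sketch of that step (again an assumption about the approach, not the repo's code):

```python
from statistics import mean

def average_runs(runs):
    """Average per-mode speeds across repeated benchmark runs.

    `runs` is a list of {hash_name: speed} dicts, one per run; only
    modes present in every run are kept, to avoid skewed averages."""
    names = set.intersection(*(set(r) for r in runs))
    return {n: mean(r[n] for r in runs) for n in names}
```

A median instead of a mean would be more robust against a single thermally-throttled run, at the cost of discarding data from the other runs.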

The full results are in a gist here: https://gist.github.com/whoisroot/5498a5...b8bc6646d4