<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title><![CDATA[hashcat Forum - hashcat]]></title>
		<link>https://hashcat.net/forum/</link>
		<description><![CDATA[hashcat Forum - https://hashcat.net/forum]]></description>
		<pubDate>Sat, 16 May 2026 00:38:56 +0000</pubDate>
		<generator>MyBB</generator>
		<item>
			<title><![CDATA[hashcat v7.1.0]]></title>
			<link>https://hashcat.net/forum/thread-13353.html</link>
			<pubDate>Sat, 16 Aug 2025 10:30:16 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-13353.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
Welcome to hashcat v7.1.0!<br />
<br />
Download binaries and source code from <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">hashcat</a> or from <a href="https://github.com/hashcat/hashcat" target="_blank" rel="noopener" class="mycode_url">GitHub</a><br />
<hr class="mycode_hr" />
<br />
This is a minor release, but an important one. It comes just two weeks after the major v7.0.0 update, as part of our effort to keep release cycles shorter than in the past.<br />
<br />
Although two weeks may seem quick, this version includes several important bug fixes along with notable new features and hash modes, which makes a v7.1.0 release fully justified.<br />
<br />
If you have 5 minutes, here's the full writeup: <a href="https://github.com/hashcat/hashcat/blob/v7.1.0/docs/releases_notes_v7.1.0.pdf" target="_blank" rel="noopener" class="mycode_url">Full Release Notes and detailed writeup</a><br />
<br />
<hr class="mycode_hr" />
New Algorithms:<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">AS/400 DES</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">AS/400 SSHA1</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Blockchain, My Wallet, Legacy Wallets</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Cisco-ISE Hashed Password (SHA256)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">LUKS2 (Argon2i KDF type)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">KeePass (KDBX v4)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">SAP CODVN H (PWDSALTEDHASH) isSHA512</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sm3crypt &#36;sm3&#36;, SM3 (Unix)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">BLAKE2b-256</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">BLAKE2b-256(&#36;pass.&#36;salt)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">BLAKE2b-256(&#36;salt.&#36;pass)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">MD6 (256)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(&#36;pass.&#36;salt)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(&#36;salt.&#36;pass)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(sha1(&#36;pass))</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(sha224(&#36;pass))</span><br />
</li>
</ul>
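The generic constructions in the list above simply chain standard digests over the password and salt. Here is a minimal sketch using only Python's hashlib; inputs are illustrative, and the assumption that inner digests are passed as lowercase hex (as in comparable existing modes) should be verified against each module's self-test hash:<br />

```python
import hashlib

# Illustrative sketch of the new generic constructions; hex-encoding of
# inner digests is an assumption, not confirmed by the release notes.

def sha224_pass_salt(pw: bytes, salt: bytes) -> str:
    # sha224($pass.$salt)
    return hashlib.sha224(pw + salt).hexdigest()

def sha224_sha1_pass(pw: bytes) -> str:
    # sha224(sha1($pass)) - inner SHA-1 digest as lowercase hex
    return hashlib.sha224(hashlib.sha1(pw).hexdigest().encode()).hexdigest()

def blake2b_256_salt_pass(pw: bytes, salt: bytes) -> str:
    # BLAKE2b-256($salt.$pass): BLAKE2b truncated to a 32-byte digest
    return hashlib.blake2b(salt + pw, digest_size=32).hexdigest()
```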
A single RTX 4090 in action on the improved Blockchain, My Wallet, Legacy Wallets support (see full writeup for details):<br />
<span style="font-family: Courier New;" class="mycode_font"><br />
<blockquote class="mycode_quote"><cite>Quote:</cite>---------------------------------------------------------<br />
* Hash-Mode 34700 (Blockchain, My Wallet, Legacy Wallets)<br />
---------------------------------------------------------<br />
Speed.#01........:  5516.8 MH/s (88.85ms) @ Accel:7 Loops:512 Thr:1024 Vec:1</blockquote>
</span><br />
<hr class="mycode_hr" />
New Features<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Attack-Modes: Use 64-bit counters for amplifier keyspace</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Host Memory: Update method to query free host memory</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Add initial support for running hashcat inside Docker</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Device Memory: Warn instead of waiting on high GPU memory usage</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">... a lot more</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">See full writeup for details</span><br />
</li>
</ul>
These new features are particularly relevant if you have encountered error messages like the following, which they address directly:<br />
<ul class="mycode_list"><li><span style="color: #c10300;" class="mycode_color">Integer overflow detected in ...</span><br />
</li>
<li><span style="color: #c10300;" class="mycode_color">Not enough allocatable device memory or free host memory ...</span><br />
</li>
</ul>
Here is a preview of running hashcat inside a Docker container:<br />
<span style="font-family: Courier New;" class="mycode_font"><br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; docker run --rm --gpus=all -it hashcat bash<br />
root@d1d5c5b61432:~/hashcat# ./hashcat.bin -I<br />
hashcat (v7.1.0) starting in backend information mode<br />
CUDA Info:<br />
==========<br />
CUDA.Version.: 12.9<br />
Backend Device ID #01<br />
  Name...........: NVIDIA GeForce RTX 4090<br />
  Processor(s)...: 128<br />
  Preferred.Thrd.: 32<br />
  Clock..........: 2565<br />
  Memory.Total...: 24080 MB<br />
  Memory.Free....: 23664 MB<br />
  Memory.Unified.: 0<br />
  Local.Memory...: 99 KB<br />
  PCI.Addr.BDFe..: 0000:01:00.0</blockquote>
</span><br />
<hr class="mycode_hr" />
Python Bridge<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Fix unsalted hashlist support</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Fix the esalt structure, which was too large</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Improve support from 1:1 password-to-hash to 1:N password-to-hashes</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Improve stand-alone debugging of Python Bridge stubs</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">See full writeup for details</span><br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
This release was made possible thanks to the work of the hashcat community. <br />
<br />
We appreciate the time, skill, and testing effort that went into it, especially from those submitting fixes, reporting bugs, and helping improve portability. <br />
<br />
- atom<br />
- matrix]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
Welcome to hashcat v7.1.0!<br />
<br />
Download binaries and source code from <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">hashcat</a> or from <a href="https://github.com/hashcat/hashcat" target="_blank" rel="noopener" class="mycode_url">GitHub</a><br />
<hr class="mycode_hr" />
<br />
This is a minor release, but an important one. It comes just two weeks after the major v7.0.0 update, as part of our effort to keep release cycles shorter than in the past.<br />
<br />
Although two weeks may seem quick, this version includes several important bug fixes along with notable new features and hash modes, which makes a v7.1.0 release fully justified.<br />
<br />
If you have 5 minutes, here's the full writeup: <a href="https://github.com/hashcat/hashcat/blob/v7.1.0/docs/releases_notes_v7.1.0.pdf" target="_blank" rel="noopener" class="mycode_url">Full Release Notes and detailed writeup</a><br />
<br />
<hr class="mycode_hr" />
New Algorithms:<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">AS/400 DES</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">AS/400 SSHA1</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Blockchain, My Wallet, Legacy Wallets</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Cisco-ISE Hashed Password (SHA256)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">LUKS2 (Argon2i KDF type)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">KeePass (KDBX v4)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">SAP CODVN H (PWDSALTEDHASH) isSHA512</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sm3crypt &#36;sm3&#36;, SM3 (Unix)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">BLAKE2b-256</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">BLAKE2b-256(&#36;pass.&#36;salt)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">BLAKE2b-256(&#36;salt.&#36;pass)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">MD6 (256)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(&#36;pass.&#36;salt)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(&#36;salt.&#36;pass)</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(sha1(&#36;pass))</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">sha224(sha224(&#36;pass))</span><br />
</li>
</ul>
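The generic constructions in the list above simply chain standard digests over the password and salt. Here is a minimal sketch using only Python's hashlib; inputs are illustrative, and the assumption that inner digests are passed as lowercase hex (as in comparable existing modes) should be verified against each module's self-test hash:<br />

```python
import hashlib

# Illustrative sketch of the new generic constructions; hex-encoding of
# inner digests is an assumption, not confirmed by the release notes.

def sha224_pass_salt(pw: bytes, salt: bytes) -> str:
    # sha224($pass.$salt)
    return hashlib.sha224(pw + salt).hexdigest()

def sha224_sha1_pass(pw: bytes) -> str:
    # sha224(sha1($pass)) - inner SHA-1 digest as lowercase hex
    return hashlib.sha224(hashlib.sha1(pw).hexdigest().encode()).hexdigest()

def blake2b_256_salt_pass(pw: bytes, salt: bytes) -> str:
    # BLAKE2b-256($salt.$pass): BLAKE2b truncated to a 32-byte digest
    return hashlib.blake2b(salt + pw, digest_size=32).hexdigest()
```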
A single RTX 4090 in action on the improved Blockchain, My Wallet, Legacy Wallets support (see full writeup for details):<br />
<span style="font-family: Courier New;" class="mycode_font"><br />
<blockquote class="mycode_quote"><cite>Quote:</cite>---------------------------------------------------------<br />
* Hash-Mode 34700 (Blockchain, My Wallet, Legacy Wallets)<br />
---------------------------------------------------------<br />
Speed.#01........:  5516.8 MH/s (88.85ms) @ Accel:7 Loops:512 Thr:1024 Vec:1</blockquote>
</span><br />
<hr class="mycode_hr" />
New Features<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Attack-Modes: Use 64-bit counters for amplifier keyspace</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Host Memory: Update method to query free host memory</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Add initial support for running hashcat inside Docker</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Device Memory: Warn instead of waiting on high GPU memory usage</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">... a lot more</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">See full writeup for details</span><br />
</li>
</ul>
These new features are particularly relevant if you have encountered error messages like the following, which they address directly:<br />
<ul class="mycode_list"><li><span style="color: #c10300;" class="mycode_color">Integer overflow detected in ...</span><br />
</li>
<li><span style="color: #c10300;" class="mycode_color">Not enough allocatable device memory or free host memory ...</span><br />
</li>
</ul>
Here is a preview of running hashcat inside a Docker container:<br />
<span style="font-family: Courier New;" class="mycode_font"><br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; docker run --rm --gpus=all -it hashcat bash<br />
root@d1d5c5b61432:~/hashcat# ./hashcat.bin -I<br />
hashcat (v7.1.0) starting in backend information mode<br />
CUDA Info:<br />
==========<br />
CUDA.Version.: 12.9<br />
Backend Device ID #01<br />
  Name...........: NVIDIA GeForce RTX 4090<br />
  Processor(s)...: 128<br />
  Preferred.Thrd.: 32<br />
  Clock..........: 2565<br />
  Memory.Total...: 24080 MB<br />
  Memory.Free....: 23664 MB<br />
  Memory.Unified.: 0<br />
  Local.Memory...: 99 KB<br />
  PCI.Addr.BDFe..: 0000:01:00.0</blockquote>
</span><br />
<hr class="mycode_hr" />
Python Bridge<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Fix unsalted hashlist support</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Fix the esalt structure, which was too large</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Improve support from 1:1 password-to-hash to 1:N password-to-hashes</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Improve stand-alone debugging of Python Bridge stubs</span><br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">See full writeup for details</span><br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
This release was made possible thanks to the work of the hashcat community. <br />
<br />
We appreciate the time, skill, and testing effort that went into it, especially from those submitting fixes, reporting bugs, and helping improve portability. <br />
<br />
- atom<br />
- matrix]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v7.0.0]]></title>
			<link>https://hashcat.net/forum/thread-13330.html</link>
			<pubDate>Fri, 01 Aug 2025 21:16:46 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-13330.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
Welcome to hashcat v7.0.0!<br />
<br />
Download binaries and source code from <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">hashcat</a> or from <a href="https://github.com/hashcat/hashcat" target="_blank" rel="noopener" class="mycode_url">GitHub</a><br />
<hr class="mycode_hr" />
<br />
We're proud to announce the release of hashcat v7.0.0, the result of over two years of development, hundreds of features and fixes, and a complete refactor of several key components. This version also includes all accumulated changes from the v6.2.x minor releases.<br />
<br />
This release is huge. The full write-up is nearly 10,000 words, which exceeds what MyBB supports in a single post. <br />
<br />
If you have 30 minutes, here's the writeup: <a href="https://github.com/hashcat/hashcat/blob/v7.0.0/docs/releases_notes_v7.0.0.pdf" target="_blank" rel="noopener" class="mycode_url">Full Release Notes and detailed writeup</a><br />
<br />
Here's a quick summary:<br />
<ul class="mycode_list"><li>Over 900,000 lines of code changed<br />
</li>
<li>Contributions from 105 developers, including 74 first-time contributors<br />
</li>
<li>Merged and documented all previously unannounced 6.2.x features<br />
</li>
</ul>
<hr class="mycode_hr" />
Major New Features<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Assimilation Bridge</span>: Integrate external resources like CPUs, FPGAs, embedded interpreters, and more into the cracking pipeline. <br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Python Bridge Plugin</span>: Rapidly implement hash-matching logic in Python. No recompilation needed; multithreading and the rule engine are supported by default.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Virtual Backend Devices</span>: Internally partitions physical GPUs into multiple logical devices for better bridge integration and async workloads.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Hash-Mode Autodetection</span>: Omit the -m flag and let Hashcat detect the hash-mode, or use --identify to list possibilities.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Docker Build Support</span>: Build Hashcat in a fully containerized, cross-platform environment, including cross-compilation to Windows.<br />
</li>
</ul>
<hr class="mycode_hr" />
New Algorithm Support<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>58 new application-specific hash types, including Argon2, MetaMask, Microsoft Online Account, SNMPv3, GPG, OpenSSH, and LUKS2<br />
</li>
<li>17 new generic hash constructions used in real-world web apps and protocols<br />
</li>
<li>11 new primitives added to the crypto library, improving reuse and plugin development<br />
</li>
<li>20 new tools to extract hashes from popular sources, including APFS, VirtualBox, BitLocker, and various wallet formats<br />
</li>
</ul>
<hr class="mycode_hr" />
Performance Improvements<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Complete refactor of the autotuning engine for better device utilization<br />
</li>
<li>Major rewrite of memory management to eliminate previous 4GB allocation caps and enable full memory usage across devices<br />
</li>
<li>Improved tuning for hash-modes like NTLM, NetNTLMv2, and RAR3<br />
</li>
<li>Updated tuning database entries and lower overhead for multi-device setups<br />
</li>
<li>Optimizations to several individual hash-modes including:<ul class="mycode_list"><li>scrypt: up to +320 percent<br />
</li>
<li>RAR3: up to +54 percent<br />
</li>
<li>NetNTLMv2: +223 percent (Intel)<br />
</li>
</ul>
</li>
</ul>
<div style="text-align: center;" class="mycode_align"><a href="https://docs.google.com/spreadsheets/d/1V43YK_SxVFyDJH8lw707wciovgVJW6qWUer8PfRK5Ew" target="_blank" rel="noopener" class="mycode_url">Full Benchmark Spreadsheet</a></div>
<br />
<hr class="mycode_hr" />
New and Updated Backends<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">HIP (AMD):</span> First-class support for AMD's HIP backend, now preferred over OpenCL when both are available<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Metal (Apple):</span> Native GPU support on macOS using Metal, including full Apple Silicon compatibility and major speed improvements<br />
</li>
</ul>
<hr class="mycode_hr" />
Plugin and Developer Changes<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Improved diagnostics, tokenizer control, and debugging options<br />
</li>
<li>New reusable infrastructure for integrating algorithms directly into both modules and kernels<br />
</li>
<li>Expanded test coverage, edge-case detection, and cross-platform compatibility improvements<br />
</li>
</ul>
<hr class="mycode_hr" />
Rule Engine Enhancements<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Support for new character class logic and rejection rules, increasing rule engine flexibility<br />
</li>
<li>Refactored and cleaned up rule logic to improve reliability<br />
</li>
<li>Several commonly used rule files have been optimized and expanded<br />
</li>
</ul>
<hr class="mycode_hr" />
Additional Improvements<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Mask engine now supports 8 custom charsets (-5 to -8)<br />
</li>
<li>Status screen improvements: better kernel info, new keybind support, improved quiet mode<br />
</li>
<li>New output formats: JSON support added to status and info commands<br />
</li>
<li>Improved benchmark defaults: better masks, longer duration, more consistent output<br />
</li>
<li>Updated 3rd-party dependencies and build fixes across all platforms<br />
</li>
<li>Added handling for compressed wordlists and better I/O error recovery<br />
</li>
<li>False-positive mitigation improvements across multiple formats<br />
</li>
</ul>
<hr class="mycode_hr" />
Bug Fixes<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Resolved memory allocation and buffer size issues across backends<br />
</li>
<li>Fixed bugs in output handling, restore files, and mask parsing<br />
</li>
<li>Corrected behavior for complex attack modes under edge conditions<br />
</li>
<li>Fixed missing or broken hash extractions for multiple formats<br />
</li>
<li>Eliminated false negatives in rare multihash cases<br />
</li>
</ul>
<hr class="mycode_hr" />
Final Words<br />
<hr class="mycode_hr" />
<br />
This release was made possible thanks to the work of the hashcat community. We appreciate the time, skill, and testing effort that went into it, especially from those submitting fixes, reporting bugs, and helping improve portability. <br />
<br />
While some parts took longer than expected, we believe the result is worth the wait. We're excited to see what the community builds on top of it. If you run into any issues, let us know on GitHub or better yet, send a fix. Thanks for your continued support.<br />
<br />
- atom<br />
- matrix]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
Welcome to hashcat v7.0.0!<br />
<br />
Download binaries and source code from <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">hashcat</a> or from <a href="https://github.com/hashcat/hashcat" target="_blank" rel="noopener" class="mycode_url">GitHub</a><br />
<hr class="mycode_hr" />
<br />
We're proud to announce the release of hashcat v7.0.0, the result of over two years of development, hundreds of features and fixes, and a complete refactor of several key components. This version also includes all accumulated changes from the v6.2.x minor releases.<br />
<br />
This release is huge. The full write-up is nearly 10,000 words, which exceeds what MyBB supports in a single post. <br />
<br />
If you have 30 minutes, here's the writeup: <a href="https://github.com/hashcat/hashcat/blob/v7.0.0/docs/releases_notes_v7.0.0.pdf" target="_blank" rel="noopener" class="mycode_url">Full Release Notes and detailed writeup</a><br />
<br />
Here's a quick summary:<br />
<ul class="mycode_list"><li>Over 900,000 lines of code changed<br />
</li>
<li>Contributions from 105 developers, including 74 first-time contributors<br />
</li>
<li>Merged and documented all previously unannounced 6.2.x features<br />
</li>
</ul>
<hr class="mycode_hr" />
Major New Features<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">Assimilation Bridge</span>: Integrate external resources like CPUs, FPGAs, embedded interpreters, and more into the cracking pipeline. <br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Python Bridge Plugin</span>: Rapidly implement hash-matching logic in Python. No recompilation needed; multithreading and the rule engine are supported by default.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Virtual Backend Devices</span>: Internally partitions physical GPUs into multiple logical devices for better bridge integration and async workloads.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Hash-Mode Autodetection</span>: Omit the -m flag and let Hashcat detect the hash-mode, or use --identify to list possibilities.<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Docker Build Support</span>: Build Hashcat in a fully containerized, cross-platform environment, including cross-compilation to Windows.<br />
</li>
</ul>
<hr class="mycode_hr" />
New Algorithm Support<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>58 new application-specific hash types, including Argon2, MetaMask, Microsoft Online Account, SNMPv3, GPG, OpenSSH, and LUKS2<br />
</li>
<li>17 new generic hash constructions used in real-world web apps and protocols<br />
</li>
<li>11 new primitives added to the crypto library, improving reuse and plugin development<br />
</li>
<li>20 new tools to extract hashes from popular sources, including APFS, VirtualBox, BitLocker, and various wallet formats<br />
</li>
</ul>
<hr class="mycode_hr" />
Performance Improvements<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Complete refactor of the autotuning engine for better device utilization<br />
</li>
<li>Major rewrite of memory management to eliminate previous 4GB allocation caps and enable full memory usage across devices<br />
</li>
<li>Improved tuning for hash-modes like NTLM, NetNTLMv2, and RAR3<br />
</li>
<li>Updated tuning database entries and lower overhead for multi-device setups<br />
</li>
<li>Optimizations to several individual hash-modes including:<ul class="mycode_list"><li>scrypt: up to +320 percent<br />
</li>
<li>RAR3: up to +54 percent<br />
</li>
<li>NetNTLMv2: +223 percent (Intel)<br />
</li>
</ul>
</li>
</ul>
<div style="text-align: center;" class="mycode_align"><a href="https://docs.google.com/spreadsheets/d/1V43YK_SxVFyDJH8lw707wciovgVJW6qWUer8PfRK5Ew" target="_blank" rel="noopener" class="mycode_url">Full Benchmark Spreadsheet</a></div>
<br />
<hr class="mycode_hr" />
New and Updated Backends<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li><span style="font-weight: bold;" class="mycode_b">HIP (AMD):</span> First-class support for AMD's HIP backend, now preferred over OpenCL when both are available<br />
</li>
<li><span style="font-weight: bold;" class="mycode_b">Metal (Apple):</span> Native GPU support on macOS using Metal, including full Apple Silicon compatibility and major speed improvements<br />
</li>
</ul>
<hr class="mycode_hr" />
Plugin and Developer Changes<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Improved diagnostics, tokenizer control, and debugging options<br />
</li>
<li>New reusable infrastructure for integrating algorithms directly into both modules and kernels<br />
</li>
<li>Expanded test coverage, edge-case detection, and cross-platform compatibility improvements<br />
</li>
</ul>
<hr class="mycode_hr" />
Rule Engine Enhancements<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Support for new character class logic and rejection rules, increasing rule engine flexibility<br />
</li>
<li>Refactored and cleaned up rule logic to improve reliability<br />
</li>
<li>Several commonly used rule files have been optimized and expanded<br />
</li>
</ul>
<hr class="mycode_hr" />
Additional Improvements<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Mask engine now supports 8 custom charsets (-5 to -8)<br />
</li>
<li>Status screen improvements: better kernel info, new keybind support, improved quiet mode<br />
</li>
<li>New output formats: JSON support added to status and info commands<br />
</li>
<li>Improved benchmark defaults: better masks, longer duration, more consistent output<br />
</li>
<li>Updated 3rd-party dependencies and build fixes across all platforms<br />
</li>
<li>Added handling for compressed wordlists and better I/O error recovery<br />
</li>
<li>False-positive mitigation improvements across multiple formats<br />
</li>
</ul>
<hr class="mycode_hr" />
Bug Fixes<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Resolved memory allocation and buffer size issues across backends<br />
</li>
<li>Fixed bugs in output handling, restore files, and mask parsing<br />
</li>
<li>Corrected behavior for complex attack modes under edge conditions<br />
</li>
<li>Fixed missing or broken hash extractions for multiple formats<br />
</li>
<li>Eliminated false negatives in rare multihash cases<br />
</li>
</ul>
<hr class="mycode_hr" />
Final Words<br />
<hr class="mycode_hr" />
<br />
This release was made possible thanks to the work of the hashcat community. We appreciate the time, skill, and testing effort that went into it, especially from those submitting fixes, reporting bugs, and helping improve portability. <br />
<br />
While some parts took longer than expected, we believe the result is worth the wait. We're excited to see what the community builds on top of it. If you run into any issues, let us know on GitHub or better yet, send a fix. Thanks for your continued support.<br />
<br />
- atom<br />
- matrix]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v6.2.0]]></title>
			<link>https://hashcat.net/forum/thread-10103.html</link>
			<pubDate>Fri, 14 May 2021 17:22:40 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-10103.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v6.2.0!<br />
<br />
Download binaries and source code from: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
This release includes a new attack-mode, expanded support for many new algorithms, and a number of bug fixes:<br />
<ul class="mycode_list"><li>Added hash-mode: Apple iWork<br />
</li>
<li>Added hash-mode: AxCrypt 2 AES-128<br />
</li>
<li>Added hash-mode: AxCrypt 2 AES-256<br />
</li>
<li>Added hash-mode: BestCrypt v3 Volume Encryption<br />
</li>
<li>Added hash-mode: Bitwarden<br />
</li>
<li>Added hash-mode: Dahua Authentication MD5<br />
</li>
<li>Added hash-mode: KNX IP Secure - Device Authentication Code<br />
</li>
<li>Added hash-mode: MongoDB ServerKey SCRAM-SHA-1<br />
</li>
<li>Added hash-mode: MongoDB ServerKey SCRAM-SHA-256<br />
</li>
<li>Added hash-mode: Mozilla key3.db<br />
</li>
<li>Added hash-mode: Mozilla key4.db<br />
</li>
<li>Added hash-mode: MS Office 2016 - SheetProtection<br />
</li>
<li>Added hash-mode: PDF 1.4 - 1.6 (Acrobat 5 - 8) - edit password<br />
</li>
<li>Added hash-mode: PKCS#8 Private Keys<br />
</li>
<li>Added hash-mode: RAR3-p (Compressed)<br />
</li>
<li>Added hash-mode: RAR3-p (Uncompressed)<br />
</li>
<li>Added hash-mode: RSA/DSA/EC/OPENSSH Private Keys<br />
</li>
<li>Added hash-mode: SolarWinds Orion v2<br />
</li>
<li>Added hash-mode: SolarWinds Serv-U<br />
</li>
<li>Added hash-mode: SQLCipher<br />
</li>
<li>Added hash-mode: Stargazer Stellar Wallet XLM<br />
</li>
<li>Added hash-mode: Stuffit5<br />
</li>
<li>Added hash-mode: Telegram Desktop &gt;= v2.1.14 (PBKDF2-HMAC-SHA512)<br />
</li>
<li>Added hash-mode: Umbraco HMAC-SHA1<br />
</li>
<li>Added hash-mode: sha1(&#36;salt.sha1(&#36;pass.&#36;salt))<br />
</li>
<li>Added hash-mode: sha1(sha1(&#36;pass).&#36;salt)<br />
</li>
</ul>
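For the two new generic sha1 constructions at the end of the list, a minimal Python sketch (illustrative inputs; the inner digest is assumed to be lowercase hex, as in comparable existing modes - check the module's self-test hash to be sure):<br />

```python
import hashlib

def sha1_salt_sha1_pass_salt(pw: bytes, salt: bytes) -> str:
    # sha1($salt.sha1($pass.$salt)) - inner digest assumed lowercase hex
    inner = hashlib.sha1(pw + salt).hexdigest().encode()
    return hashlib.sha1(salt + inner).hexdigest()

def sha1_sha1_pass_salt(pw: bytes, salt: bytes) -> str:
    # sha1(sha1($pass).$salt)
    inner = hashlib.sha1(pw).hexdigest().encode()
    return hashlib.sha1(inner + salt).hexdigest()
```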
<hr class="mycode_hr" />
<br />
The major feature in this release is the new attack-mode 9, called the "Association Attack". <br />
<br />
It's an attack similar to JtR's single mode, where you use a username, a filename, a hint, or any other piece of information that could have influenced the password generation to attack one specific hash. The important part is that hashcat uses the information only for that one specific hash out of a list of many.<br />
<br />
Typically it's the username, but you are free to choose whatever piece of information you like. This speeds up clearing out easy passwords from large lists of salted hashes like bcrypt. The idea is that the more you clear in the beginning, the faster your attack is in general because hashcat can skip the cracked hashes in any subsequent attacks. <br />
<br />
For this attack-mode hashcat switches its workitem distribution strategy slightly in such a way that the top-level loop, which normally iterates through the different salts, is removed completely and instead each salt is assigned to a single GPU shader and that same shader computes the related information you provide. You can optionally apply rules to modify the candidates, creating groups of candidates per hash. They will be applied on the GPU, similar to normal `-r` usage. This can create enough work to fully utilize the GPU during this attack mode even for fast hashes.<br />
<br />
I've posted a more detailed write-up on how to use it here: <a href="https://hashcat.net/forum/thread-9534.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-9534.html</a><br />
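Conceptually, the per-hash association described above can be sketched in plain Python (field names and the rules are illustrative; hashcat does all of this on the GPU, and the real targets would be slow salted hashes like bcrypt - a fast sha1(salt + pass) stand-in is used here only to keep the sketch short):<br />

```python
import hashlib

# Each entry pairs one salted hash with the hint (e.g. a username)
# associated with that hash only; candidates derived from the hint are
# tried against that one hash and never against the others.
def association_attack(entries, rules):
    cracked = {}
    for salt, target, hint in entries:
        for rule in rules:
            cand = rule(hint).encode()
            if hashlib.sha1(salt + cand).hexdigest() == target:
                cracked[target] = cand.decode()
                break
    return cracked

# Illustrative per-candidate mutation rules, akin to -r rules on the GPU
rules = [lambda w: w, lambda w: w.capitalize(), lambda w: w + "123"]
salt = b"s1"
target = hashlib.sha1(salt + b"alice123").hexdigest()
```

Calling `association_attack([(salt, target, "alice")], rules)` recovers the password from the hint, while a different hint (say "bob") cracks nothing, since its candidates are only ever tried against its own hash.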
<br />
<hr class="mycode_hr" />
<br />
Another time consuming task included in this update was refactoring of the scrypt algorithm implementation. <br />
<br />
While it wasn't that bad to begin with, it wasn't as good as it could be. The main problem was that scrypt was declared as a slow hash (because it is one), but had no loop splitting in the kernel. Instead, the iteration count was statically set to 1 and all loops ran inside that single iteration. That's not great, because the loop iteration count normally enables hashcat to step out of the loop every N iterations (that's what you set with the -u parameter) and return from the kernel. At that moment hashcat can update your status screen, and the GPU driver has the chance to update the display and do other housekeeping. This also prevents the driver watchdog from resetting the driver state due to a perceived kernel timeout (which typically happens only on Windows and sometimes causes the compute API to crash). All other slow hashes use this technique to play nice with the OS, but scrypt previously did not. This part of the implementation was completely refactored. It now uses the N parameter from scrypt, which is typically a large number - large enough to serve as the entry point for a regular loop kernel.<br />
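The loop-splitting idea can be illustrated with a host-side analogy. This is a sketch only (the per-iteration work function is made up, not scrypt's mixing function): instead of one call running all N iterations, the loop is re-entered in chunks, and between chunks the host regains control to refresh the status screen and keep the driver watchdog happy.

```python
def one_iteration(state: int) -> int:
    # stand-in for one iteration of the expensive scrypt mixing loop
    return (state * 31 + 7) % 1_000_003

def run_monolithic(state: int, n: int) -> int:
    # old behaviour: iteration count fixed to 1, all N loops in one kernel call
    for _ in range(n):
        state = one_iteration(state)
    return state

def run_split(state: int, n: int, loops_per_call: int) -> int:
    # new behaviour: the kernel is re-entered every `loops_per_call`
    # iterations (conceptually what -u controls), returning to the host
    done = 0
    while done < n:
        chunk = min(loops_per_call, n - done)
        for _ in range(chunk):
            state = one_iteration(state)
        done += chunk
        # between chunks: host updates status, driver gets breathing room
    return state
```

The end result is identical either way; only the opportunity to return control between chunks changes.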
<br />
There are also several other scrypt-related improvements, including optimizations to the innermost sections of the Salsa code. For scrypt, it is important to have our devices fine-tuned. This is a complicated task for a generic scrypt implementation like the one included in hashcat, because it has to deal with many different scrypt parameters that are not fixed as they would be, for example, in a cryptocurrency miner setup. We need to tune them for each device and for each hash-mode to get the best results. I've posted a write-up on how to find the ideal tuning settings for your device here: <a href="https://github.com/hashcat/hashcat/blob/v6.2.0/hashcat.hctune#L388-L474" target="_blank" rel="noopener" class="mycode_url">https://github.com/hashcat/hashcat/blob/...#L388-L474</a><br />
<br />
Some algorithms and devices greatly benefit from this kind of fine-tuning. For instance, on my GTX980 development GPU the speed of Cisco-IOS &#36;9&#36; (scrypt) doubled from 8107 H/s to 15662 H/s after the fine-tuning changes, and on my Vega64 it almost tripled from 11554 H/s to 33082 H/s; most of this gain comes from the manual tuning. To enable real fine-tuning of scrypt-based algorithms, there are two new flags which plugin developers should check out: OPTS_TYPE_MP_MULTI_DISABLE and OPTS_TYPE_NATIVE_THREADS.<br />
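For orientation, a tuning entry in hashcat.hctune pairs a device name with per-hash-mode settings. The fragment below is illustrative only - the device name and values are placeholders, and the exact column semantics are documented in the comments of hashcat.hctune itself:

```
#Device name      Attack  Hash  Vector  Kernel  Kernel
#                 mode    type  width   accel   loops
GeForce_GTX_980   *       8900  1       A       A
```

Replacing the `A` (autotune) placeholders with fixed values found through benchmarking on your own hardware is what the linked write-up walks through.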
<br />
<hr class="mycode_hr" />
<br />
Changelog features:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Added new attack-mode: Association Attack (aka "Context Attack") to attack hashes from a hashlist with associated "hints"<br />
</li>
<li>Added support for true UTF-8 to UTF-16 conversion in kernel crypto library<br />
</li>
<li>Added option --hash-info to show generic information for each hash-mode<br />
</li>
<li>Added command prompt [f]inish to tell hashcat to quit after finishing the current attack<br />
</li>
</ul>
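On the UTF-8 to UTF-16 item above: true conversion matters because many Windows-era formats hash the UTF-16LE form of the password, and simply widening each UTF-8 byte with a zero byte is only correct for ASCII. A host-side Python illustration of the difference (the function names here are ours, not hashcat's):

```python
def naive_widen(utf8: bytes) -> bytes:
    # approximation: treat each UTF-8 byte as its own UTF-16 code unit
    return b"".join(bytes([b, 0]) for b in utf8)

def true_convert(utf8: bytes) -> bytes:
    # proper conversion: decode UTF-8, then encode as UTF-16LE
    return utf8.decode("utf-8").encode("utf-16-le")

ascii_pw = "secret".encode("utf-8")
umlaut_pw = "pässword".encode("utf-8")

# identical for pure ASCII ...
assert naive_widen(ascii_pw) == true_convert(ascii_pw)
# ... but different as soon as a multi-byte character appears
assert naive_widen(umlaut_pw) != true_convert(umlaut_pw)
```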
<hr class="mycode_hr" />
<br />
<br />
Changelog fixed Bugs:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Fixed access to filename which is a null-pointer in benchmark mode<br />
</li>
<li>Fixed both false negative and false positive results in -m 3000 in -a 3 (affecting only NVIDIA GPUs)<br />
</li>
<li>Fixed buffer overflow in -m 1800 in -O mode which is optimized to handle only password candidates up to length 15<br />
</li>
<li>Fixed buffer overflow in -m 4710 in -P mode and only in single hash mode if salt length is larger than 32 bytes<br />
</li>
<li>Fixed hardware management sysfs readings in status screen (typically ROCm controlled GPUs)<br />
</li>
<li>Fixed include guards in several header files<br />
</li>
<li>Fixed incorrect maximum password length support for -m 400 in optimized mode (reduced from 55 to 39)<br />
</li>
<li>Fixed internal access on module option attribute OPTS_TYPE_SUGGEST_KG with the result that it was unused<br />
</li>
<li>Fixed invalid handling of outfile folder entries for -m 22000<br />
</li>
<li>Fixed memory leak causing problems in sessions with many iterations - for instance, --benchmark-all or large mask files<br />
</li>
<li>Fixed memory leaks in several cases of errors with access to temporary files<br />
</li>
<li>Fixed NVML initialization in WSL2 environments<br />
</li>
<li>Fixed out-of-boundary reads in cases where the user activates -S for fast but pure hashes in -a 1 or -a 3 mode<br />
</li>
<li>Fixed out-of-boundary reads in kernels using module_extra_buffer_size() if -n is set to 1<br />
</li>
<li>Fixed password reassembling for cracked hashes on host for slow hashes in optimized mode that are longer than 32 characters<br />
</li>
<li>Fixed race condition in potfile check during removal of empty hashes<br />
</li>
<li>Fixed race condition resulting in out of memory error on startup if multiple hashcat instances are started at the same time<br />
</li>
<li>Fixed rare case of misalignment of the status prompt when other user warnings are shown in the hashcat output<br />
</li>
<li>Fixed search of tuning database - if a device was not assigned an alias, it couldn't be found in general<br />
</li>
<li>Fixed test on gzip header in wordlists and hashlists<br />
</li>
<li>Fixed too-early execution of some module functions that use non-final values of opts_type and opti_type<br />
</li>
<li>Fixed unexpected non-unique salts in multi-hash cracking in Bitcoin/Litecoin wallet.dat module which led to false negatives<br />
</li>
<li>Fixed unit test for -m 3000 by preventing it from generating zero hashes<br />
</li>
<li>Fixed unit tests using 'null' as padding method in Crypt::CBC when 'none' was intended<br />
</li>
<li>Fixed unterminated salt buffer in -m 23400 module_hash_encode() in case salt was of length 256<br />
</li>
<li>Fixed vector datatype support in -m 21100 (only -P mode combined with -a 3 mode was affected)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Improvements:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Apple Keychain: Notify the user about the risk of collisions / false positives<br />
</li>
<li>CUDA Backend: Do not warn about missing CUDA SDK installation if --backend-ignore-cuda is used<br />
</li>
<li>CUDA Backend: Give detailed warning if either the NVIDIA CUDA or the NVIDIA RTC library cannot be initialized<br />
</li>
<li>CUDA Backend: Use blocking events to avoid 100% CPU core usage (per GPU)<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler deadlock on NVIDIA driver &gt;= 465.89<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler segfault on legacy AMDGPU driver compiling RAR3 OpenCL kernel<br />
</li>
<li>RAR3 Kernels: Improved loop code, increasing performance by 23%<br />
</li>
<li>Scrypt Kernels: Added a number of GPU-specific optimizations per hash-mode to hashcat.hctune<br />
</li>
<li>Scrypt Kernels: Added detailed documentation on device specific tunings in hashcat.hctune<br />
</li>
<li>Scrypt Kernels: Optimized Salsa code portion by reducing register copies and removed unnecessary byte swaps<br />
</li>
<li>Scrypt Kernels: Reduced kernel wait times by making it a true split kernel where iteration count = N value<br />
</li>
<li>Scrypt Kernels: Refactored workload configuration strategy based on available resources<br />
</li>
<li>Startup time: Improved startup time by avoiding some time-intensive operations for skipped devices<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Technical:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Bcrypt: Make BCRYPT entry for CPU in hashcat.hctune after switch to OPTS_TYPE_MP_MULTI_DISABLE (basically set -n to 1)<br />
</li>
<li>Benchmark: Update benchmark_deep.pl with new hash modes added (also new hash modes which were added with v6.1.0)<br />
</li>
<li>Building: Declare phony targets in Makefile to avoid conflicts of a target name with a file of the same name<br />
</li>
<li>Building: Fixed build warnings on macOS for unrar sources<br />
</li>
<li>Building: Fixed test for DARWIN_VERSION in Makefile<br />
</li>
<li>Commandline Options: Removed option --example-hashes, now an alias of --hash-info<br />
</li>
<li>Compute API: Skip devices instead of stopping if an error occurred during initialization<br />
</li>
<li>Documentation: Added 3rd party licenses to docs/license_libs<br />
</li>
<li>Hash-Mode 8900 (Scrypt): Changed default benchmark scrypt parameters from 1k:1:1 to 16k:8:1 (default)<br />
</li>
<li>Hash-Mode 11600 (7-Zip): Improved memory handling (alloc and free) for the hook function<br />
</li>
<li>Hash-Mode 13200 (AxCrypt): Changed the name to AxCrypt 1 to avoid confusion<br />
</li>
<li>Hash-Mode 13300 (AxCrypt in-memory SHA1): Changed the name to AxCrypt 1 in-memory SHA1<br />
</li>
<li>Hash-Mode 16300 (Ethereum Pre-Sale Wallet, PBKDF2-HMAC-SHA256): Use correct buffer size allocation for AES key<br />
</li>
<li>Hash-Mode 20710 (sha256(sha256(&#36;pass).&#36;salt)): Removed unused code and fixed module_constraints<br />
</li>
<li>Hash-Mode 22000 (WPA-PBKDF2-PMKID+EAPOL): Support loading a hash from command line<br />
</li>
<li>Hash-Mode 23300 (Apple iWork): Use correct buffer size allocation for AES key<br />
</li>
<li>Hash Parser: Output support for machine-readable hash lines in --show and --left and in error messages<br />
</li>
<li>Kernel Development: Kernel cache is disabled automatically when hashcat is compiled with DEBUG=1<br />
</li>
<li>Kernel Functions: Added generic AES-GCM interface, see OpenCL/inc_cipher_aes-gcm.h<br />
</li>
<li>Kernel Functions: Refactored many functions in OpenCL/inc_ecc_secp256k1.cl, added constants and documentation<br />
</li>
<li>Kernel Functions: Refactored OpenCL/inc_ecc_secp256k1.cl to improve usage in external programs<br />
</li>
<li>Kernel Functions: Wrap atomic functions with hc_ prefix. Custom kernels need to rename "atomic_inc()" to "hc_atomic_inc()"<br />
</li>
<li>Kernel Parameters: Added new parameter 'salt_repeat' to improve large buffer management<br />
</li>
<li>Module Parameters: Add OPTS_TYPE_MP_MULTI_DISABLE for use by plugin developers to prevent multiplying -n by the MCU count<br />
</li>
<li>Module Parameters: Add OPTS_TYPE_NATIVE_THREADS for use by plugin developers to enforce native thread count<br />
</li>
<li>Module Structure: Add 3rd party library hook management functions. This also requires an update to all existing module_init()<br />
</li>
<li>OpenCL Runtime: Add support for clUnloadPlatformCompiler() to release some resources after JiT compilation<br />
</li>
<li>OpenCL Runtime: Switched default OpenCL device type on macOS from GPU to CPU. Use -D 2 to enable GPU devices<br />
</li>
<li>OpenCL Runtime: Update module_unstable_warnings() for all hash modes based on most recent versions of many OpenCL runtimes<br />
</li>
<li>Unit tests: Added 'potthrough' (like passthrough, but hash:plain) to tools/test.pl<br />
</li>
<li>Unit tests: Added Python 3 support for all of the Python code in our test framework<br />
</li>
<li>Unit tests: Fixed the packaging of test (-p) feature<br />
</li>
<li>Unit tests: Updated test.sh to show kernel type (pure or optimized) in output<br />
</li>
<li>Unit tests: Use python3/pip3 instead of just python/pip in tools/install_modules.sh<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v6.2.0!<br />
<br />
Download binaries and source code from: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
This release includes a new attack-mode, expanded support for many new algorithms, and a number of bug fixes:<br />
<ul class="mycode_list"><li>Added hash-mode: Apple iWork<br />
</li>
<li>Added hash-mode: AxCrypt 2 AES-128<br />
</li>
<li>Added hash-mode: AxCrypt 2 AES-256<br />
</li>
<li>Added hash-mode: BestCrypt v3 Volume Encryption<br />
</li>
<li>Added hash-mode: Bitwarden<br />
</li>
<li>Added hash-mode: Dahua Authentication MD5<br />
</li>
<li>Added hash-mode: KNX IP Secure - Device Authentication Code<br />
</li>
<li>Added hash-mode: MongoDB ServerKey SCRAM-SHA-1<br />
</li>
<li>Added hash-mode: MongoDB ServerKey SCRAM-SHA-256<br />
</li>
<li>Added hash-mode: Mozilla key3.db<br />
</li>
<li>Added hash-mode: Mozilla key4.db<br />
</li>
<li>Added hash-mode: MS Office 2016 - SheetProtection<br />
</li>
<li>Added hash-mode: PDF 1.4 - 1.6 (Acrobat 5 - 8) - edit password<br />
</li>
<li>Added hash-mode: PKCS#8 Private Keys<br />
</li>
<li>Added hash-mode: RAR3-p (Compressed)<br />
</li>
<li>Added hash-mode: RAR3-p (Uncompressed)<br />
</li>
<li>Added hash-mode: RSA/DSA/EC/OPENSSH Private Keys<br />
</li>
<li>Added hash-mode: SolarWinds Orion v2<br />
</li>
<li>Added hash-mode: SolarWinds Serv-U<br />
</li>
<li>Added hash-mode: SQLCipher<br />
</li>
<li>Added hash-mode: Stargazer Stellar Wallet XLM<br />
</li>
<li>Added hash-mode: Stuffit5<br />
</li>
<li>Added hash-mode: Telegram Desktop &gt;= v2.1.14 (PBKDF2-HMAC-SHA512)<br />
</li>
<li>Added hash-mode: Umbraco HMAC-SHA1<br />
</li>
<li>Added hash-mode: sha1(&#36;salt.sha1(&#36;pass.&#36;salt))<br />
</li>
<li>Added hash-mode: sha1(sha1(&#36;pass).&#36;salt)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
The major feature in this release is the new attack-mode 9, called the "Association Attack". <br />
<br />
It's an attack similar to JtR's single mode: you use a username, a filename, a hint, or any other piece of information that could have influenced the password generation to attack one specific hash. The important part is that hashcat uses this information for only one specific hash out of a list of many.<br />
<br />
Typically it's the username, but you are free to choose whatever piece of information you like. This speeds up clearing out easy passwords from large lists of salted hashes like bcrypt. The idea is that the more you clear in the beginning, the faster your attack is in general because hashcat can skip the cracked hashes in any subsequent attacks. <br />
<br />
For this attack-mode, hashcat slightly changes its work-item distribution strategy: the top-level loop, which normally iterates through the different salts, is removed completely; instead, each salt is assigned to a single GPU shader, and that same shader processes the associated information you provide. You can optionally apply rules to modify the candidates, creating a group of candidates per hash. They are applied on the GPU, similar to normal `-r` usage. This can create enough work to fully utilize the GPU in this attack-mode, even for fast hashes.<br />
<br />
I've posted a more detailed write-up on how to use it here: <a href="https://hashcat.net/forum/thread-9534.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-9534.html</a><br />
<br />
<hr class="mycode_hr" />
<br />
Another time-consuming task in this update was the refactoring of the scrypt algorithm implementation.<br />
<br />
While it wasn't that bad to begin with, it wasn't as good as it could be. The main problem was that scrypt was declared as a slow hash (because it is one), but had no loop splitting in the kernel. Instead, the iteration count was statically set to 1 and all loops ran inside that single iteration. That's not great, because the loop iteration count normally enables hashcat to step out of the loop every N iterations (that's what you set with the -u parameter) and return from the kernel. At that moment hashcat can update your status screen, and the GPU driver has the chance to update the display and do other housekeeping. This also prevents the driver watchdog from resetting the driver state due to a perceived kernel timeout (which typically happens only on Windows and sometimes causes the compute API to crash). All other slow hashes use this technique to play nice with the OS, but scrypt previously did not. This part of the implementation was completely refactored. It now uses the N parameter from scrypt, which is typically a large number - large enough to serve as the entry point for a regular loop kernel.<br />
<br />
There are also several other scrypt-related improvements, including optimizations to the innermost sections of the Salsa code. For scrypt, it is important to have our devices fine-tuned. This is a complicated task for a generic scrypt implementation like the one included in hashcat, because it has to deal with many different scrypt parameters that are not fixed as they would be, for example, in a cryptocurrency miner setup. We need to tune them for each device and for each hash-mode to get the best results. I've posted a write-up on how to find the ideal tuning settings for your device here: <a href="https://github.com/hashcat/hashcat/blob/v6.2.0/hashcat.hctune#L388-L474" target="_blank" rel="noopener" class="mycode_url">https://github.com/hashcat/hashcat/blob/...#L388-L474</a><br />
<br />
Some algorithms and devices greatly benefit from this kind of fine-tuning. For instance, on my GTX980 development GPU the speed of Cisco-IOS &#36;9&#36; (scrypt) doubled from 8107 H/s to 15662 H/s after the fine-tuning changes, and on my Vega64 it almost tripled from 11554 H/s to 33082 H/s; most of this gain comes from the manual tuning. To enable real fine-tuning of scrypt-based algorithms, there are two new flags which plugin developers should check out: OPTS_TYPE_MP_MULTI_DISABLE and OPTS_TYPE_NATIVE_THREADS.<br />
<br />
<hr class="mycode_hr" />
<br />
Changelog features:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Added new attack-mode: Association Attack (aka "Context Attack") to attack hashes from a hashlist with associated "hints"<br />
</li>
<li>Added support for true UTF-8 to UTF-16 conversion in kernel crypto library<br />
</li>
<li>Added option --hash-info to show generic information for each hash-mode<br />
</li>
<li>Added command prompt [f]inish to tell hashcat to quit after finishing the current attack<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
<br />
Changelog fixed Bugs:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Fixed access to filename which is a null-pointer in benchmark mode<br />
</li>
<li>Fixed both false negative and false positive results in -m 3000 in -a 3 (affecting only NVIDIA GPUs)<br />
</li>
<li>Fixed buffer overflow in -m 1800 in -O mode which is optimized to handle only password candidates up to length 15<br />
</li>
<li>Fixed buffer overflow in -m 4710 in -P mode and only in single hash mode if salt length is larger than 32 bytes<br />
</li>
<li>Fixed hardware management sysfs readings in status screen (typically ROCm controlled GPUs)<br />
</li>
<li>Fixed include guards in several header files<br />
</li>
<li>Fixed incorrect maximum password length support for -m 400 in optimized mode (reduced from 55 to 39)<br />
</li>
<li>Fixed internal access on module option attribute OPTS_TYPE_SUGGEST_KG with the result that it was unused<br />
</li>
<li>Fixed invalid handling of outfile folder entries for -m 22000<br />
</li>
<li>Fixed memory leak causing problems in sessions with many iterations - for instance, --benchmark-all or large mask files<br />
</li>
<li>Fixed memory leaks in several cases of errors with access to temporary files<br />
</li>
<li>Fixed NVML initialization in WSL2 environments<br />
</li>
<li>Fixed out-of-boundary reads in cases where the user activates -S for fast but pure hashes in -a 1 or -a 3 mode<br />
</li>
<li>Fixed out-of-boundary reads in kernels using module_extra_buffer_size() if -n is set to 1<br />
</li>
<li>Fixed password reassembling for cracked hashes on host for slow hashes in optimized mode that are longer than 32 characters<br />
</li>
<li>Fixed race condition in potfile check during removal of empty hashes<br />
</li>
<li>Fixed race condition resulting in out of memory error on startup if multiple hashcat instances are started at the same time<br />
</li>
<li>Fixed rare case of misalignment of the status prompt when other user warnings are shown in the hashcat output<br />
</li>
<li>Fixed search of tuning database - if a device was not assigned an alias, it couldn't be found in general<br />
</li>
<li>Fixed test on gzip header in wordlists and hashlists<br />
</li>
<li>Fixed too-early execution of some module functions that use non-final values of opts_type and opti_type<br />
</li>
<li>Fixed unexpected non-unique salts in multi-hash cracking in Bitcoin/Litecoin wallet.dat module which led to false negatives<br />
</li>
<li>Fixed unit test for -m 3000 by preventing it from generating zero hashes<br />
</li>
<li>Fixed unit tests using 'null' as padding method in Crypt::CBC when 'none' was intended<br />
</li>
<li>Fixed unterminated salt buffer in -m 23400 module_hash_encode() in case salt was of length 256<br />
</li>
<li>Fixed vector datatype support in -m 21100 (only -P mode combined with -a 3 mode was affected)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Improvements:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Apple Keychain: Notify the user about the risk of collisions / false positives<br />
</li>
<li>CUDA Backend: Do not warn about missing CUDA SDK installation if --backend-ignore-cuda is used<br />
</li>
<li>CUDA Backend: Give detailed warning if either the NVIDIA CUDA or the NVIDIA RTC library cannot be initialized<br />
</li>
<li>CUDA Backend: Use blocking events to avoid 100% CPU core usage (per GPU)<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler deadlock on NVIDIA driver &gt;= 465.89<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler segfault on legacy AMDGPU driver compiling RAR3 OpenCL kernel<br />
</li>
<li>RAR3 Kernels: Improved loop code, increasing performance by 23%<br />
</li>
<li>Scrypt Kernels: Added a number of GPU-specific optimizations per hash-mode to hashcat.hctune<br />
</li>
<li>Scrypt Kernels: Added detailed documentation on device specific tunings in hashcat.hctune<br />
</li>
<li>Scrypt Kernels: Optimized Salsa code portion by reducing register copies and removed unnecessary byte swaps<br />
</li>
<li>Scrypt Kernels: Reduced kernel wait times by making it a true split kernel where iteration count = N value<br />
</li>
<li>Scrypt Kernels: Refactored workload configuration strategy based on available resources<br />
</li>
<li>Startup time: Improved startup time by avoiding some time-intensive operations for skipped devices<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Technical:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Bcrypt: Make BCRYPT entry for CPU in hashcat.hctune after switch to OPTS_TYPE_MP_MULTI_DISABLE (basically set -n to 1)<br />
</li>
<li>Benchmark: Update benchmark_deep.pl with new hash modes added (also new hash modes which were added with v6.1.0)<br />
</li>
<li>Building: Declare phony targets in Makefile to avoid conflicts of a target name with a file of the same name<br />
</li>
<li>Building: Fixed build warnings on macOS for unrar sources<br />
</li>
<li>Building: Fixed test for DARWIN_VERSION in Makefile<br />
</li>
<li>Commandline Options: Removed option --example-hashes, now an alias of --hash-info<br />
</li>
<li>Compute API: Skip devices instead of stopping if an error occurred during initialization<br />
</li>
<li>Documentation: Added 3rd party licenses to docs/license_libs<br />
</li>
<li>Hash-Mode 8900 (Scrypt): Changed default benchmark scrypt parameters from 1k:1:1 to 16k:8:1 (default)<br />
</li>
<li>Hash-Mode 11600 (7-Zip): Improved memory handling (alloc and free) for the hook function<br />
</li>
<li>Hash-Mode 13200 (AxCrypt): Changed the name to AxCrypt 1 to avoid confusion<br />
</li>
<li>Hash-Mode 13300 (AxCrypt in-memory SHA1): Changed the name to AxCrypt 1 in-memory SHA1<br />
</li>
<li>Hash-Mode 16300 (Ethereum Pre-Sale Wallet, PBKDF2-HMAC-SHA256): Use correct buffer size allocation for AES key<br />
</li>
<li>Hash-Mode 20710 (sha256(sha256(&#36;pass).&#36;salt)): Removed unused code and fixed module_constraints<br />
</li>
<li>Hash-Mode 22000 (WPA-PBKDF2-PMKID+EAPOL): Support loading a hash from command line<br />
</li>
<li>Hash-Mode 23300 (Apple iWork): Use correct buffer size allocation for AES key<br />
</li>
<li>Hash Parser: Output support for machine-readable hash lines in --show and --left and in error messages<br />
</li>
<li>Kernel Development: Kernel cache is disabled automatically when hashcat is compiled with DEBUG=1<br />
</li>
<li>Kernel Functions: Added generic AES-GCM interface, see OpenCL/inc_cipher_aes-gcm.h<br />
</li>
<li>Kernel Functions: Refactored many functions in OpenCL/inc_ecc_secp256k1.cl, added constants and documentation<br />
</li>
<li>Kernel Functions: Refactored OpenCL/inc_ecc_secp256k1.cl to improve usage in external programs<br />
</li>
<li>Kernel Functions: Wrap atomic functions with hc_ prefix. Custom kernels need to rename "atomic_inc()" to "hc_atomic_inc()"<br />
</li>
<li>Kernel Parameters: Added new parameter 'salt_repeat' to improve large buffer management<br />
</li>
<li>Module Parameters: Add OPTS_TYPE_MP_MULTI_DISABLE for use by plugin developers to prevent multiplying -n by the MCU count<br />
</li>
<li>Module Parameters: Add OPTS_TYPE_NATIVE_THREADS for use by plugin developers to enforce native thread count<br />
</li>
<li>Module Structure: Add 3rd party library hook management functions. This also requires an update to all existing module_init()<br />
</li>
<li>OpenCL Runtime: Add support for clUnloadPlatformCompiler() to release some resources after JiT compilation<br />
</li>
<li>OpenCL Runtime: Switched default OpenCL device type on macOS from GPU to CPU. Use -D 2 to enable GPU devices<br />
</li>
<li>OpenCL Runtime: Update module_unstable_warnings() for all hash modes based on most recent versions of many OpenCL runtimes<br />
</li>
<li>Unit tests: Added 'potthrough' (like passthrough, but hash:plain) to tools/test.pl<br />
</li>
<li>Unit tests: Added Python 3 support for all of the Python code in our test framework<br />
</li>
<li>Unit tests: Fixed the packaging of test (-p) feature<br />
</li>
<li>Unit tests: Updated test.sh to show kernel type (pure or optimized) in output<br />
</li>
<li>Unit tests: Use python3/pip3 instead of just python/pip in tools/install_modules.sh<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v6.1.0]]></title>
			<link>https://hashcat.net/forum/thread-9417.html</link>
			<pubDate>Tue, 28 Jul 2020 10:40:49 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-9417.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v6.1.0!<br />
<br />
Download binaries and source code from: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms and fixing bugs:<br />
<ul class="mycode_list"><li>Added hash-mode: Apple Keychain<br />
</li>
<li>Added hash-mode: XMPP SCRAM<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog fixed Bugs:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Fixed integer overflow for large masks in -a 6 attack mode<br />
</li>
<li>Fixed alias detection with additional processor core count check<br />
</li>
<li>Fixed maximum password length in modules of hash-modes 600, 7800, 7801 and 9900<br />
</li>
<li>Fixed non-zero status code when using --stdout<br />
</li>
<li>Fixed uninitialized value in bitsliced DES kernel (BF mode only) leading to false negatives<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Improvements:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Compile ZLIB: Fixed makefile include paths in case USE_SYSTEM_ZLIB is used<br />
</li>
<li>Compile macOS: Fixed makefile target 'clean' to correctly remove *.dSYM folders<br />
</li>
<li>OpenCL Kernels: Added datatypes to literals of enum constants<br />
</li>
<li>OpenCL Kernels: Added pure kernels for hash-mode 600 (BLAKE2b-512)<br />
</li>
<li>OpenCL Runtime: Reinterpret return code CL_DEVICE_NOT_FOUND from clGetDeviceIDs() as non-fatal<br />
</li>
<li>OpenCL Runtime: Add some unstable warnings for some SHA512 based algorithms on AMD GPU on macOS<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Technical:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Backend: Changed the maximum number of compute devices from 64 to 128<br />
</li>
<li>Tests: Improved tests for hash-mode 11300 (Bitcoin/Litecoin wallet.dat)<br />
</li>
<li>Tests: Improved tests for hash-mode 13200 (AxCrypt)<br />
</li>
<li>Tests: Improved tests for hash-mode 13600 (WinZip)<br />
</li>
<li>Tests: Improved tests for hash-mode 16400 (CRAM-MD5 Dovecot)<br />
</li>
<li>Tests: Improved tests for hash-mode 16800 (WPA-PMKID-PBKDF2)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v6.1.0!<br />
<br />
Download binaries and source code from: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms and fixing bugs:<br />
<ul class="mycode_list"><li>Added hash-mode: Apple Keychain<br />
</li>
<li>Added hash-mode: XMPP SCRAM<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog fixed Bugs:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Fixed integer overflow for large masks in -a 6 attack mode<br />
</li>
<li>Fixed alias detection with additional processor core count check<br />
</li>
<li>Fixed maximum password length in modules of hash-modes 600, 7800, 7801 and 9900<br />
</li>
<li>Fixed non-zero status code when using --stdout<br />
</li>
<li>Fixed uninitialized value in bitsliced DES kernel (BF mode only) leading to false negatives<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Improvements:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Compile ZLIB: Fixed makefile include paths in case USE_SYSTEM_ZLIB is used<br />
</li>
<li>Compile macOS: Fixed makefile target 'clean' to correctly remove *.dSYM folders<br />
</li>
<li>OpenCL Kernels: Added datatypes to literals of enum constants<br />
</li>
<li>OpenCL Kernels: Added pure kernels for hash-mode 600 (BLAKE2b-512)<br />
</li>
<li>OpenCL Runtime: Reinterpret return code CL_DEVICE_NOT_FOUND from clGetDeviceIDs() as non-fatal<br />
</li>
<li>OpenCL Runtime: Add some unstable warnings for some SHA512 based algorithms on AMD GPU on macOS<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Technical:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Backend: Changed the maximum number of compute devices from 64 to 128<br />
</li>
<li>Tests: Improved tests for hash-mode 11300 (Bitcoin/Litecoin wallet.dat)<br />
</li>
<li>Tests: Improved tests for hash-mode 13200 (AxCrypt)<br />
</li>
<li>Tests: Improved tests for hash-mode 13600 (WinZip)<br />
</li>
<li>Tests: Improved tests for hash-mode 16400 (CRAM-MD5 Dovecot)<br />
</li>
<li>Tests: Improved tests for hash-mode 16800 (WPA-PMKID-PBKDF2)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat 6.0.0]]></title>
			<link>https://hashcat.net/forum/thread-9303.html</link>
			<pubDate>Tue, 16 Jun 2020 15:37:04 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-9303.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v6.0.0!<br />
<br />
Download binaries and source code from: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
It has been a long time since the last release, and a long journey for hashcat 6.0.0 - which we are releasing today. It comes with a lot of performance improvements, new features, and detailed documentation for both users and developers.<br />
<br />
In total, we had over 1800 Git commits since the last release (5.1.0) - from 29 different contributors. <br />
<br />
For a full list of contributors, please see: <a href="https://github.com/hashcat/hashcat/graphs/contributors?from=2018-12-02&amp;to=2020-06-16" target="_blank" rel="noopener" class="mycode_url">https://github.com/hashcat/hashcat/graph...2020-06-16</a><br />
<br />
The previous release of hashcat was over one year ago, but hashcat changes daily and has improved a lot in that time. We would like to release new hashcat versions more frequently in the future, but as you can see from the huge architectural changes below, this version is exceptional... Good things take time!<br />
<br />
<hr class="mycode_hr" />
<br />
The new major features of hashcat 6.0.0:<br />
<ul class="mycode_list"><li>New plugin interface - for modular hash-modes<br />
</li>
<li>New compute-backend API interface - for adding compute APIs other than OpenCL<br />
</li>
<li>CUDA added as a new compute-backend API<br />
</li>
<li>Comprehensive plugin developer guide<br />
</li>
<li>GPU Emulation mode - for using kernel code on the host CPU<br />
</li>
<li>Better GPU memory and thread management<br />
</li>
<li>Improved auto-tuning based on available resources<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Along with the major changes, we have added the following 51 frequently requested new algorithms:<br />
<ul class="mycode_list"><li>AES Crypt (SHA256)<br />
</li>
<li>Android Backup<br />
</li>
<li>AuthMe sha256<br />
</li>
<li>BitLocker<br />
</li>
<li>BitShares v0.x<br />
</li>
<li>Blockchain, My Wallet, Second Password (SHA256)<br />
</li>
<li>Citrix NetScaler (SHA512)<br />
</li>
<li>DiskCryptor<br />
</li>
<li>Electrum Wallet (Salt-Type 3-5)<br />
</li>
<li>Huawei Router sha1(md5(&#36;pass).&#36;salt)<br />
</li>
<li>Java Object hashCode()<br />
</li>
<li>Kerberos 5 Pre-Auth etype 17 (AES128-CTS-HMAC-SHA1-96)<br />
</li>
<li>Kerberos 5 Pre-Auth etype 18 (AES256-CTS-HMAC-SHA1-96)<br />
</li>
<li>Kerberos 5 TGS-REP etype 17 (AES128-CTS-HMAC-SHA1-96)<br />
</li>
<li>Kerberos 5 TGS-REP etype 18 (AES256-CTS-HMAC-SHA1-96)<br />
</li>
<li>MultiBit Classic .key (MD5)<br />
</li>
<li>MultiBit HD (scrypt)<br />
</li>
<li>MySQL &#36;A&#36; (sha256crypt)<br />
</li>
<li>Open Document Format (ODF) 1.1 (SHA-1, Blowfish)<br />
</li>
<li>Open Document Format (ODF) 1.2 (SHA-256, AES)<br />
</li>
<li>Oracle Transportation Management (SHA256)<br />
</li>
<li>PKZIP archive encryption<br />
</li>
<li>PKZIP Master Key<br />
</li>
<li>Python passlib pbkdf2-sha1<br />
</li>
<li>Python passlib pbkdf2-sha256<br />
</li>
<li>Python passlib pbkdf2-sha512<br />
</li>
<li>QNX /etc/shadow (MD5)<br />
</li>
<li>QNX /etc/shadow (SHA256)<br />
</li>
<li>QNX /etc/shadow (SHA512)<br />
</li>
<li>RedHat 389-DS LDAP (PBKDF2-HMAC-SHA256)<br />
</li>
<li>Ruby on Rails Restful-Authentication<br />
</li>
<li>SecureZIP AES-128<br />
</li>
<li>SecureZIP AES-192<br />
</li>
<li>SecureZIP AES-256<br />
</li>
<li>SolarWinds Orion<br />
</li>
<li>Telegram Desktop App Passcode (PBKDF2-HMAC-SHA1)<br />
</li>
<li>Telegram Mobile App Passcode (SHA256)<br />
</li>
<li>Web2py pbkdf2-sha512<br />
</li>
<li>WPA-PBKDF2-PMKID+EAPOL<br />
</li>
<li>WPA-PMK-PMKID+EAPOL<br />
</li>
<li>md5(&#36;salt.sha1(&#36;salt.&#36;pass))<br />
</li>
<li>md5(sha1(&#36;pass).md5(&#36;pass).sha1(&#36;pass))<br />
</li>
<li>md5(sha1(&#36;salt).md5(&#36;pass))<br />
</li>
<li>sha1(md5(md5(&#36;pass)))<br />
</li>
<li>sha1(md5(&#36;pass.&#36;salt))<br />
</li>
<li>sha1(md5(&#36;pass).&#36;salt)<br />
</li>
<li>sha1(&#36;salt1.&#36;pass.&#36;salt2)<br />
</li>
<li>sha256(md5(&#36;pass))<br />
</li>
<li>sha256(&#36;salt.&#36;pass.&#36;salt)<br />
</li>
<li>sha256(sha256_bin(&#36;pass))<br />
</li>
<li>sha256(sha256(&#36;pass).&#36;salt)<br />
</li>
</ul>
With so many new hash-modes added, we're happy to announce that we now support over 320 different algorithms!<br />
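Several of the chained modes listed above are simple compositions of standard digests. As an illustration, here is a Python sketch of sha1(md5($pass).$salt); the use of the lowercase hex digest as the intermediate value follows the common convention for these chained modes, but hashcat's example hashes are the authoritative reference for each mode's exact format:

```python
import hashlib

# sha1(md5($pass).$salt): the inner MD5 is encoded as lowercase hex,
# concatenated with the raw salt, then hashed with SHA-1.
def sha1_md5pass_salt(password: bytes, salt: bytes) -> str:
    inner = hashlib.md5(password).hexdigest().encode()
    return hashlib.sha1(inner + salt).hexdigest()

print(sha1_md5pass_salt(b"hashcat", b"1234"))  # 40 hex characters
```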
<br />
<hr class="mycode_hr" />
<br />
And here's a preview of some of the performance improvements:<br />
<ul class="mycode_list"><li>MD5: 8.05%<br />
</li>
<li>NTLM: 13.70%<br />
</li>
<li>Domain Cached Credentials (DCC), MS Cache: 11.91%<br />
</li>
<li>Domain Cached Credentials 2 (DCC2), MS Cache 2: 12.51%<br />
</li>
<li>NetNTLMv1: 15.79%<br />
</li>
<li>NetNTLMv2: 6.98%<br />
</li>
<li>WPA/WPA2: 13.35%<br />
</li>
<li>sha256crypt &#36;5&#36;, SHA256 (Unix): 8.77%<br />
</li>
<li>sha512crypt &#36;6&#36;, SHA512 (Unix): 20.33%<br />
</li>
<li>bcrypt: 45.58%<br />
</li>
<li>IPMI2 RAKP HMAC-SHA1: 20.03%<br />
</li>
<li>SAP CODVN B (BCODE): 32.37%<br />
</li>
<li>Blockchain, My Wallet: 31.00%<br />
</li>
<li>Electrum Wallet (Salt-Type 1-3): 109.46%<br />
</li>
<li>WinZip: 119.43%<br />
</li>
</ul>
For a full list of all improvements, please see here: <a href="https://docs.google.com/spreadsheets/d/1CK02Qm4GLG8clCrqUB1EklhHBk37_WgIHWKMTtOilAY/edit?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://docs.google.com/spreadsheets/d/1...sp=sharing</a><br />
<br />
<hr class="mycode_hr" />
<br />
In addition to these, there are a number of other new features and changes - but in this post, we want to focus mainly on the major changes to keep the release notes to a digestible length. For those interested, the changelog and git history have a more complete list of all changes.<br />
<br />
Most of these changes are aimed at developers. These release notes are intentionally verbose to inform current contributors and developers, as well as to catch the interest of potential future hashcat contributors.<br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Plugin Interface<br />
<br />
<hr class="mycode_hr" />
<br />
One of the first things you will notice after unpacking the new hashcat version is the new modules folder. We have had modularity in mind for a long time, and have finally managed to implement it: each and every hash type is separated into its own module. This not only makes the code much easier to read, write, and maintain, but it also comes with a very nice new architecture, interface, and added flexibility.<br />
<br />
In essence, this is actually just some overdue refactoring, but it comes with a lot of benefits for developers working on new hash types: it makes it much easier to write new host code (including parsers, decoders, encoders, hooks, etc.). The hash type code is 100% separated from the core code, meaning there is no more need to edit the hashcat core sources to add a new hash type. This also enables much easier distribution of custom kernels which have not been pushed to the main repository.<br />
<br />
During more than four months of "conversion" of the old hash types, we designed a new common interface and made all existing hash modes work with this new plugin interface. We even created a new testing framework, and converted all the old testing modules. See the tools/test_modules/ folder for more information.<br />
<br />
A huge thanks to everyone helping to convert hash types and/or tests for these new interfaces. This has not only shown us that the new interface works great and is flexible enough to cover all the different needs from the different modules, but it also shows that contributors are able to easily write modular code.<br />
<br />
The new fully modularized hash-type integration makes the hash-type-specific code more compact and encapsulated, while maintaining and even adding flexibility. For instance, it is now possible to easily add hash-mode-specific JiT (just-in-time) compiler flags which are used at kernel compilation runtime, or to mark hash-mode-specific unstable warnings for particular setups (for instance, depending on driver and hardware). One could easily add new restrictions and limitations directly to the module, without cluttering other parts of hashcat (avoiding "spaghetti code" and "special cases" everywhere).<br />
<br />
There is a lot to say about this new architecture that we've designed, and we could go into much further detail, but we will do our best not to go too far here. Fortunately, for everybody interested, we also wrote a hashcat plugin interface guide for developers. This guide is the first official "how to add a new hash type" document, already consisting of almost 20,000 words. It does not cover every detail, but it gives you everything you need to get started adding your own hash type. <br />
<br />
Be prepared, because reading will take a bit of time. You can find it here: <a href="https://github.com/hashcat/hashcat/blob/master/docs/hashcat-plugin-development-guide.md" target="_blank" rel="noopener" class="mycode_url">https://github.com/hashcat/hashcat/blob/...t-guide.md</a><br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Backend Interface<br />
<br />
<hr class="mycode_hr" />
<br />
Similar to the Plugin Interface feature, it took us quite some time and effort to refactor how we deal with supporting different compute devices in hashcat. <br />
<br />
As you will notice, we have changed many command line parameters to --backend-* as replacements of the old --opencl-* parameters. The reason for this is that hashcat now has a more flexible architecture for how we deal with different backends (like CUDA/OpenCL etc). With this system, we can add additional backends in an elegant way whenever we may need to in the future.<br />
<br />
The system is designed in such a way that backend-specific code is abstracted away from other operations (like loading the kernel source code etc) and uses a common interface which makes the code much more readable and easy to use.<br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: CUDA Support<br />
<br />
<hr class="mycode_hr" />
<br />
This is basically an "application" of the new Backend Interface feature. With the new architecture for hashcat backends, we were able to start supporting CUDA for NVIDIA devices. By NVIDIA devices, we mean any of their compute devices that support CUDA, not just discrete GPUs! This enables hashcat to run on chips such as the NVIDIA Jetson or NVIDIA Xavier. This also enables us to utilize CUDA on platforms where NVIDIA does not release a driver capable of OpenCL, including ARM platforms and IBM POWER9 platforms.<br />
<br />
There are several other advantages that CUDA has over OpenCL on NVIDIA devices, but the most important one is that, by installing the CUDA Toolkit, the user can unlock the entire amount of GPU memory in a single block allocation.<br />
<br />
We recommend (at the time of this writing) installing the CUDA Toolkit, skipping the NVIDIA driver it ships with, and installing the latest/recommended driver from nvidia.com instead. Hashcat will actually warn you if you have an NVIDIA device but "only" use the OpenCL driver, because you <span style="font-weight: bold;" class="mycode_b">should</span> install the CUDA Toolkit for CUDA-supported devices. This step is mandatory if you want to use the CUDA backend instead of OpenCL, because there is no JiT compiler for CUDA like the one that comes built into the NVIDIA OpenCL drivers.<br />
<br />
Hashcat will list all the devices (CUDA devices in addition to OpenCL devices) with --backend-info (short: -I) and you can easily select the devices you want with --backend-devices (short: -d). Of course, hashcat prefers the "CUDA devices" if available! (and for the curious reader: no, you can't actually use OpenCL and CUDA at the same time for the same device in hashcat - we call this an alias. The speed will NOT double this way <img src="https://hashcat.net/forum/images/smilies/tongue.gif" alt="Tongue" title="Tongue" class="smilie smilie_5" />)<br />
<br />
One of the biggest advantages of CUDA over OpenCL is full use of shared memory (sometimes also called local memory). In OpenCL, a minimum of 1 byte is reserved, which has bigger implications than may be apparent at first. For example, most NVIDIA cards have 48 KiB of shared memory. To efficiently compute bcrypt, each thread requires 4 KiB of this shared memory pool. This means that with CUDA we are able to use 12 threads for bcrypt instead of just 11 threads with OpenCL. This and other optimizations are the reason we improved bcrypt performance by 46.90%.<br />
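The thread-count arithmetic above is straightforward; a minimal sketch, using the 48 KiB pool, 4 KiB per bcrypt thread, and the 1-byte OpenCL reservation stated in the text:

```python
SHARED_MEM = 48 * 1024   # shared (local) memory pool on most NVIDIA cards, in bytes
PER_THREAD = 4 * 1024    # shared memory each bcrypt thread requires, in bytes
OPENCL_RESERVED = 1      # OpenCL reserves at least 1 byte of local memory

cuda_threads = SHARED_MEM // PER_THREAD                        # 12
opencl_threads = (SHARED_MEM - OPENCL_RESERVED) // PER_THREAD  # 11

print(cuda_threads, opencl_threads)  # prints "12 11"
```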
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Emulation Mode<br />
<br />
<hr class="mycode_hr" />
<br />
This feature introduces a convenient way to use kernel code within modules or host code. We came up with this strategy for the following reasons:<br />
<ul class="mycode_list"><li>For complex kernels with lots of code, it's easier to debug the code on the host side as a standalone project. This also saves long startup times and increases development speed.<br />
</li>
<li>Reuse complex kernel code from within the module. A good example is WPA EAPOL/PMKID, where we compute the last steps in the module parser in order to find already cracked hashes in the potfile. This saves maintaining the same code on two ends.<br />
</li>
<li>Reuse kernel code in order to precompute values in parsers. As a very easy example, consider some "flawed" algo that uses md5(&#36;salt) within the algorithm. In this case we could simply precompute this MD5 on the host. <br />
</li>
</ul>
The "emulated" code is basically shared between the OpenCL/CUDA code and the host code and can, for instance, be directly included by the module (for instance, src/modules/module_19100.c includes emu_inc_hash_sha256.h). This way we avoid duplicated code and guarantee that the host code also uses the most optimized code.<br />
<br />
It is actually quite easy to use for developers. For an example of some basic hashing algos being used directly in modules with this new emulation mode, just take a look at existing modules (like -m 19100 as mentioned above) or glance at the include/emu_*.h files.<br />
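The md5(&#36;salt) precompute case mentioned above can be illustrated in a few lines. This is a Python sketch of the idea only, using a made-up "flawed" algorithm shape; the real implementation lives in C modules that include the emu_* headers:

```python
import hashlib

# Hypothetical flawed algorithm: sha256(md5_hex($salt) . $pass).
# md5($salt) depends only on the salt, so a parser can compute it once
# on the host and store it alongside the salt, instead of recomputing
# it on the device for every password candidate.
def precompute_salt(salt: bytes) -> bytes:
    return hashlib.md5(salt).hexdigest().encode()

def check_candidate(precomputed: bytes, password: bytes, target: str) -> bool:
    return hashlib.sha256(precomputed + password).hexdigest() == target

pre = precompute_salt(b"somesalt")
target = hashlib.sha256(pre + b"hashcat").hexdigest()
print(check_candidate(pre, b"hashcat", target))  # prints "True"
```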
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Memory/Thread Management<br />
<br />
<hr class="mycode_hr" />
<br />
Lastly, a feature that is less directed at hashcat devs/contributors, but is still interesting and very important when it comes to performance: improvement in memory and thread management to reach maximum performance.<br />
<br />
Hashcat 6.0.0 introduces a new way that threads and device memory (VRAM) are used and optimized: with the addition of a new automatic workload tuner, we try to guarantee maximum performance depending on the available memory, hash type, attack mode, amplifiers (e.g. rules), etc. We basically changed the thread management from a "native" per-GPU thread count to the maximum possible thread count. We've also added a command line parameter, --kernel-threads (short: -T), if you want to experiment and set the number of threads manually.<br />
<br />
This obviously comes with a very nice performance gain depending on hash type, attack mode etc.<br />
<br />
<hr class="mycode_hr" />
<br />
Changelog Features:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Refactored hash-mode integration and replaced it with a fully modularized plugin interface<br />
</li>
<li>Converted all existing hardwired hash-modes to hashcat plugins<br />
</li>
<li>Added comprehensive plugin developer guide on adding new/custom hash-modes to hashcat<br />
</li>
<li>Refactored compute backend interface to allow adding compute API other than OpenCL<br />
</li>
<li>Added CUDA as a new compute backend (enables hashcat to run on NVIDIA Jetson, IBM POWER9 w/ Nvidia V100, etc.)<br />
</li>
<li>Support automatic use of all available GPU memory when using CUDA backend<br />
</li>
<li>Support automatic use of all available CPU cores for hash-mode-specific hooks<br />
</li>
<li>Support on-the-fly loading of compressed wordlists in zip and gzip format<br />
</li>
<li>Support deflate decompression for the 7-Zip hash-mode using zlib hook<br />
</li>
<li>Added additional documentation on hashcat brain, slow-candidate interface and keyboard-layout mapping features<br />
</li>
<li>Keep output of --show and --left in the original ordering of the input hash file<br />
</li>
<li>Improved performance of many hash-modes<br />
</li>
</ul>
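The on-the-fly loading of gzip-compressed wordlists mentioned above amounts to streaming decompression: candidates are read line by line without extracting the file to disk first. A minimal Python sketch of the concept (the in-memory buffer stands in for a real wordlist.txt.gz):

```python
import gzip
import io

# Build a small gzip "wordlist" in memory as a stand-in for a file on disk.
data = gzip.compress(b"password\n123456\nhashcat\n")

# Stream candidates line by line while decompressing on the fly.
with gzip.open(io.BytesIO(data), "rt") as wordlist:
    candidates = [line.rstrip("\n") for line in wordlist]

print(candidates)  # prints "['password', '123456', 'hashcat']"
```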
<hr class="mycode_hr" />
<br />
Changelog fixed Bugs:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Fixed buffer overflow in build_plain() function<br />
</li>
<li>Fixed buffer overflow in mp_add_cs_buf() function<br />
</li>
<li>Fixed calculation of brain-session ID - only the first hash of the hashset was taken into account<br />
</li>
<li>Fixed cleanup of password candidate buffers on GPU as set from autotune when -n parameter was used<br />
</li>
<li>Fixed copy/paste error leading to invalid "Integer overflow detected in keyspace of mask" in attack-mode 6 and 7<br />
</li>
<li>Fixed cracking multiple Office hashes (modes 9500, 9600) if hashes shared the same salt<br />
</li>
<li>Fixed cracking of Blockchain, My Wallet (V1 and V2) hashes when testing decrypted data in unexpected format<br />
</li>
<li>Fixed cracking of Cisco-PIX and Cisco-ASA MD5 passwords in mask-attack mode when mask &gt; length 16<br />
</li>
<li>Fixed cracking of DNSSEC (NSEC3) hashes by replacing all dots in the passwords with lengths<br />
</li>
<li>Fixed cracking of Electrum Wallet Salt-Type 2 hashes<br />
</li>
<li>Fixed cracking of NetNTLMv1 passwords in mask-attack mode when mask &gt; length 16 (optimized kernels only)<br />
</li>
<li>Fixed cracking of RAR3-hp hashes with pure kernel for passwords longer than 28 bytes<br />
</li>
<li>Fixed cracking of VeraCrypt Streebog-512 hashes (CPU only)<br />
</li>
<li>Fixed cracking raw Streebog-HMAC 256 and 512 hashes for passwords of length &gt;= 64<br />
</li>
<li>Fixed cracking raw Whirlpool hashes cracking for passwords of length &gt;= 32<br />
</li>
<li>Fixed incorrect progress-only result in a special race condition<br />
</li>
<li>Fixed invalid call of mp_css_utf16le_expand()/mp_css_utf16be_expand() in slow-candidate sessions<br />
</li>
<li>Fixed invalid password truncation in attack-mode 1 when the final password is longer than 32 characters<br />
</li>
<li>Fixed invalid use of --hex-wordlist if encoded wordlist string is larger than length 256<br />
</li>
<li>Fixed maximum password length limit which was announced as 256 but was actually 255<br />
</li>
<li>Fixed out-of-boundary read in pure kernel rule engine rule 'p' when parameter was set to 2 or higher<br />
</li>
<li>Fixed out-of-boundary write to decrypted[] in DPAPI masterkey file v1 kernel<br />
</li>
<li>Fixed output of IKE PSK (mode 5300 and 5400) hashes to use separators in the correct position<br />
</li>
<li>Fixed output password of "e" rule in pure and CPU rule engine when separator character is also the first letter<br />
</li>
<li>Fixed problem with usage of hexadecimal notation (\x00-\xff) within rules<br />
</li>
<li>Fixed race condition in maskfile mode by using a dedicated flag for restore execution<br />
</li>
<li>Fixed some memory leaks when hashcat is shutting down due to some file error<br />
</li>
<li>Fixed some memory leaks when mask-files are used in optimized mode<br />
</li>
<li>Fixed --status-json to correctly escape certain characters in hashes<br />
</li>
<li>Fixed the 7-Zip parser to allow the entire supported range of encrypted and decrypted data lengths<br />
</li>
<li>Fixed the validation of the --brain-client-features command line argument (only values 1, 2 or 3 are allowed)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Improvements:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Bitcoin Wallet: Be more user friendly by allowing a larger data range for ckey and public_key<br />
</li>
<li>Brain: Added new parameter --brain-server-timer to specify seconds between scheduled backups<br />
</li>
<li>Building: Fix for library compilation failure due to multiple definition of sbob_xx64()<br />
</li>
<li>Cracking bcrypt and Password Safe v2: Use feedback from the compute API backend to dynamically calculate optimal thread count<br />
</li>
<li>Dictstat: On Windows, the st_ino attribute in the stat struct is not set, which can lead to invalid cache hits. Added the filename to the database entry.<br />
</li>
<li>Documents: Added README on how to build hashcat on Cygwin, MSYS2 and WSL<br />
</li>
<li>File handling: Print a truncation warning when an oversized line is detected<br />
</li>
<li>My Wallet: Added additional plaintext pattern used in newer versions<br />
</li>
<li>Office cracking: Support hash format with second block data for 40-bit oldoffice files (eliminates false positives)<br />
</li>
<li>OpenCL Runtime: Added a warning if OpenCL runtime NEO, Beignet, POCL (v1.4 or older) or MESA is detected, and skip associated devices (override with --force)<br />
</li>
<li>OpenCL Runtime: Allow the kernel to access post-48k shared memory region on CUDA. Requires both module and kernel preparation<br />
</li>
<li>OpenCL Runtime: Disable OpenCL kernel cache on Apple for Intel CPU (throws CL_BUILD_PROGRAM_FAILURE for no reason)<br />
</li>
<li>OpenCL Runtime: Do not run shared- or constant-memory size checks if their memory type is of type global memory (typically CPU)<br />
</li>
<li>OpenCL Runtime: Improve ROCm detection and make sure to not confuse with recent AMDGPU drivers<br />
</li>
<li>OpenCL Runtime: Not using amd_bytealign (amd_bitalign is fine) on AMDGPU driver drastically reduces JiT segfaults<br />
</li>
<li>OpenCL Runtime: Unlocked maximum thread count for NVIDIA GPU<br />
</li>
<li>OpenCL Runtime: Update unstable mode warnings for Apple and AMDGPU drivers<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler error on AMDGPU driver compiling WPA-EAPOL-PBKDF2 OpenCL kernel<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler error on ROCm 2.3 driver if the 'inline' keyword is used in function declaration<br />
</li>
<li>OpenCL Runtime: Workaround memory allocation error on AMD driver on Windows leading to CL_MEM_OBJECT_ALLOCATION_FAILURE<br />
</li>
<li>OpenCL Runtime: Removed some workarounds by calling chdir() to specific folders on startup<br />
</li>
<li>Outfile: Added new system to specify the outfile format, the new --outfile-format now also supports timestamps<br />
</li>
<li>Startup Checks: Improved the pidfile check: Do not just check for existing PID, but also check executable filename<br />
</li>
<li>Startup Checks: Prevent the user from modifying options which are overwritten automatically in benchmark mode<br />
</li>
<li>Startup Screen: Add extra warning when using --force<br />
</li>
<li>Startup Screen: Add extra warning when using --keep-guessing<br />
</li>
<li>Startup Screen: Provide an estimate of host memory required for the requested attack<br />
</li>
<li>Status Screen: Added brain status for all compute devices<br />
</li>
<li>Status Screen: Added remaining counts and changed recovered count logic<br />
</li>
<li>Status Screen: Added --status-json flag for easier machine reading of hashcat status output<br />
</li>
<li>Tab Completion: Allow using "make install" version of hashcat<br />
</li>
<li>Tuning Database: Updated hashcat.hctune with new models and refreshed vector width values<br />
</li>
<li>VeraCrypt: Added support for VeraCrypt PIM brute-force, replaced --veracrypt-pim with --veracrypt-pim-start and --veracrypt-pim-stop<br />
</li>
<li>WinZip cracking: Added two-byte early reject, resulting in higher cracking speed<br />
</li>
<li>WPA/WPA2 cracking: In the potfile, replace password with PMK in order to detect already cracked networks across all WPA modes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Technical:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Backend Interface: Added new options --backend-ignore-cuda and --backend-ignore-opencl to prevent the CUDA and/or OpenCL API from being used<br />
</li>
<li>Binary Distribution: Removed 32-bit binary executables<br />
</li>
<li>Building: On macOS, switch from ar to /usr/bin/ar to improve building compatibility<br />
</li>
<li>Building: Skipping Travis/Appveyor build for non-code changes<br />
</li>
<li>Codebase: Cleanup of many unused rc_* variables<br />
</li>
<li>Codebase: Fixed some printf() format arguments<br />
</li>
<li>Codebase: Fixed some type casting to avoid truncLongCastAssignment warnings<br />
</li>
<li>Codebase: Moved hc_* file functions from shared.c to filehandling.c<br />
</li>
<li>Codebase: Ran through a bunch of clang-tidy checkers and updated code accordingly<br />
</li>
<li>Codebase: Remove redundant calls to fclose()<br />
</li>
<li>Dependencies: Updated LZMA-Headers from 18.05 to 19.00<br />
</li>
<li>Dependencies: Updated OpenCL-Headers to latest version from GitHub master repository<br />
</li>
<li>Hash-Mode 12500 (RAR3-hp): Allow cracking of passwords up to length 64<br />
</li>
<li>Hash-mode 1460 (HMAC-SHA256 (key = &#36;salt)): Allow up to 64 byte of salt<br />
</li>
<li>Hash-Mode 1680x (WPA-PMKID) specific: Changed separator character from '*' to ':'<br />
</li>
<li>Hash-Mode 8300 (DNSSEC (NSEC3)) specific: Allow empty salt<br />
</li>
<li>Keep Guessing: No longer automatically activate --keep-guessing for modes 9720, 9820, 14900 and 18100<br />
</li>
<li>Keep Guessing: No longer mark hashes as cracked/removed when in potfile<br />
</li>
<li>Kernel Cache: Reactivate OpenCL runtime specific kernel caches<br />
</li>
<li>Kernel Compile: Removed -cl-std= from all kernel build options since we're compatible to all OpenCL versions<br />
</li>
<li>OpenCL Kernels: Fix OpenCL compiler warning on double precision constants<br />
</li>
<li>OpenCL Kernels: Moved "gpu_decompress", "gpu_memset" and "gpu_atinit" into shared.cl in order to reduce compile time<br />
</li>
<li>OpenCL Options: Removed --opencl-platforms filter in order to force backend device numbers to stay constant<br />
</li>
<li>OpenCL Options: Set --spin-damp to 0 (disabled) by default. With the CUDA backend this workaround became deprecated<br />
</li>
<li>Parsers: Switched from strtok() to strtok_r() for thread safety<br />
</li>
<li>Requirements: Add new requirement for NVIDIA GPU: CUDA Toolkit (9.0 or later)<br />
</li>
<li>Requirements: Update runtime check for minimum NVIDIA driver version from 367.x to 440.64 or later<br />
</li>
<li>Test Script: Switched from /bin/bash to generic /bin/sh and updated code accordingly<br />
</li>
</ul>
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v6.0.0!<br />
<br />
Download binaries and source code from: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
It has been a long time since the last release, and a long journey for hashcat 6.0.0 - which we are releasing today. It comes with a lot of performance improvements, new features, and detailed documentation for both users and developers.<br />
<br />
In total, we had over 1800 Git commits since the last release (5.1.0) - from 29 different contributors. <br />
<br />
For a full list of contributors, please see: <a href="https://github.com/hashcat/hashcat/graphs/contributors?from=2018-12-02&amp;to=2020-06-16" target="_blank" rel="noopener" class="mycode_url">https://github.com/hashcat/hashcat/graph...2020-06-16</a><br />
<br />
The previous release of hashcat was over one year ago, but hashcat changes daily and has improved a lot in that time. We would like to release new hashcat versions more frequently in the future, but as you can see from the huge architectural changes below, this version is exceptional... Good things take time!<br />
<br />
<hr class="mycode_hr" />
<br />
The new major features of hashcat 6.0.0:<br />
<ul class="mycode_list"><li>New plugin interface - for modular hash-modes<br />
</li>
<li>New compute-backend API interface - for adding compute APIs other than OpenCL<br />
</li>
<li>CUDA added as a new compute-backend API<br />
</li>
<li>Comprehensive plugin developer guide<br />
</li>
<li>GPU Emulation mode - for using kernel code on the host CPU<br />
</li>
<li>Better GPU memory and thread management<br />
</li>
<li>Improved auto-tuning based on available resources<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Along with the major changes, we have added the following 51 frequently requested new algorithms:<br />
<ul class="mycode_list"><li>AES Crypt (SHA256)<br />
</li>
<li>Android Backup<br />
</li>
<li>AuthMe sha256<br />
</li>
<li>BitLocker<br />
</li>
<li>BitShares v0.x<br />
</li>
<li>Blockchain, My Wallet, Second Password (SHA256)<br />
</li>
<li>Citrix NetScaler (SHA512)<br />
</li>
<li>DiskCryptor<br />
</li>
<li>Electrum Wallet (Salt-Type 3-5)<br />
</li>
<li>Huawei Router sha1(md5(&#36;pass).&#36;salt)<br />
</li>
<li>Java Object hashCode()<br />
</li>
<li>Kerberos 5 Pre-Auth etype 17 (AES128-CTS-HMAC-SHA1-96)<br />
</li>
<li>Kerberos 5 Pre-Auth etype 18 (AES256-CTS-HMAC-SHA1-96)<br />
</li>
<li>Kerberos 5 TGS-REP etype 17 (AES128-CTS-HMAC-SHA1-96)<br />
</li>
<li>Kerberos 5 TGS-REP etype 18 (AES256-CTS-HMAC-SHA1-96)<br />
</li>
<li>MultiBit Classic .key (MD5)<br />
</li>
<li>MultiBit HD (scrypt)<br />
</li>
<li>MySQL &#36;A&#36; (sha256crypt)<br />
</li>
<li>Open Document Format (ODF) 1.1 (SHA-1, Blowfish)<br />
</li>
<li>Open Document Format (ODF) 1.2 (SHA-256, AES)<br />
</li>
<li>Oracle Transportation Management (SHA256)<br />
</li>
<li>PKZIP archive encryption<br />
</li>
<li>PKZIP Master Key<br />
</li>
<li>Python passlib pbkdf2-sha1<br />
</li>
<li>Python passlib pbkdf2-sha256<br />
</li>
<li>Python passlib pbkdf2-sha512<br />
</li>
<li>QNX /etc/shadow (MD5)<br />
</li>
<li>QNX /etc/shadow (SHA256)<br />
</li>
<li>QNX /etc/shadow (SHA512)<br />
</li>
<li>RedHat 389-DS LDAP (PBKDF2-HMAC-SHA256)<br />
</li>
<li>Ruby on Rails Restful-Authentication<br />
</li>
<li>SecureZIP AES-128<br />
</li>
<li>SecureZIP AES-192<br />
</li>
<li>SecureZIP AES-256<br />
</li>
<li>SolarWinds Orion<br />
</li>
<li>Telegram Desktop App Passcode (PBKDF2-HMAC-SHA1)<br />
</li>
<li>Telegram Mobile App Passcode (SHA256)<br />
</li>
<li>Web2py pbkdf2-sha512<br />
</li>
<li>WPA-PBKDF2-PMKID+EAPOL<br />
</li>
<li>WPA-PMK-PMKID+EAPOL<br />
</li>
<li>md5(&#36;salt.sha1(&#36;salt.&#36;pass))<br />
</li>
<li>md5(sha1(&#36;pass).md5(&#36;pass).sha1(&#36;pass))<br />
</li>
<li>md5(sha1(&#36;salt).md5(&#36;pass))<br />
</li>
<li>sha1(md5(md5(&#36;pass)))<br />
</li>
<li>sha1(md5(&#36;pass.&#36;salt))<br />
</li>
<li>sha1(md5(&#36;pass).&#36;salt)<br />
</li>
<li>sha1(&#36;salt1.&#36;pass.&#36;salt2)<br />
</li>
<li>sha256(md5(&#36;pass))<br />
</li>
<li>sha256(&#36;salt.&#36;pass.&#36;salt)<br />
</li>
<li>sha256(sha256_bin(&#36;pass))<br />
</li>
<li>sha256(sha256(&#36;pass).&#36;salt)<br />
</li>
</ul>
With so many new hash-modes added, we're happy to announce that we now support over 320 different algorithms!<br />
<br />
<hr class="mycode_hr" />
<br />
And here's a preview of some of the performance improvements:<br />
<ul class="mycode_list"><li>MD5: 8.05%<br />
</li>
<li>NTLM: 13.70%<br />
</li>
<li>Domain Cached Credentials (DCC), MS Cache: 11.91%<br />
</li>
<li>Domain Cached Credentials 2 (DCC2), MS Cache 2: 12.51%<br />
</li>
<li>NetNTLMv1: 15.79%<br />
</li>
<li>NetNTLMv2: 6.98%<br />
</li>
<li>WPA/WPA2: 13.35%<br />
</li>
<li>sha256crypt &#36;5&#36;, SHA256 (Unix): 8.77%<br />
</li>
<li>sha512crypt &#36;6&#36;, SHA512 (Unix): 20.33%<br />
</li>
<li>bcrypt: 45.58%<br />
</li>
<li>IPMI2 RAKP HMAC-SHA1: 20.03%<br />
</li>
<li>SAP CODVN B (BCODE): 32.37%<br />
</li>
<li>Blockchain, My Wallet: 31.00%<br />
</li>
<li>Electrum Wallet (Salt-Type 1-3): 109.46%<br />
</li>
<li>WinZip: 119.43%<br />
</li>
</ul>
For a full list of all improvements, please see here: <a href="https://docs.google.com/spreadsheets/d/1CK02Qm4GLG8clCrqUB1EklhHBk37_WgIHWKMTtOilAY/edit?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://docs.google.com/spreadsheets/d/1...sp=sharing</a><br />
<br />
<hr class="mycode_hr" />
<br />
In addition to these, there are a number of other new features and changes - but in this post, we want to focus mainly on the major changes to keep the release notes to a digestible length. For those interested, the changelog and git history have a more complete list of all changes.<br />
<br />
Most of these changes are aimed at developers. These release notes are intentionally verbose to inform current contributors and developers, as well as to catch the interest of potential future hashcat contributors.<br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Plugin Interface<br />
<br />
<hr class="mycode_hr" />
<br />
One of the first things you will notice after unpacking the new hashcat version is the new modules folder. We have had modularity in mind for a long time, and have finally managed to implement it: each and every hash type is separated into its own module. This not only makes the code much easier to read, write, and maintain, but it also comes with a very nice new architecture, interface, and added flexibility.<br />
<br />
In essence, this is overdue refactoring, but it comes with a lot of benefits for developers working on new hash types: it makes it much easier to write new host code (parsers, decoders, encoders, hooks, etc.). The hash type code is 100% separated from the core code, meaning there is no longer any need to edit the hashcat core sources to add a new hash type. This also enables much easier distribution of custom kernels which have not been pushed to the main repository.<br />
<br />
During more than four months of "conversion" of the old hash types, we designed a new common interface and made all existing hash modes work with this new plugin interface. We even created a new testing framework, and converted all the old testing modules. See the tools/test_modules/ folder for more information.<br />
<br />
A huge thanks to everyone helping to convert hash types and/or tests for these new interfaces. This has not only shown us that the new interface works great and is flexible enough to cover all the different needs from the different modules, but it also shows that contributors are able to easily write modular code.<br />
<br />
The new fully modularized hash-type integration makes the hash-type-specific code more compact and encapsulated, while maintaining and even adding flexibility. For instance, it is now possible to easily add hash-mode-specific JiT (just-in-time) compiler flags which are used at kernel compilation time, or to mark hash-mode-specific unstable warnings for particular setups (for instance, depending on driver and hardware). One can easily add new restrictions and limitations directly to the module, without cluttering other parts of hashcat (avoiding "spaghetti code" and "special cases" everywhere).<br />
<br />
There is a lot to say about this new architecture, and we could go into much further detail, but we will do our best not to go too far here. Fortunately, for everybody interested, we also wrote a hashcat plugin interface guide for developers. This guide is the first official "how to add a new hash type" document, and it already consists of almost 20,000 words. It does not cover every detail, but it gives you everything you need to get started adding your own hash type.<br />
<br />
Be prepared, because reading will take a bit of time. You can find it here: <a href="https://github.com/hashcat/hashcat/blob/master/docs/hashcat-plugin-development-guide.md" target="_blank" rel="noopener" class="mycode_url">https://github.com/hashcat/hashcat/blob/...t-guide.md</a><br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Backend Interface<br />
<br />
<hr class="mycode_hr" />
<br />
Similar to the Plugin Interface feature, it took us quite some time and effort to refactor how we deal with supporting different compute devices in hashcat. <br />
<br />
As you will notice, we have changed many command line parameters to --backend-* as replacements of the old --opencl-* parameters. The reason for this is that hashcat now has a more flexible architecture for how we deal with different backends (like CUDA/OpenCL etc). With this system, we can add additional backends in an elegant way whenever we may need to in the future.<br />
<br />
The system is designed in such a way that backend-specific code is abstracted away from other operations (like loading the kernel source code etc) and uses a common interface which makes the code much more readable and easy to use.<br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: CUDA Support<br />
<br />
<hr class="mycode_hr" />
<br />
This is basically an "application" of the new Backend Interface feature. With the new architecture for hashcat backends, we were able to start supporting CUDA for NVIDIA devices. By NVIDIA devices, we mean any of their compute devices that support CUDA, not just discrete GPUs! This enables hashcat to run on chips such as the NVIDIA Jetson or NVIDIA Xavier. This also enables us to utilize CUDA on platforms where NVIDIA does not release a driver capable of OpenCL, including ARM platforms and IBM POWER9 platforms.<br />
<br />
There are several other advantages that CUDA has over OpenCL on NVIDIA devices, but the most important one is that, by installing the CUDA Toolkit, the user can now unlock the GPU's entire memory for a single block allocation.<br />
<br />
We recommend (at the time of this writing) installing the CUDA Toolkit without the NVIDIA driver it ships with, and installing the latest/recommended driver from nvidia.com instead. Hashcat will actually warn you if you have an NVIDIA device but "only" the OpenCL driver, because you <span style="font-weight: bold;" class="mycode_b">should</span> install the CUDA Toolkit for CUDA-supported devices. This step is mandatory if you want to use the CUDA backend instead of OpenCL, because CUDA has no JiT compiler built into the NVIDIA driver the way OpenCL does.<br />
<br />
Hashcat will list all the devices (CUDA devices in addition to OpenCL devices) with --backend-info (short: -I) and you can easily select the devices you want with --backend-devices (short: -d). Of course, hashcat prefers the "CUDA devices" if available! (and for the curious reader: no, you can't actually use OpenCL and CUDA at the same time for the same device in hashcat - we call this an alias. The speed will NOT double this way <img src="https://hashcat.net/forum/images/smilies/tongue.gif" alt="Tongue" title="Tongue" class="smilie smilie_5" />)<br />
<br />
One of the biggest advantages of CUDA compared to OpenCL is the full use of shared memory (sometimes also called local memory). In OpenCL, a minimum of 1 byte is reserved by the runtime, which has bigger implications than may be apparent at first. For example, most NVIDIA cards have 48 KiB of shared memory. To efficiently compute bcrypt, each thread requires 4 KiB of this shared memory pool. This means that with CUDA we are able to run 12 bcrypt threads instead of just 11 with OpenCL. This and other optimizations are the reason we improved the performance of bcrypt by 46.90%.<br />
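The thread-count arithmetic from the paragraph above, as a quick sketch (the 48 KiB / 4 KiB figures are the ones quoted above):<br />

```python
SHARED_MEM = 48 * 1024  # shared memory per SM on most NVIDIA cards, in bytes
PER_THREAD = 4 * 1024   # shared memory required per bcrypt thread

cuda_threads = SHARED_MEM // PER_THREAD          # CUDA: full 48 KiB usable
opencl_threads = (SHARED_MEM - 1) // PER_THREAD  # OpenCL: at least 1 byte reserved

print(cuda_threads, opencl_threads)  # 12 11
```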
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Emulation Mode<br />
<br />
<hr class="mycode_hr" />
<br />
This feature is basically introducing a very nice way to use kernel code within modules or host code. We came up with this strategy for the following reasons:<br />
<ul class="mycode_list"><li>For complex kernels with lots of code, it's easier to debug the code on the host side as a standalone project. This also saves long startup times and increases development speed.<br />
</li>
<li>Reuse complex kernel code from within the module. A good example is WPA EAPOL/PMKID, where we compute the last steps in the module parser in order to find already-cracked hashes in the potfile. This saves us from maintaining the same code in two places.<br />
</li>
<li>Reuse kernel code in order to precompute values in parsers. As a very easy example, consider some "flawed" algo that uses md5(&#36;salt) within the algorithm. In this case we can simply precompute this MD5 on the host.<br />
</li>
</ul>
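The precompute idea from the last bullet, sketched in Python (hashcat's actual emulated code is shared C headers such as emu_inc_hash_*.h; the parser function name here is made up for illustration):<br />

```python
import hashlib

def parse_hash_entry(salt: bytes) -> str:
    # Host-side precompute: hash the salt once while parsing the hash,
    # so the kernel never has to recompute md5($salt) per candidate.
    return hashlib.md5(salt).hexdigest()

precomputed = parse_hash_entry(b"somesalt")
# every per-candidate computation can now start from `precomputed`
print(precomputed)
```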
The "emulated" code is shared between the OpenCL/CUDA code and the host code and can be directly included by a module (for example, src/modules/module_19100.c includes emu_inc_hash_sha256.h). This way we also avoid duplicated code and guarantee that the host code uses the most optimized code as well.<br />
<br />
It is actually quite easy to use for developers. For an example of some basic hashing algos being used directly in modules with this new emulation mode, just take a look at existing modules (like -m 19100 as mentioned above) or glance at the include/emu_*.h files.<br />
<br />
<hr class="mycode_hr" />
<br />
Major Feature: Memory/Thread Management<br />
<br />
<hr class="mycode_hr" />
<br />
Lastly, a feature that is less directed at hashcat devs/contributors, but is still interesting and very important when it comes to performance: improved memory and thread management.<br />
<br />
Hashcat 6.0.0 introduces a new way that threads and device memory (VRAM) are used and optimized: with the addition of a new automatic workload tuner, we try to guarantee maximum performance depending on the available memory, hash type, attack mode, amplifiers (e.g. rules), etc. We basically changed thread management from a "native" per-GPU thread count to the maximum possible thread count. We've also added a command line parameter, --kernel-threads (short: -T), if you want to play with this and set the number of threads manually.<br />
<br />
This obviously comes with a very nice performance gain depending on hash type, attack mode etc.<br />
<br />
<hr class="mycode_hr" />
<br />
Changelog Features:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Refactored hash-mode integration and replaced it with a fully modularized plugin interface<br />
</li>
<li>Converted all existing hardwired hash-modes to hashcat plugins<br />
</li>
<li>Added comprehensive plugin developer guide on adding new/custom hash-modes to hashcat<br />
</li>
<li>Refactored compute backend interface to allow adding compute APIs other than OpenCL<br />
</li>
<li>Added CUDA as a new compute backend (enables hashcat to run on NVIDIA Jetson, IBM POWER9 w/ Nvidia V100, etc.)<br />
</li>
<li>Support automatic use of all available GPU memory when using CUDA backend<br />
</li>
<li>Support automatic use of all available CPU cores for hash-mode-specific hooks<br />
</li>
<li>Support on-the-fly loading of compressed wordlists in zip and gzip format<br />
</li>
<li>Support deflate decompression for the 7-Zip hash-mode using zlib hook<br />
</li>
<li>Added additional documentation on hashcat brain, slow-candidate interface and keyboard-layout mapping features<br />
</li>
<li>Keep output of --show and --left in the original ordering of the input hash file<br />
</li>
<li>Improved performance of many hash-modes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog fixed Bugs:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Fixed buffer overflow in build_plain() function<br />
</li>
<li>Fixed buffer overflow in mp_add_cs_buf() function<br />
</li>
<li>Fixed calculation of brain-session ID - only the first hash of the hashset was taken into account<br />
</li>
<li>Fixed cleanup of password candidate buffers on GPU as set from autotune when -n parameter was used<br />
</li>
<li>Fixed copy/paste error leading to invalid "Integer overflow detected in keyspace of mask" in attack-mode 6 and 7<br />
</li>
<li>Fixed cracking multiple Office hashes (modes 9500, 9600) if hashes shared the same salt<br />
</li>
<li>Fixed cracking of Blockchain, My Wallet (V1 and V2) hashes when testing decrypted data in unexpected format<br />
</li>
<li>Fixed cracking of Cisco-PIX and Cisco-ASA MD5 passwords in mask-attack mode when mask &gt; length 16<br />
</li>
<li>Fixed cracking of DNSSEC (NSEC3) hashes by replacing all dots in the passwords with lengths<br />
</li>
<li>Fixed cracking of Electrum Wallet Salt-Type 2 hashes<br />
</li>
<li>Fixed cracking of NetNTLMv1 passwords in mask-attack mode when mask &gt; length 16 (optimized kernels only)<br />
</li>
<li>Fixed cracking of RAR3-hp hashes with pure kernel for passwords longer than 28 bytes<br />
</li>
<li>Fixed cracking of VeraCrypt Streebog-512 hashes (CPU only)<br />
</li>
<li>Fixed cracking raw Streebog-HMAC 256 and 512 hashes for passwords of length &gt;= 64<br />
</li>
<li>Fixed cracking of raw Whirlpool hashes for passwords of length &gt;= 32<br />
</li>
<li>Fixed incorrect progress-only result in a special race condition<br />
</li>
<li>Fixed invalid call of mp_css_utf16le_expand()/mp_css_utf16be_expand() in slow-candidate sessions<br />
</li>
<li>Fixed invalid password truncation in attack-mode 1 when the final password is longer than 32 characters<br />
</li>
<li>Fixed invalid use of --hex-wordlist if encoded wordlist string is larger than length 256<br />
</li>
<li>Fixed maximum password length limit which was announced as 256 but was actually 255<br />
</li>
<li>Fixed out-of-boundary read in pure kernel rule engine rule 'p' when parameter was set to 2 or higher<br />
</li>
<li>Fixed out-of-boundary write to decrypted[] in DPAPI masterkey file v1 kernel<br />
</li>
<li>Fixed output of IKE PSK (mode 5300 and 5400) hashes to use separators in the correct position<br />
</li>
<li>Fixed output password of "e" rule in pure and CPU rule engine when separator character is also the first letter<br />
</li>
<li>Fixed problem with usage of hexadecimal notation (\x00-\xff) within rules<br />
</li>
<li>Fixed race condition in maskfile mode by using a dedicated flag for restore execution<br />
</li>
<li>Fixed some memory leaks when hashcat is shutting down due to some file error<br />
</li>
<li>Fixed some memory leaks when mask-files are used in optimized mode<br />
</li>
<li>Fixed --status-json to correctly escape certain characters in hashes<br />
</li>
<li>Fixed the 7-Zip parser to allow the entire supported range of encrypted and decrypted data lengths<br />
</li>
<li>Fixed the validation of the --brain-client-features command line argument (only values 1, 2 or 3 are allowed)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Changelog Improvements:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Bitcoin Wallet: Be more user friendly by allowing a larger data range for ckey and public_key<br />
</li>
<li>Brain: Added new parameter --brain-server-timer to specify seconds between scheduled backups<br />
</li>
<li>Building: Fix for library compilation failure due to multiple definition of sbob_xx64()<br />
</li>
<li>Cracking bcrypt and Password Safe v2: Use feedback from the compute API backend to dynamically calculate optimal thread count<br />
</li>
<li>Dictstat: On Windows, the st_ino attribute in the stat struct is not set, which can lead to invalid cache hits. Added the filename to the database entry.<br />
</li>
<li>Documents: Added README on how to build hashcat on Cygwin, MSYS2 and WSL<br />
</li>
<li>File handling: Print a truncation warning when an oversized line is detected<br />
</li>
<li>My Wallet: Added additional plaintext pattern used in newer versions<br />
</li>
<li>Office cracking: Support hash format with second block data for 40-bit oldoffice files (eliminates false positives)<br />
</li>
<li>OpenCL Runtime: Added a warning if OpenCL runtime NEO, Beignet, POCL (v1.4 or older) or MESA is detected, and skip associated devices (override with --force)<br />
</li>
<li>OpenCL Runtime: Allow the kernel to access post-48k shared memory region on CUDA. Requires both module and kernel preparation<br />
</li>
<li>OpenCL Runtime: Disable OpenCL kernel cache on Apple for Intel CPU (throws CL_BUILD_PROGRAM_FAILURE for no reason)<br />
</li>
<li>OpenCL Runtime: Do not run shared- or constant-memory size checks if their memory type is of type global memory (typically CPU)<br />
</li>
<li>OpenCL Runtime: Improve ROCm detection and make sure to not confuse with recent AMDGPU drivers<br />
</li>
<li>OpenCL Runtime: Not using amd_bytealign (amd_bitalign is fine) on AMDGPU driver drastically reduces JiT segfaults<br />
</li>
<li>OpenCL Runtime: Unlocked maximum thread count for NVIDIA GPU<br />
</li>
<li>OpenCL Runtime: Update unstable mode warnings for Apple and AMDGPU drivers<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler error on AMDGPU driver compiling WPA-EAPOL-PBKDF2 OpenCL kernel<br />
</li>
<li>OpenCL Runtime: Workaround JiT compiler error on ROCm 2.3 driver if the 'inline' keyword is used in function declaration<br />
</li>
<li>OpenCL Runtime: Workaround memory allocation error on AMD driver on Windows leading to CL_MEM_OBJECT_ALLOCATION_FAILURE<br />
</li>
<li>OpenCL Runtime: Removed some workarounds by calling chdir() to specific folders on startup<br />
</li>
<li>Outfile: Added new system to specify the outfile format; the new --outfile-format now also supports timestamps<br />
</li>
<li>Startup Checks: Improved the pidfile check: Do not just check for existing PID, but also check executable filename<br />
</li>
<li>Startup Checks: Prevent the user from modifying options which are overwritten automatically in benchmark mode<br />
</li>
<li>Startup Screen: Add extra warning when using --force<br />
</li>
<li>Startup Screen: Add extra warning when using --keep-guessing<br />
</li>
<li>Startup Screen: Provide an estimate of host memory required for the requested attack<br />
</li>
<li>Status Screen: Added brain status for all compute devices<br />
</li>
<li>Status Screen: Added remaining counts and changed recovered count logic<br />
</li>
<li>Status Screen: Added --status-json flag for easier machine reading of hashcat status output<br />
</li>
<li>Tab Completion: Allow using "make install" version of hashcat<br />
</li>
<li>Tuning Database: Updated hashcat.hctune with new models and refreshed vector width values<br />
</li>
<li>VeraCrypt: Added support for VeraCrypt PIM brute-force, replaced --veracrypt-pim with --veracrypt-pim-start and --veracrypt-pim-stop<br />
</li>
<li>WinZip cracking: Added two-byte early reject, resulting in higher cracking speed<br />
</li>
<li>WPA/WPA2 cracking: In the potfile, replace password with PMK in order to detect already cracked networks across all WPA modes<br />
</li>
</ul>
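Regarding the last item: the PMK depends only on the passphrase and the ESSID (PMK = PBKDF2-HMAC-SHA1(passphrase, essid, 4096 iterations, 32 bytes)), which is why storing it in the potfile lets cracked networks match across all WPA modes. A minimal sketch, with made-up passphrase/ESSID values:<br />

```python
import hashlib

def wpa_pmk(passphrase: str, essid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, essid, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), essid.encode(), 4096, 32)

# identical for any WPA mode, as long as passphrase and network match
pmk = wpa_pmk("hashcat!", "examplenet")
print(pmk.hex())
```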
<hr class="mycode_hr" />
<br />
Changelog Technical:<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>Backend Interface: Added new options --backend-ignore-cuda and --backend-ignore-opencl to prevent CUDA and/or OpenCL API from being used<br />
</li>
<li>Binary Distribution: Removed 32-bit binary executables<br />
</li>
<li>Building: On macOS, switch from ar to /usr/bin/ar to improve building compatibility<br />
</li>
<li>Building: Skipping Travis/Appveyor build for non-code changes<br />
</li>
<li>Codebase: Cleanup of many unused rc_* variables<br />
</li>
<li>Codebase: Fixed some printf() format arguments<br />
</li>
<li>Codebase: Fixed some type casting to avoid truncLongCastAssignment warnings<br />
</li>
<li>Codebase: Moved hc_* file functions from shared.c to filehandling.c<br />
</li>
<li>Codebase: Ran through a bunch of clang-tidy checkers and updated code accordingly<br />
</li>
<li>Codebase: Remove redundant calls to fclose()<br />
</li>
<li>Dependencies: Updated LZMA-Headers from 18.05 to 19.00<br />
</li>
<li>Dependencies: Updated OpenCL-Headers to latest version from GitHub master repository<br />
</li>
<li>Hash-Mode 12500 (RAR3-hp): Allow cracking of passwords up to length 64<br />
</li>
<li>Hash-mode 1460 (HMAC-SHA256 (key = &#36;salt)): Allow up to 64 byte of salt<br />
</li>
<li>Hash-Mode 1680x (WPA-PMKID) specific: Changed separator character from '*' to ':'<br />
</li>
<li>Hash-Mode 8300 (DNSSEC (NSEC3)) specific: Allow empty salt<br />
</li>
<li>Keep Guessing: No longer automatically activate --keep-guessing for modes 9720, 9820, 14900 and 18100<br />
</li>
<li>Keep Guessing: No longer mark hashes as cracked/removed when in potfile<br />
</li>
<li>Kernel Cache: Reactivate OpenCL runtime specific kernel caches<br />
</li>
<li>Kernel Compile: Removed -cl-std= from all kernel build options since we're compatible with all OpenCL versions<br />
</li>
<li>OpenCL Kernels: Fix OpenCL compiler warning on double precision constants<br />
</li>
<li>OpenCL Kernels: Moved "gpu_decompress", "gpu_memset" and "gpu_atinit" into shared.cl in order to reduce compile time<br />
</li>
<li>OpenCL Options: Removed --opencl-platforms filter in order to force backend device numbers to stay constant<br />
</li>
<li>OpenCL Options: Set --spin-damp to 0 (disabled) by default. With the CUDA backend this workaround became deprecated<br />
</li>
<li>Parsers: switched from strtok() to strtok_r() for thread safety<br />
</li>
<li>Requirements: Add new requirement for NVIDIA GPU: CUDA Toolkit (9.0 or later)<br />
</li>
<li>Requirements: Update runtime check for minimum NVIDIA driver version from 367.x to 440.64 or later<br />
</li>
<li>Test Script: Switched from /bin/bash to generic /bin/sh and updated code accordingly<br />
</li>
</ul>
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v5.1.0]]></title>
			<link>https://hashcat.net/forum/thread-7983.html</link>
			<pubDate>Sun, 02 Dec 2018 11:06:49 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-7983.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v5.1.0! <br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a> <br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms and fixing bugs:<br />
<ul class="mycode_list"><li>Added pure kernels for hash-mode 11700 (Streebog-256)<br />
</li>
<li>Added pure kernels for hash-mode 11800 (Streebog-512)<br />
</li>
<li>Added hash-mode 11750 (HMAC-Streebog-256 (key = &#36;pass), big-endian)<br />
</li>
<li>Added hash-mode 11760 (HMAC-Streebog-256 (key = &#36;salt), big-endian)<br />
</li>
<li>Added hash-mode 11850 (HMAC-Streebog-512 (key = &#36;pass), big-endian)<br />
</li>
<li>Added hash-mode 11860 (HMAC-Streebog-512 (key = &#36;salt), big-endian)<br />
</li>
<li>Added hash-mode 13771 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 512 bit)<br />
</li>
<li>Added hash-mode 13772 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1024 bit)<br />
</li>
<li>Added hash-mode 13773 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1536 bit)<br />
</li>
<li>Added hash-mode 18200 (Kerberos 5 AS-REP etype 23)<br />
</li>
<li>Added hash-mode 18300 (Apple File System (APFS))<br />
</li>
<li>Added Kuznyechik cipher and cascades support for VeraCrypt kernels<br />
</li>
<li>Added Camellia cipher and cascades support for VeraCrypt kernels<br />
</li>
</ul>
Thanks to Naufragous for contributing the VeraCrypt extensions! We're now VeraCrypt feature complete.<br />
<br />
<hr class="mycode_hr" />
<br />
New Features:<br />
<ul class="mycode_list"><li>Added support for using --stdout in brain-client mode<br />
</li>
<li>Added new option --stdin-timeout-abort, to set how long hashcat should wait for stdin input before aborting<br />
</li>
<li>Added new option --kernel-threads to manually override the automatically-calculated number of threads<br />
</li>
<li>Added new option --keyboard-layout-mapping to map the user's keyboard layout, required to crack TC/VC system boot volumes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Some notes about the --keyboard-layout-mapping feature:<br />
<br />
This new configuration item was added to handle a special TrueCrypt and VeraCrypt "feature" which is automatically active during the setup of encryption for a system partition or an entire system drive. Due to BIOS requirements, the user's keyboard layout is always set to the US keyboard layout during the pre-boot stage (no matter which layout is actually in use). In other words, in the pre-boot stage, when TC/VC asks the user to enter the password, the layout is actually set to the US keyboard layout.<br />
<br />
To avoid conflicts with the real keyboard layout configured in the OS, both TC and VC have a little trick: they set the OS keyboard layout to US keyboard layout while the password prompt window is opened. You can actually verify this in the language task bar while the password prompt window is open. It will switch from whatever is configured to English, and after the window is closed, the original keyboard layout is restored.<br />
<br />
This has a serious impact on cracking the password. For example, my German keyboard has a "QWERTZ" layout. The US keyboard, however, uses a "QWERTY" layout; the positions of the "y" and "z" keys are swapped. If it were just that, it wouldn't be much of a problem - but almost all of the special symbols are mapped very differently. (I won't go into the details; you might want to compare it yourself for fun.)<br />
<br />
And when it comes to non-Latin based languages, this behaviour gets completely out of control. Just one example: If the user enters the password <span style="font-weight: bold;" class="mycode_b">بين التخصصات</span> (interdisciplinary) on an Arabic keyboard, the password we need to guess is: <span style="font-weight: bold;" class="mycode_b">fdk hgjowwhj[g</span>.<br />
<br />
To deal with all of this, a hashcat user needs to know exactly which keyboard was enabled when the password was entered into the password window during setup. For German, I've added an example keyboard layout to the newly created folder "layouts", which now ships with the binary and on GitHub master. For instance, if you know a German keyboard was used, you can now add "--keyboard-layout-mapping layouts/de.hckmap" to the commandline.<br />
<br />
Unfortunately, since I don't own all of the existing keyboards, it will be necessary for hashcat users to contribute the rest of the missing mapping tables - ideally, as a GitHub PR. Almost every language I know has special keyboard layouts. There's even a difference between the UK and US layouts.<br />
<br />
Here's how you can help. To create a language-specific mapping table, open a text editor, and press every key on the keyboard, starting from the top left to the top right. Press Enter after every key. Use only keys which represent a real character, and ignore control keys such as Backspace, Caps Lock, etc. Then move to the next row below and repeat the process from left to right, and so on until you reach the space character. At that point, repeat exactly the same sequence, but with Shift pressed. When done, add a Tab after each character (Tab is used as the separator character). Then switch the keyboard layout to English and repeat the entire process in exactly the same order, adding each character after the tab character. Hashcat fully supports all multibyte characters up to 32 bits on both sides of the mapping table (even if the right side will always be a single-byte character). As an example, see the layouts/de.hckmap file.<br />
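To illustrate the table format and how such a mapping is applied, here is a Python sketch. The mapping excerpt is hypothetical (only the z/y swap), not the shipped layouts/de.hckmap:<br />

```python
def load_hckmap(text: str) -> dict:
    # one entry per line: <native char><TAB><US-layout char>
    mapping = {}
    for line in text.splitlines():
        if line:
            native, us = line.split("\t")
            mapping[native] = us
    return mapping

de = load_hckmap("z\ty\ny\tz\nZ\tY\nY\tZ")  # toy QWERTZ excerpt

def to_preboot(password: str, mapping: dict) -> str:
    # characters absent from the table map to themselves
    return "".join(mapping.get(c, c) for c in password)

print(to_preboot("Zynga", de))  # -> Yznga
```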
<br />
Note that when it comes to Alt/AltGr, this behavior is exploitable: TC/VC does not accept those modifier keys. If a user presses AltGr while entering the password, a window appears telling the user that the use of this key is not allowed. For instance, on my German keyboard layout, I need to use AltGr+q to get the "@" character. As a consequence, we know that the TC/VC password cannot include any of the characters ("@", "[", "]", "\", "€", "|", "{", "}", "~") if the user was using a German keyboard to enter the password.<br />
<br />
At the same time, we can guarantee that "@" will never be listed on the left side of the mapping table - because the only characters that can appear there are the ones reachable without any modifier or with Shift (but not AltGr). If we combine these concepts, we could add some code to reject all passwords which contain at least one character not listed in the mapping table. This is not yet implemented - but I'll add it if hashcat users agree that there is value in it.<br />
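The proposed rejection could look like the following sketch. It is only an illustration of the idea (which, as said, is not implemented); the toy character set stands in for the left column of a real mapping table:<br />

```python
def candidate_allowed(password: str, left_column: set) -> bool:
    # reject candidates containing a character that can never appear
    # on the left side of the table (e.g. AltGr-only symbols like '@'
    # on a German layout)
    return all(c in left_column for c in password)

left = set("abcdefghijklmnopqrstuvwxyz0123456789")  # toy left column

print(candidate_allowed("zebra9", left))  # True
print(candidate_allowed("p@ss", left))    # False
```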
<br />
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>OpenCL Devices: Add support for up to 64 OpenCL devices per system<br />
</li>
<li>OpenCL Platforms: Add support for up to 64 OpenCL platforms per system<br />
</li>
<li>OpenCL Runtime: Use our own yielding technique for synchronizing rather than vendor-specific ones<br />
</li>
<li>Startup: Show OpenCL runtime initialization message (per device)<br />
</li>
<li>xxHash: Added support for using the version provided by the OS/distribution<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed automated calculation of brain-session when not using all hashes in the hashlist<br />
</li>
<li>Fixed calculation of brain-attack if a given wordlist has zero size<br />
</li>
<li>Fixed checking the length of the last token in a hash if it was given the attribute TOKEN_ATTR_FIXED_LENGTH<br />
</li>
<li>Fixed endianness and invalid separator character in outfile format for hash-mode 16801 (WPA-PMKID-PMK)<br />
</li>
<li>Fixed ignoring --brain-client-features configuration when brain server has attack-position information from a previous run<br />
</li>
<li>Fixed invalid hardware monitor detection in benchmark mode<br />
</li>
<li>Fixed invalid warnings about throttling when --hwmon-disable was used<br />
</li>
<li>Fixed missing call to WSACleanup() to cleanly shutdown windows sockets system<br />
</li>
<li>Fixed missing call to WSAStartup() and client indexing in order to start the brain server on Windows<br />
</li>
<li>Fixed out-of-boundary read in DPAPI masterkey file v2 OpenCL kernel<br />
</li>
<li>Fixed out-of-bounds write in short-term memory of the brain server<br />
</li>
<li>Fixed output of --speed-only and --progress-only when fast hashes are used in combination with --slow-candidates<br />
</li>
<li>Fixed selection of OpenCL devices (-d) if there's more than 32 OpenCL devices installed<br />
</li>
<li>Fixed status output of progress value when -S and -l are used in combination<br />
</li>
<li>Fixed thread count maximum for pure kernels in straight attack mode<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Brain: Set --brain-client-features default from 3 to 2<br />
</li>
<li>Dependencies: Added xxHash and OpenCL-Headers to deps/ in order to allow building hashcat from GitHub source release package<br />
</li>
<li>Dependencies: Removed gitmodules xxHash and OpenCL-Headers<br />
</li>
<li>Keymaps: Added hashcat keyboard mapping us.hckmap (can be used as template)<br />
</li>
<li>Keymaps: Added hashcat keyboard mapping de.hckmap<br />
</li>
<li>Hardware Monitor: Renamed --gpu-temp-abort to --hwmon-temp-abort<br />
</li>
<li>Hardware Monitor: Renamed --gpu-temp-disable to --hwmon-disable<br />
</li>
<li>Memory: Limit maximum host memory allocation depending on bitness<br />
</li>
<li>Memory: Reduced default maximum bitmap size from 24 to 18 and give a notice to use --bitmap-max to restore<br />
</li>
<li>Parameter: Rename --nvidia-spin-damp to --spin-damp (now accessible for all devices)<br />
</li>
<li>Pidfile: Treat a corrupted pidfile like a non-existent pidfile<br />
</li>
<li>OpenCL Device: Do a real query on OpenCL local memory type instead of just assuming it<br />
</li>
<li>OpenCL Runtime: Disable auto-vectorization for the Intel OpenCL runtime to work around a hanging JiT since version 18.1.0.013<br />
</li>
<li>Tests: Added hash-mode 11700 (Streebog-256)<br />
</li>
<li>Tests: Added hash-mode 11750 (HMAC-Streebog-256 (key = &#36;pass), big-endian)<br />
</li>
<li>Tests: Added hash-mode 11760 (HMAC-Streebog-256 (key = &#36;salt), big-endian)<br />
</li>
<li>Tests: Added hash-mode 11800 (Streebog-512)<br />
</li>
<li>Tests: Added hash-mode 11850 (HMAC-Streebog-512 (key = &#36;pass), big-endian)<br />
</li>
<li>Tests: Added hash-mode 11860 (HMAC-Streebog-512 (key = &#36;salt), big-endian)<br />
</li>
<li>Tests: Added hash-mode 13711 (VeraCrypt PBKDF2-HMAC-RIPEMD160 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13712 (VeraCrypt PBKDF2-HMAC-RIPEMD160 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13713 (VeraCrypt PBKDF2-HMAC-RIPEMD160 + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13721 (VeraCrypt PBKDF2-HMAC-SHA512 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13722 (VeraCrypt PBKDF2-HMAC-SHA512 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13723 (VeraCrypt PBKDF2-HMAC-SHA512 + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13731 (VeraCrypt PBKDF2-HMAC-Whirlpool + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13732 (VeraCrypt PBKDF2-HMAC-Whirlpool + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13733 (VeraCrypt PBKDF2-HMAC-Whirlpool + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13751 (VeraCrypt PBKDF2-HMAC-SHA256 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13752 (VeraCrypt PBKDF2-HMAC-SHA256 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13753 (VeraCrypt PBKDF2-HMAC-SHA256 + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13771 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13772 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13773 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1536 bit)<br />
</li>
<li>Tests: Added VeraCrypt containers for Kuznyechik cipher and cascades<br />
</li>
<li>Tests: Added VeraCrypt containers for Camellia cipher and cascades<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v5.1.0! <br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a> <br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms and fixing bugs:<br />
<ul class="mycode_list"><li>Added pure kernels for hash-mode 11700 (Streebog-256)<br />
</li>
<li>Added pure kernels for hash-mode 11800 (Streebog-512)<br />
</li>
<li>Added hash-mode 11750 (HMAC-Streebog-256 (key = &#36;pass), big-endian)<br />
</li>
<li>Added hash-mode 11760 (HMAC-Streebog-256 (key = &#36;salt), big-endian)<br />
</li>
<li>Added hash-mode 11850 (HMAC-Streebog-512 (key = &#36;pass), big-endian)<br />
</li>
<li>Added hash-mode 11860 (HMAC-Streebog-512 (key = &#36;salt), big-endian)<br />
</li>
<li>Added hash-mode 13771 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 512 bit)<br />
</li>
<li>Added hash-mode 13772 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1024 bit)<br />
</li>
<li>Added hash-mode 13773 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1536 bit)<br />
</li>
<li>Added hash-mode 18200 (Kerberos 5 AS-REP etype 23)<br />
</li>
<li>Added hash-mode 18300 (Apple File System (APFS))<br />
</li>
<li>Added Kuznyechik cipher and cascades support for VeraCrypt kernels<br />
</li>
<li>Added Camellia cipher and cascades support for VeraCrypt kernels<br />
</li>
</ul>
Thanks to Naufragous for contributing the VeraCrypt extensions! We're now VeraCrypt feature-complete.<br />
<br />
<hr class="mycode_hr" />
<br />
New Features:<br />
<ul class="mycode_list"><li>Added support for using --stdout in brain-client mode<br />
</li>
<li>Added new option --stdin-timeout-abort, to set how long hashcat should wait for stdin input before aborting<br />
</li>
<li>Added new option --kernel-threads to manually override the automatically-calculated number of threads<br />
</li>
<li>Added new option --keyboard-layout-mapping to map a user's keyboard layout, required to crack TC/VC system boot volumes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Some notes about the --keyboard-layout-mapping feature:<br />
<br />
This new configuration item was added to handle a special TrueCrypt and VeraCrypt "feature" which is automatically active during the setup of encryption for a system partition or an entire system drive. Due to BIOS requirements, the user's keyboard layout is always set to the US keyboard layout during the pre-boot stage (no matter which layout is actually in use). In other words, in the pre-boot stage, when TC/VC asks the user to enter the password, the layout is actually set to the US keyboard layout.<br />
<br />
To avoid conflicts with the real keyboard layout configured in the OS, both TC and VC have a little trick: they set the OS keyboard layout to US keyboard layout while the password prompt window is opened. You can actually verify this in the language task bar while the password prompt window is open. It will switch from whatever is configured to English, and after the window is closed, the original keyboard layout is restored.<br />
<br />
This has a serious impact on cracking the password. For example, my German keyboard layout is a "QWERTZ" layout, while the US keyboard uses a "QWERTY" layout: the positions of the "y" and "z" keys are swapped. If it were just that, it wouldn't be much of a problem - but almost all of the special symbols are mapped very differently. (I won't go into the details; you might want to compare the layouts yourself for fun.)<br />
<br />
And when it comes to non-Latin based languages, this behaviour gets completely out of control. Just one example: If the user enters the password <span style="font-weight: bold;" class="mycode_b">بين التخصصات</span> (interdisciplinary) on an Arabic keyboard, the password we need to guess is: <span style="font-weight: bold;" class="mycode_b">fdk hgjowwhj[g</span>.<br />
<br />
To deal with all of this, a hashcat user needs to know exactly which keyboard was enabled when the password was entered into the password window during setup. For German, I've added an example keyboard layout to the newly created folder "layouts", which now ships with the binary and on GitHub master. For instance, if you know a German keyboard was used, you can now add "--keyboard-layout-mapping layouts/de.hckmap" to the commandline.<br />
<br />
Unfortunately, since I don't own all of the existing keyboards, it will be necessary for hashcat users to contribute the rest of the missing mapping tables - ideally, as a GitHub PR. Almost every language I know has special keyboard layouts. There's even a difference between the UK and US layouts.<br />
<br />
Here's how you can help. To create a language-specific mapping table, open a text editor, and press every key on the keyboard, starting from the top left to the top right. Press Enter after every key. Use only keys which represent a real character, and ignore control keys such as Backspace, Caps Lock, etc. Then move to the next row below and repeat the process from the left to the right, and so on until you reach the space character. At that point, repeat exactly the same sequence, but with Shift pressed. When done, add a Tab after each character (Tab is used as the separator character). Then switch the keyboard layout to English and repeat the entire process in exactly the same order, adding each character after the tab character. Hashcat fully supports all multibyte characters up to 32 bits on both sides of the mapping table (even though the right side will always be a single-byte character). As an example, see the layouts/de.hckmap file.<br />
<br />
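To make the table format concrete, here is a minimal Python sketch (not part of hashcat) of parsing such a two-column, tab-separated mapping file and applying it to a password. The German excerpt below is illustrative only, not a complete layout:

```python
# Sketch of applying a keyboard-layout mapping table. Each line holds the
# local-layout character, a tab, and the corresponding US-layout character,
# matching the format described above (see layouts/de.hckmap for the real
# example shipped with hashcat).

def load_hckmap(lines):
    mapping = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        local, us = line.split("\t")
        mapping[local] = us
    return mapping

def remap(password, mapping):
    # Characters not present in the table are kept as-is.
    return "".join(mapping.get(c, c) for c in password)

# Tiny German QWERTZ -> US QWERTY excerpt (illustrative only):
de_excerpt = ["z\ty", "y\tz", "ö\t;", "ä\t'"]
mapping = load_hckmap(de_excerpt)
print(remap("zylinderö", mapping))  # -> yzlinder;
```

With a table like this, a wordlist in the user's native layout can be translated into the keystrokes TC/VC actually saw in the pre-boot password prompt.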
Note that when it comes to Alt/AltGr, there is behavior we can exploit: TC/VC does not accept those modifier keys. If a user presses AltGr while entering the password, a window appears telling the user that the use of this key is not allowed. For instance, on my German keyboard layout, I need AltGr+q to get the "@" character. As a consequence, we know that a TC/VC password cannot include any of the characters ("@", "[", "]", "\", "€", "|", "{", "}", "~") if the user was using a German keyboard to enter the password.<br />
<br />
At the same time, we can guarantee that "@" will never be listed on the left side of the mapping table - because the only characters that can appear there are the ones that are reachable without any modifier or by using Shift (but not AltGr). If we combine these concepts, we could add some code to reject all passwords which contain at least one character not listed in a mapping table. This is not yet implemented - but I'll add it if hashcat users agree that there is value in it.<br />
<br />
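As a sketch of that proposed rejection rule (hypothetical code, not something hashcat currently implements), the check could look like this; the 'reachable' set is an illustrative stand-in for the left column of a .hckmap file:

```python
# Reject any candidate containing a character that never appears on the
# left side of the mapping table, i.e. one only reachable via AltGr on
# the target layout. The set below is an illustrative stand-in for the
# left column of layouts/de.hckmap, not the real file contents.
reachable = set("abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "0123456789!$%&/()=?")

def is_viable(candidate):
    return all(c in reachable for c in candidate)

print(is_viable("Secret123"))  # True
print(is_viable("p@ss"))       # False: '@' needs AltGr on a German layout
```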
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>OpenCL Devices: Add support for up to 64 OpenCL devices per system<br />
</li>
<li>OpenCL Platforms: Add support for up to 64 OpenCL platforms per system<br />
</li>
<li>OpenCL Runtime: Use our own yielding technique for synchronizing rather than vendor specific<br />
</li>
<li>Startup: Show OpenCL runtime initialization message (per device)<br />
</li>
<li>xxHash: Added support for using the version provided by the OS/distribution<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed automated calculation of brain-session when not using all hashes in the hashlist<br />
</li>
<li>Fixed calculation of brain-attack if a given wordlist has zero size<br />
</li>
<li>Fixed checking the length of the last token in a hash if it was given the attribute TOKEN_ATTR_FIXED_LENGTH<br />
</li>
<li>Fixed endianness and invalid separator character in outfile format for hash-mode 16801 (WPA-PMKID-PMK)<br />
</li>
<li>Fixed ignoring --brain-client-features configuration when brain server has attack-position information from a previous run<br />
</li>
<li>Fixed invalid hardware monitor detection in benchmark mode<br />
</li>
<li>Fixed invalid warnings about throttling when --hwmon-disable was used<br />
</li>
<li>Fixed missing call to WSACleanup() to cleanly shut down the Windows sockets system<br />
</li>
<li>Fixed missing call to WSAStartup() and client indexing in order to start the brain server on Windows<br />
</li>
<li>Fixed out-of-boundary read in DPAPI masterkey file v2 OpenCL kernel<br />
</li>
<li>Fixed out-of-bounds write in short-term memory of the brain server<br />
</li>
<li>Fixed output of --speed-only and --progress-only when fast hashes are used in combination with --slow-candidates<br />
</li>
<li>Fixed selection of OpenCL devices (-d) if there are more than 32 OpenCL devices installed<br />
</li>
<li>Fixed status output of progress value when -S and -l are used in combination<br />
</li>
<li>Fixed thread count maximum for pure kernels in straight attack mode<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Brain: Set --brain-client-features default from 3 to 2<br />
</li>
<li>Dependencies: Added xxHash and OpenCL-Headers to deps/ in order to allow building hashcat from GitHub source release package<br />
</li>
<li>Dependencies: Removed gitmodules xxHash and OpenCL-Headers<br />
</li>
<li>Keymaps: Added hashcat keyboard mapping us.hckmap (can be used as template)<br />
</li>
<li>Keymaps: Added hashcat keyboard mapping de.hckmap<br />
</li>
<li>Hardware Monitor: Renamed --gpu-temp-abort to --hwmon-temp-abort<br />
</li>
<li>Hardware Monitor: Renamed --gpu-temp-disable to --hwmon-disable<br />
</li>
<li>Memory: Limit maximum host memory allocation depending on bitness<br />
</li>
<li>Memory: Reduced default maximum bitmap size from 24 to 18 and give a notice to use --bitmap-max to restore<br />
</li>
<li>Parameter: Rename --nvidia-spin-damp to --spin-damp (now accessible for all devices)<br />
</li>
<li>Pidfile: Treat a corrupted pidfile like a non-existent pidfile<br />
</li>
<li>OpenCL Device: Do a real query on OpenCL local memory type instead of just assuming it<br />
</li>
<li>OpenCL Runtime: Disable auto-vectorization for the Intel OpenCL runtime to work around a hanging JiT since version 18.1.0.013<br />
</li>
<li>Tests: Added hash-mode 11700 (Streebog-256)<br />
</li>
<li>Tests: Added hash-mode 11750 (HMAC-Streebog-256 (key = &#36;pass), big-endian)<br />
</li>
<li>Tests: Added hash-mode 11760 (HMAC-Streebog-256 (key = &#36;salt), big-endian)<br />
</li>
<li>Tests: Added hash-mode 11800 (Streebog-512)<br />
</li>
<li>Tests: Added hash-mode 11850 (HMAC-Streebog-512 (key = &#36;pass), big-endian)<br />
</li>
<li>Tests: Added hash-mode 11860 (HMAC-Streebog-512 (key = &#36;salt), big-endian)<br />
</li>
<li>Tests: Added hash-mode 13711 (VeraCrypt PBKDF2-HMAC-RIPEMD160 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13712 (VeraCrypt PBKDF2-HMAC-RIPEMD160 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13713 (VeraCrypt PBKDF2-HMAC-RIPEMD160 + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13721 (VeraCrypt PBKDF2-HMAC-SHA512 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13722 (VeraCrypt PBKDF2-HMAC-SHA512 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13723 (VeraCrypt PBKDF2-HMAC-SHA512 + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13731 (VeraCrypt PBKDF2-HMAC-Whirlpool + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13732 (VeraCrypt PBKDF2-HMAC-Whirlpool + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13733 (VeraCrypt PBKDF2-HMAC-Whirlpool + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13751 (VeraCrypt PBKDF2-HMAC-SHA256 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13752 (VeraCrypt PBKDF2-HMAC-SHA256 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13753 (VeraCrypt PBKDF2-HMAC-SHA256 + XTS 1536 bit)<br />
</li>
<li>Tests: Added hash-mode 13771 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 512 bit)<br />
</li>
<li>Tests: Added hash-mode 13772 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1024 bit)<br />
</li>
<li>Tests: Added hash-mode 13773 (VeraCrypt PBKDF2-HMAC-Streebog-512 + XTS 1536 bit)<br />
</li>
<li>Tests: Added VeraCrypt containers for Kuznyechik cipher and cascades<br />
</li>
<li>Tests: Added VeraCrypt containers for Camellia cipher and cascades<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v5.0.0]]></title>
			<link>https://hashcat.net/forum/thread-7903.html</link>
			<pubDate>Sun, 28 Oct 2018 16:45:27 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-7903.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v5.0.0!<br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about two new major features:<br />
<ul class="mycode_list"><li>The hashcat brain<br />
</li>
<li>Slow candidates<br />
</li>
</ul>
Before we go into the long read on these new features, here are all the other changes that come along with this release:<br />
<br />
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 17300 = SHA3-224<br />
</li>
<li>Added hash-mode 17400 = SHA3-256<br />
</li>
<li>Added hash-mode 17500 = SHA3-384<br />
</li>
<li>Added hash-mode 17600 = SHA3-512<br />
</li>
<li>Added hash-mode 17700 = Keccak-224<br />
</li>
<li>Added hash-mode 17800 = Keccak-256<br />
</li>
<li>Added hash-mode 17900 = Keccak-384<br />
</li>
<li>Added hash-mode 18000 = Keccak-512<br />
</li>
<li>Added hash-mode 18100 = TOTP (HMAC-SHA1)<br />
</li>
<li>Removed hash-mode 5000 = SHA-3 (Keccak)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Added additional hybrid "passthrough" rules, to enable variable-length append/prepend attacks<br />
</li>
<li>Added a periodic check for read timeouts in stdin/pipe mode, and abort if no input was provided<br />
</li>
<li>Added a tracker for salts, amplifier and iterations to the status screen<br />
</li>
<li>Added option --markov-hcstat2 to make it clear that the new hcstat2 format (compressed hcstat2gen output) must be used<br />
</li>
<li>Allow bitcoin master key lengths other than 96 bytes (but they must be always multiples of 16)<br />
</li>
<li>Allow hashfile for -m 16800 to be used with -m 16801<br />
</li>
<li>Allow keepass iteration count to be larger than 999999<br />
</li>
<li>Changed algorithms using colon as separators in the hash to not use the hashconfig separator on parsing<br />
</li>
<li>Do not allocate memory segments for bitmap tables if we don't need them (for example, in benchmark mode)<br />
</li>
<li>Got rid of OPTS_TYPE_HASH_COPY for Ansible Vault<br />
</li>
<li>Improved the speed of the outfile folder scan when using many hashes/salts<br />
</li>
<li>Increased the maximum size of edata2 in Kerberos 5 TGS-REP etype 23<br />
</li>
<li>Make the masks parser more restrictive by rejecting a single '?' at the end of the mask (use ?? instead)<br />
</li>
<li>Override --quiet and show final status screen in case --status is used<br />
</li>
<li>Removed duplicate words in the dictionary file example.dict<br />
</li>
<li>Updated Intel OpenCL runtime version check<br />
</li>
<li>Work around some AMD OpenCL runtime segmentation faults<br />
</li>
<li>Work around some padding issues with host compilers and OpenCL JiT on 32 and 64-bit systems<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed an invalid scalar datatype return value in hc_bytealign() where it should be a vector datatype return value<br />
</li>
<li>Fixed a problem with attack mode -a 7 together with stdout mode where the mask bytes were missing in the output<br />
</li>
<li>Fixed a problem with tab completion where --self-test-disable incorrectly expected a further parameter/value<br />
</li>
<li>Fixed a race condition in status view that led to out-of-bounds reads<br />
</li>
<li>Fixed detection of unique ESSID in WPA-PMKID-* parser<br />
</li>
<li>Fixed missing wordlist encoding in combinator mode<br />
</li>
<li>Fixed speed/delay problem when quitting while the outfile folder is being scanned<br />
</li>
<li>Fixed the ciphertext max length in Ansible Vault parser<br />
</li>
<li>Fixed the tokenizer configuration in Postgres hash parser<br />
</li>
<li>Fixed the byte order of digest output for hash-mode 11800 (Streebog-512)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
<div style="text-align: center;" class="mycode_align"><span style="font-weight: bold;" class="mycode_b">Major Feature: Slow Candidates</span></div>
<br />
<hr class="mycode_hr" />
<br />
Hashcat has a new generic password candidate interface called "slow candidates".<br />
<br />
The first goal of this new interface is to allow attachment of advanced password candidate generators in the future (for example hashcat's table attack, kwprocessor, OMEN, PassGAN, PCFG, princeprocessor, etc.). At this time, the only attack modes that have been added are hashcat's straight attack (including rules engine), combinator attack, and mask attack (AKA brute-force with Markov optimizer). You can enable this new general password-candidate interface by using the new -S/--slow-candidates option.<br />
<br />
The second goal of the slow candidates engine is to generate password candidates on-host (on CPU). This is useful when attacking large hashlists with fast hashes (but many salts), or generally with slow hashes. Sometimes we cannot fully run large wordlists in combination with rules, because it simply takes too much time. But if we know of a useful pattern that works well with rules, we often want to use rules with a smaller, targeted wordlist instead, in order to exploit the pattern. On GPU, this creates a bottleneck in hashcat's architecture - because hashcat can only assign the words from the wordlist to the GPU compute units.<br />
<br />
A common workaround for this is to use a pipe, and feed hashcat to itself. But this traditional piping approach comes at a cost - no ETA, no way to easily distribute chunks, etc. It is also completely incompatible with overlays like Hashtopolis. And if piping hashcat to itself isn't feasible for some reason, you quickly run into performance problems with small wordlists and large rulesets.<br />
<br />
To demonstrate this, here's an example where you have a very small wordlist with just a single word in the wordlist, but a huge ruleset to exploit some pattern:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; wc -l wordlist.txt<br />
1 wordlist.txt<br />
&#36; wc -l pattern.rule<br />
99092 pattern.rule</blockquote>
<br />
Since the total number of candidates is ([number-of-words-from-wordlist] * [number-of-rules]), this attack should theoretically be enough to fully feed all GPU compute units. But in practice, hashcat works differently internally - mostly to deal with fast hashes. This makes the performance of such an attack terrible:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 400 example400.hash wordlist.txt -r pattern.rule --speed-only<br />
...<br />
Speed.#2.........:      145 H/s (0.07ms)</blockquote>
<br />
This is where slow candidates comes into play. To feed the GPU compute units more efficiently, hashcat applies rules on-host instead, creating a virtual wordlist in memory for fast access. But more importantly from hashcat's perspective, we now have a large wordlist, which allows hashcat to supply all GPU compute units with candidates. Since hashcat still needs to transfer the candidates over PCI-Express, this limits cracking performance somewhat. In exchange, we get a large overall performance increase - multiple times higher, even considering the PCI-Express bottleneck - for both slow hashes and salted fast hashes with many salts.<br />
<br />
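As a rough illustration of what on-host generation means, here is a toy Python version of rule application building a virtual wordlist in memory. Only a tiny subset of hashcat's rule functions (':' passthrough, '$X' append, 'u' uppercase) is emulated:

```python
# Build a "virtual wordlist" on the host by expanding every word with
# every rule, so the GPU can be fed one large list of final candidates.

def apply_rule(word, rule):
    if rule == ":":            # passthrough
        return word
    if rule.startswith("$"):   # append a character
        return word + rule[1]
    if rule == "u":            # uppercase the whole word
        return word.upper()
    return word                # unknown rules are no-ops in this sketch

wordlist = ["password"]              # a one-word wordlist, as in the example
rules = [":", "$1", "$!", "u"]       # stand-in for a 99k-line pattern.rule
virtual_wordlist = [apply_rule(w, r) for w in wordlist for r in rules]
print(virtual_wordlist)  # ['password', 'password1', 'password!', 'PASSWORD']
```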
Here's the exact same attack, but using the new -S option to turn on slow candidates:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 400 example400.hash wordlist.txt -r pattern.rule --speed-only -S<br />
...<br />
Speed.#2.........:   361.3 kH/s (3.54ms)</blockquote>
<br />
<hr class="mycode_hr" />
<br />
<div style="text-align: center;" class="mycode_align"><span style="font-weight: bold;" class="mycode_b">Major Feature: The hashcat brain</span></div>
<br />
<hr class="mycode_hr" />
<br />
This feature will have a significant impact on the art of password cracking - either cracking alone, in small teams over a local network, or in large teams over the Internet.<br />
<br />
From a technical perspective, the hashcat brain consists of two in-memory databases called "long-term" and "short-term". Since the human brain also has a long-term and a short-term memory, I chose to name this feature the "hashcat brain". No worries, you don't need to understand artificial intelligence (AI) here - we are simply talking about the "memory features" of the human brain.<br />
<br />
Put simply, the hashcat brain persistently remembers the attacks you've executed against a particular hashlist in the past ... but on a low level.<br />
<br />
Hashcat will check each password candidate against the "brain" to find out if that candidate was already checked in the past and then accept it or reject it. The brain will check each candidate for existence in both the long-term and short-term memory areas. The nice thing is that it does not matter which attack-mode originally was used - it can be straight attack, mask attack or any of the advanced future generators. <br />
<br />
The brain computes a hash (a very fast one called xxHash) of every password candidate and stores it in the short-term memory first. Hashcat then starts cracking the usual way. Once it's done cracking, it sends a "commit" signal to the hashcat brain, which then moves the candidates from the short-term memory into the long-term memory.<br />
<br />
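The bookkeeping described above can be sketched in a few lines of Python (the real brain hashes candidates with xxHash; hashlib's blake2b stands in here, since xxHash is not in the Python standard library):

```python
import hashlib

# Two in-memory candidate-hash stores, as described above.
long_term, short_term = set(), set()

def offer(candidate):
    # Hash the candidate and reject it if either memory already has it.
    h = hashlib.blake2b(candidate.encode(), digest_size=8).digest()
    if h in long_term or h in short_term:
        return False             # reject: already tested
    short_term.add(h)
    return True                  # accept: new candidate

def commit():
    # On "commit", short-term entries become permanent long-term entries.
    long_term.update(short_term)
    short_term.clear()

assert offer("123456") is True
assert offer("123456") is False  # duplicate rejected within the same run
commit()
assert offer("123456") is False  # still rejected after the commit
```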
The hashcat brain feature uses a client/server architecture. That means that the hashcat brain itself is actually a network server. I know, I know - you don't want any network sockets in your hashcat process? No problem, then disable the feature in the makefile by setting ENABLE_BRAIN=0 and it will be gone forever. <br />
<br />
It's a network server for a reason. This way we can run multiple hashcat clients ... all using the same hashcat brain. This is great for collaboration with many people involved - plus it stays alive after the client shuts down. (Note, however, that even if you want to only use brain functionality locally, you must run two separate instances of hashcat - one to be the brain server, and one to be the client and perform attacks).<br />
<br />
That's it from the technical perspective. It's hard to explain how much potential there is in this, and I'm wondering why I didn't invent this sooner. Maybe it took the Crack Me If You Can password-cracking challenge to realize that we need a feature like this.<br />
<br />
Before you try it out yourself, let me show you a few examples.<br />
<br />
<hr class="mycode_hr" />
<br />
Example 1: Duplicate candidates all around us<br />
<br />
<hr class="mycode_hr" />
<br />
There's no doubt that rule-based attacks are the greatest general purpose attack-modifier on an existing wordlist. But they have a little-known problem: They produce a lot of duplicate candidates. While this is not relevant for fast hashes, it has a large impact on slow hashes.<br />
<br />
In this example, we apply best64.rule to example.dict, and write the result to test.txt:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat --stdout example.dict -r rules/best64.rule -o test.txt</blockquote>
<br />
Now we can see how many candidates were produced:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; cat test.txt | wc -l<br />
9888032</blockquote>
<br />
And now, let's see how many unique candidates are inside:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; sort -u test.txt | wc -l<br />
7508620</blockquote>
<br />
Of course, the wordlist and rules used have a large impact on the number of duplicates. In our example - a common wordlist and general purpose rule - the average ratio of produced dupes seems to be around 25%. And all of these dupes are detected by the brain:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z example0.hash example.dict -r rules/best64.rule<br />
...<br />
Rejected.........: 2379391/9888032 (24.06%)</blockquote>
<br />
Note:<br />
<ul class="mycode_list"><li>Hashcat brain rejects dynamically created duplicate candidates<br />
</li>
<li>On average, around 25% of dynamically created candidates are duplicates<br />
</li>
<li>Eliminating the duplicate 25% reduces the attack time by 25%<br />
</li>
</ul>
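A quick sanity check on the numbers above; the small gap between the computed duplicate count and the rejected count is expected, since rejection happens chunk by chunk while the attack runs:

```python
total, unique = 9888032, 7508620        # candidates produced vs. unique
duplicates = total - unique
print(duplicates)                       # 2379412 duplicates in the output
print(round(2379391 / total * 100, 2))  # 24.06 (% rejected by the brain)
```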
<hr class="mycode_hr" />
<br />
Example 2: stop caring about what you've done in the past<br />
<br />
<hr class="mycode_hr" />
<br />
Think of this: you have a single hash, but it is very high profile. You can use all of your resources. You start cracking - nothing. You try a different attack - still nothing. You're frustrated, but you must continue. So you try more attacks ... but even after two or more days - nothing. You start wondering what you've already done, but you're starting to lose track, getting tired, and making mistakes. Guess what? The hashcat brain comes to the rescue! Here's an attack that you've tried:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc rockyou.txt<br />
...<br />
Time.Started.....: xxx (32 mins, 6 secs)</blockquote>
<br />
Note that the way you use hashcat doesn't change at all. The hash mode and attack mode can be replaced with anything you'd like. The only difference in your attack is that you add the new -z option to enable hashcat's new brain "client" functionality. By using -z you will also automatically enable the use of "slow candidates" -S mode.<br />
<br />
Now let's say that two days later, you forgot that you already performed the attack before. Or maybe it wasn't you who forgot, it's just your coworker on a different machine also trying. This is what happens:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc rockyou.txt<br />
...<br />
Rejected.........: 14344384/14344384 (100.00%)<br />
Time.Started.....: xxx (15 secs)</blockquote>
<br />
The hashcat brain correctly rejected *all* of the candidates.<br />
<br />
Important things to note here:<br />
<ul class="mycode_list"><li>The rejected count exactly matches the keyspace.<br />
</li>
<li>The attack took a bit of time - it's not 0 seconds. The process is not completely without cost. The client must hash all of the candidates, and transfer them to the hashcat brain; the hashcat brain must then search for those candidates in both memory regions, and send back a reject list; and then hashcat must select new candidates to fill the reject gaps, and so on ...<br />
</li>
<li>Most important: 15 seconds is less than 32 minutes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Example 3: It's the candidates that matter, not the attack<br />
<br />
<hr class="mycode_hr" />
<br />
As I've stated above, it's not the command line that is stored somehow - it's not high level storage in this mode. This is where the hashcat brain server starts to create a strong advantage over manual (even organized) selection of attacks, because of the overlaps that naturally occur when carrying out a variety of attacks:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc -a 3 ?d?d?d?d<br />
...<br />
Rejected.........: 6359/10000 (63.59%)</blockquote>
<br />
So what happened here? It rejected 63.59% of a mask? Yes, it did. The reason is this:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; grep -c '^[0123456789]\{4\}&#36;' rockyou.txt<br />
6359</blockquote>
<br />
Notes:<br />
<ul class="mycode_list"><li>The previous command from the second example kicks in here. The rockyou wordlist contains 6359 pure-digit passwords of length 4, and the hashcat brain was able to reject them - because the mask ?d?d?d?d will also produce them<br />
</li>
<li>The hashcat brain does not care about your attack mode. Actually, you could say that the hashcat brain creates a kind of dynamic cross attack-mode while you are using it. As you can see here, attack-mode 0 and attack-mode 3 work together.<br />
</li>
<li>The hashcat brain does not end after hashcat finishes - it stays intact because it's a stand-alone process<br />
</li>
</ul>
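Example 3 in miniature: candidates already tested via the wordlist run are rejected when the mask produces them again. This is a toy Python sketch, with three words standing in for the rockyou history:

```python
# Cross-attack rejection: the brain doesn't care which attack mode first
# produced a candidate, only whether its hash was seen before.
tested = {"1234", "0000", "hello"}   # stand-in for the earlier wordlist run
mask_keyspace = [f"{i:04d}" for i in range(10000)]  # keyspace of ?d?d?d?d
accepted = [c for c in mask_keyspace if c not in tested]
print(len(mask_keyspace) - len(accepted))  # 2 ("1234" and "0000" rejected)
```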
<hr class="mycode_hr" />
<br />
Example 4: Improve on what you've done in the past<br />
<br />
<hr class="mycode_hr" />
<br />
So you're out of ideas, and you start to run some simple brute-force. But you're clever, because you know the target tends to use the symbol "&#36;" somewhere inside the password, and you optimize your mask for this. Let's start with an example not using the hashcat brain:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36; ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (5 hours, 37 mins)<br />
Progress.........: 2565726409/2565726409 (100.00%)</blockquote>
<br />
Damn - it did not crack. But then your coworker shows up and tells you that he found out that the target isn't just using the "&#36;" symbol in his passwords, but also the "!" symbol. Damn, this makes your previous run (which took 5.5 hours) completely useless - wasted! You now need even more time for the correct run:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36;! ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (6 hours, 39 mins)<br />
Progress.........: 3010936384/3010936384 (100.00%)</blockquote>
<br />
Now we do the same again, but with hashcat brain enabled. All of the work of that first command will no longer be wasted. The same commandline history, but this time with hashcat brain enabled, looks like this:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36; ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (5 hours, 37 mins)</blockquote>
<br />
But now, if we add the "!" character, we see the difference:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36;! ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (1 hour, 5 mins)</blockquote>
<br />
So you can see here how the hashcat brain helps you to reduce the time for the second attack, from ~6 hours to ~1 hour.<br />
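The arithmetic behind this saving is easy to verify, using only the keyspace numbers from the runs above:<br />
<br />
```python
# Keyspaces of the two masks: 6 positions over the custom charset ?1.
charset1 = 26 + 10 + 1            # ?l + ?d + '$'       -> 37 symbols
charset2 = 26 + 10 + 2            # ?l + ?d + '$' + '!' -> 38 symbols

keyspace1 = charset1 ** 6         # matches Progress of the first run
keyspace2 = charset2 ** 6         # matches Progress of the second run

# Every candidate of run 1 also appears in run 2, so with the brain
# only the difference remains as new work.
new_work = keyspace2 - keyspace1
fraction = new_work / keyspace2
print(keyspace1, keyspace2, new_work)   # 2565726409 3010936384 445209975
```
<br />
Roughly 15% of the second keyspace is genuinely new work - which lines up with the drop from ~6 hours to ~1 hour.<br />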
<br />
<hr class="mycode_hr" />
<br />
Example 5: The resurrection of the random rules<br />
<br />
<hr class="mycode_hr" />
<br />
Random rules and salts? No way! Take a look at this, it's horrible:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; cat wordlist.txt<br />
password<br />
&#36; ./hashcat wordlist.txt --stdout -g 100000 | sort -u | wc -l<br />
20473</blockquote>
<br />
What I'm trying to show here is how inefficient the random rules actually are (and always have been). They produce tons of duplicate work.<br />
<br />
As you can see from the above example, only 20473 of the 100000 randomly produced candidates are unique - the remaining ~80% is just wasted time.<br />
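The effect is easy to reproduce with a toy model of random rules (hypothetical Python with a made-up five-operation rule generator, not hashcat's actual -g engine):<br />
<br />
```python
import random

# Toy model of -g: randomly generated rules map the same base word onto a
# small set of mutations, so duplicates pile up very quickly.
random.seed(42)

def random_rule(word: str) -> str:
    op = random.choice(["lower", "upper", "dup_last", "append_digit", "reverse"])
    if op == "lower":
        return word.lower()
    if op == "upper":
        return word.upper()
    if op == "dup_last":
        return word + word[-1]
    if op == "append_digit":
        return word + str(random.randint(0, 9))
    return word[::-1]

candidates = [random_rule("password") for _ in range(100000)]
unique = len(set(candidates))
print(unique, len(candidates))   # far fewer unique candidates than generated
```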
<br />
I cannot believe that I've never thought about this in detail, but now the hashcat brain brings this to an end:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>./hashcat -z hashlist.txt wordlist.txt -g 100000<br />
...<br />
Rejected.........: 82093/100000 (82.09%)</blockquote>
<br />
This alone gives -g a new role in password cracking. If you've ever attended a password cracking contest, you know how important it is to find the patterns that were used to generate the password candidates. Because finding new patterns using the combination of random-rules and debug-rules is a very efficient way to find new attack vectors.<br />
<br />
For example, Team Hashcat managed to crack 188k/300k of the SSHA hashlist from the 2018 CMIYC contest - a strong showing. But with random rules, there's a really good chance that you'll discover what you missed. Here's an example of an attack I ran for only a few minutes while writing this document:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 111 c0_111.list.txt wordlist.txt -g 100000 --debug-mode 4<br />
...<br />
INFO: Removed 188292 hashes found in potfile.<br />
...<br />
time:Z4 R3:tim2eeee<br />
sexual:Y3 Z5 O35:sexllllll<br />
poodle:Y2 T3 sBh:pooDlele<br />
pass123:C z5:ppppppASS123<br />
pool:y4 Z2 Y1:poolpoollll<br />
profit:o8F ^_:_profit<br />
smashing:Z3:smashingggg</blockquote>
<br />
These are real passwords that Team Hashcat didn't crack during the contest. What matters here is that you can see hints for possible patterns - which counts much more than just cracking a single password. And if you run the exact same command again, hashcat will generate different rules and you get more cracks, and discover more new patterns. You can do this again and again. We call this technique "raking".<br />
<br />
Note: It can occur that a pattern discovered from random rules matches an already known pattern. In such a case, it's a strong sign that this pattern may have been searched already, but has not yet been searched exhaustively. Perhaps a previous attack was stopped too early. But with the hashcat brain, that's no longer important - we can just apply the pattern without any worry about creating double work.<br />
<br />
<hr class="mycode_hr" />
<br />
The costs of hashcat brain<br />
<br />
<hr class="mycode_hr" />
<br />
It should now be clear what the potential is here. There are many other examples where this feature really kicks in, but I'm sure you already have your own ideas.<br />
<br />
Of course, the hashcat brain does not come for free - there are limitations. It's important to know some key numbers to decide when to use it (and when not to).<br />
<br />
Each password candidate creates a hash of 8 bytes that has to be transferred, looked up and stored in the hashcat brain. This brings us to the first question: What kind of hardware do you need? Fortunately, this is pretty easy to calculate. If you have a server with 64 GB of physical memory, then you can store 8,000,000,000 candidates. I guess that's the typical size of every serious password cracker's wordlist; if you have more, you typically have too much trash in your wordlists. If you have less, then you just haven't been collecting them long enough.<br />
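The sizing math from this paragraph, as a quick sanity check (plain arithmetic, decimal gigabytes assumed, ignoring data-structure overhead):<br />
<br />
```python
# One brain entry is an 8-byte candidate fingerprint.
BYTES_PER_ENTRY = 8
ram_bytes = 64 * 10**9                    # 64 GB of physical memory (decimal GB)

# Back-of-the-envelope upper bound on stored candidates.
max_candidates = ram_bytes // BYTES_PER_ENTRY
print(max_candidates)                     # 8,000,000,000 candidates
```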
<br />
So let's assume a candidate list size of 8,000,000,000. That doesn't sound like too much - especially if you want to work with rules and masks. It should be clear that using the hashcat brain against a raw MD5 is not very efficient. But now things become interesting, because of some unexpected effects that kick in.<br />
<br />
Imagine you have a salted MD5 list - let's say vBulletin (VBULL), which is a fast hash (not a slow hash) - and you have many of them. In this case, each of the salts starts to work for us.<br />
<br />
Yes, you read that right - the more salts, the better!!<br />
<br />
Let's continue with our calculation and our 8,000,000,000 password example. The speed of a typical VBULL on a Vega64 is 2170.6 MH/s. If we have 300,000 salts, the speed drops to 7235 H/s. Now to feed the hashcat brain at a rate of 7235 H/s, it will take you 1,105,736 seconds (or 12 days). That means you can run the hashcat brain for 12 days. It's an OK time I think, though I don't let many attacks run for such a long time. Also, this is an inexpensive server with 64GB physical RAM, and you could simply add more RAM, right? At this point we should also consider using swap memory. I think there's actually room for that - but I leave testing this to our users.<br />
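Here's that calculation spelled out, using the same numbers as above:<br />
<br />
```python
# Effective candidate consumption rate of a salted fast hash.
single_salt_speed = 2_170_600_000     # vBulletin on a Vega64: 2170.6 MH/s
salts = 300_000

per_salt_rate = single_salt_speed / salts          # candidates/s the brain must feed
feed_time_s = 8_000_000_000 / per_salt_rate        # time to exhaust 8e9 candidates
days = feed_time_s / 86_400

print(round(per_salt_rate), round(days))           # ~7235 H/s, ~13 days
```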
<br />
Lookup times are pretty good. The hashcat brain uses two binary trees, which means that the more hashes are added, the more efficient it becomes. Of course, the lookup times will increase drastically in the first moments, but they stabilize at some point. Note that we typically do not compare just one entry vs. millions of entries - we compare hundreds of thousands of entries vs. millions of entries.<br />
<br />
<hr class="mycode_hr" />
<br />
Technical details on the hashcat brain server<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>The hashcat brain server saves the long-term memory to disk every 5 minutes automatically<br />
</li>
<li>The server also saves the long-term memory if the hashcat brain server is killed using Ctrl-C<br />
</li>
<li>There's no mitigation against database poisoning - this would cost too many resources<br />
</li>
<li>There's currently no mitigation against an evil client requesting the server to allocate too much memory<br />
</li>
<li>Make sure your hashcat brain server is protected with a good password, because you have to trust your clients<br />
</li>
<li>I'll add a standalone hashcat brain seeding tool later, which will enable you to easily push all the words from an entire wordlist or a mask very quickly. For now, you can use the --brain-session option to do so with hashcat itself<br />
</li>
<li>You can use --brain-session-whitelist in order to force the clients to use a specific hashlist<br />
</li>
<li>The protocol used is pretty simple and does not contain hashcat-specific information, which should make it possible for other cracking tools to utilize the server, too<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical details on the hashcat brain client<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>The client calculates the hashcat brain session based on the hashlist entries, to efficiently let a high number of salts work for us. You can override the session calculated with --brain-session, which makes sense if you want to use a fast hash in order to "seed" the hashcat brain with already-tried wordlists or masks.<br />
</li>
<li>The use of --remove is forbidden, but this should not really be a problem, since the potfile will do the same for you. Make sure to remove --potfile-disable in case you use it.<br />
</li>
<li>If multiple clients use the same attack on the same hashcat brain (which is a clever idea), you end up with a distributed solution - without the need for an overlay for keyspace distribution. However, this is not the intended use of the hashcat brain, and it should not be used as such. I'll explain later.<br />
</li>
<li>Since each password candidate creates an 8-byte hash, your client can generate some serious network upstream traffic. I'll explain later.<br />
</li>
<li>The use of xxHash as the hash is not required; we could exchange it for whatever hash we want. However, so far it's doing a great job.<br />
</li>
</ul>
The status view was updated to give you some real-time statistics about the network usage:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>Speed.#1.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#2.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#3.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#4.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
...<br />
Brain.Link.#1....: RX: 0 B (0.00 Mbps), TX: 4.1 MB (1.22 Mbps), sending<br />
Brain.Link.#2....: RX: 0 B (0.00 Mbps), TX: 4.7 MB (1.09 Mbps), sending<br />
Brain.Link.#3....: RX: 0 B (0.00 Mbps), TX: 3.5 MB (0.88 Mbps), sending<br />
Brain.Link.#4....: RX: 0 B (0.00 Mbps), TX: 4.1 MB (0.69 Mbps), sending</blockquote>
<br />
When the data is being transferred, there's no cracking - you can see it's doing 0 H/s. But if you have a slow hash, or a fast hash with multiple salts, this time can be seen as minor overhead; the major share of the time is still spent in the cracking phase. So if you have a fast hash: the more salts, the better! As soon as hashcat is done with the network communication, it starts working as usual:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>Speed.#1.........:   869.1 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#2.........:   870.8 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#3.........:   876.2 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#4.........:   872.6 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
...<br />
Brain.Link.#1....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle<br />
Brain.Link.#2....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle<br />
Brain.Link.#3....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle<br />
Brain.Link.#4....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle</blockquote>
<br />
<hr class="mycode_hr" />
<br />
The brain and the bottlenecks<br />
<br />
<hr class="mycode_hr" />
<br />
While working with Team Hashcat to test how the brain performs with large numbers of clients and over the Internet, I learned about some serious bottlenecks.<br />
<br />
The most important insight was about the performance of lookups against the brain. That should be obvious solely from the huge amount of data that we're talking about here, but the brain does not just have to look up millions of candidates against millions of existing database entries - it must also insert them into the database after each commit, and ensure the ordering stays intact, because otherwise the binary tree would break. This simply takes time, even though the lookup process is threaded. But the feature was so promising that I did not want to abandon development just because of the performance challenge.<br />
<br />
But to start from the beginning, keep the following number in mind: 50kH/s<br />
<br />
This was the maximum performance of the hashcat brain after the first development alpha was finished. In other words, if the performance of your attack was faster than this speed, the hashcat brain became the bottleneck. Now there's good and bad news about this:<br />
<br />
Bad: This is the total number. Which means, the entire network of all GPUs participating as clients cannot create more than 50kH/s before the bottleneck effect kicks in.<br />
<br />
Good: Salts come to the rescue. If you have a large salted hashlist - with, for example, 300,000 SSHA1 hashes (as in the last Crack Me If You Can) - the real maximum performance that the brain can handle jumps to 15 GH/s. (You can simply multiply the 50kH/s by the number of unique salts in your hashlist.)<br />
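The salt multiplier is plain arithmetic:<br />
<br />
```python
# Each candidate sent to the brain is checked against every salt on the GPU,
# so the tolerable total GPU speed scales with the salt count.
brain_limit = 50_000                  # candidates/s of the first brain alpha
salts = 300_000                       # e.g. the CMIYC SSHA1 hashlist
max_total_speed = brain_limit * salts
print(max_total_speed)                # 15,000,000,000 H/s = 15 GH/s
```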
<br />
Then there's another bottleneck: the network bandwidth required. For those of you who plan to use the brain inside a local network with 100Mbit, you can skip this section entirely. But for those who plan to use the brain in a large group, over VPN or in general over the Internet, keep in mind that a single GPU can create around 5Mbit/s of upstream before bandwidth becomes a bottleneck. That doesn't mean that a hashcat client will stop working - it will just reduce your theoretical maximum cracking performance.<br />
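The per-GPU upstream figure follows from the 8-byte hash size. The candidate rate below is an assumption I picked to match the ~5 Mbit/s mentioned above:<br />
<br />
```python
# Every candidate costs an 8-byte (64-bit) hash on the wire.
candidate_rate = 78_000                      # assumed candidates/s for one GPU
mbit_per_s = candidate_rate * 64 / 1e6
print(mbit_per_s)                            # ~5 Mbit/s of upstream
```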
<br />
Both of these lessons learned <span style="text-decoration: underline;" class="mycode_u"><span style="font-weight: bold;" class="mycode_b">led to an upgrade</span></span> of the brain during development. (This means that everything you've read up to this point is already outdated!) The bottlenecks still exist, but there is a mitigation for them. To better understand how the mitigation works, we need some new terminology in the hashcat brain universe - something we'll call brain client "features".<br />
<br />
When running as a client, hashcat now has a new parameter called --brain-client-features. With this parameter, you can select from two features (so far) that the client has to offer:<br />
<ul class="mycode_list"><li>Brain "Hashes" feature<br />
</li>
<li>Brain "Attacks" feature<br />
</li>
</ul>
The brain "hashes" feature is everything that we've explained from the beginning - the *low-level* function of the brain. The brain "attacks" feature is the *high-level* strategy which I added to mitigate the bottlenecks. By default, both "features" are active, and run in parallel. Depending on your use case, you can selectively enable or disable either one.<br />
<br />
The brain "attacks" feature should be explained in more detail in order to understand what it is doing. It is a high-level approach, or a compressed hint. A hashcat client requests this "hint" from the brain for a given attack as soon as the client is assigned a new work package from the local hashcat dispatcher. For example, if you have a system with 4 GPUs, the local hashcat dispatcher is responsible for distributing the workload across the local GPUs. What's new is that before a GPU starts actually working on the package, it asks the brain for a high-level confirmation of whether or not to proceed. The process works basically the same way as in the low-level architecture: the client "reserves" a package when the hashcat brain moves it to short-term memory - and once it is done, it is moved to long-term memory.<br />
<br />
The attack package itself is another 8-byte checksum - but that's more than enough to assign all feasible combinations of attacks a unique identifier. For example, hashcat takes options like the attack mode itself, rules with -r (but also -j and -k rules), masks, user-defined custom charset, Markov options, a checksum of the wordlists (if used) and so on. All of these options are combined in a repeatable way, and from that unique combination of options, a checksum is created that uniquely "fingerprints" all of the components of the attack.<br />
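A sketch of such an attack fingerprint (hypothetical - hashcat's real encoding and hash function differ; BLAKE2b and the field names here are stand-ins):<br />
<br />
```python
import hashlib
import json

def attack_checksum(attack_mode, mask=None, charsets=None,
                    wordlist_digest=None, rules=None):
    """Reduce all options that define an attack to one 8-byte identifier."""
    canonical = json.dumps({
        "mode": attack_mode,
        "mask": mask,
        "charsets": charsets,
        "wordlist": wordlist_digest,
        "rules": rules,          # rule order matters, so it is not sorted
    }, sort_keys=True)
    return hashlib.blake2b(canonical.encode(), digest_size=8).hexdigest()

a = attack_checksum(3, mask="?1?1?1?1?1?1", charsets=["?l?d$"])
b = attack_checksum(3, mask="?1?1?1?1?1?1", charsets=["?l?d$!"])
print(a != b)          # a different charset yields a different attack identity
```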
<br />
When the clients connect to the hashcat brain, they send this attack checksum (along with the session ID) to the brain, so that the brain knows precisely which attack is running on a particular hashcat client. Now, if the local dispatcher creates a new package, the local start point and end point of this attack is sent to the brain so that the brain can track it. The client will automatically reject an entire package - for example, an entire wordlist, or an entire wordlist plus a specific list of rules - if the attack has some overlaps. This is done *before* the client sends any password candidate hashes to the brain. This means that if a package is rejected:<br />
<ul class="mycode_list"><li>The client doesn't need to transfer the hashes (which mitigates the bandwidth bottleneck)<br />
</li>
<li>The brain server doesn't need to compare it (which mitigates the lookup bottleneck)<br />
</li>
</ul>
If the attack package itself is not rejected, the hashes are still sent to the brain and compared.<br />
<br />
The hashcat brain is kind of clever when it comes to the packages. It recognizes overlapping packages on a low level - in cases where only part of one package overlaps with another package. When this occurs, the brain only rejects the overlapping section of the package and informs the client about it. It is then up to the client to decide whether to launch the attack with the minimized package size, or to ask the local dispatcher for another (smaller) portion to fill the gap. Of course, this newly created portion is also sent to the brain first, in case it can be rejected. The entire process runs in a loop, repeating until the client decides that the package is big enough (the default threshold for accepting a package and starting execution is half of the original package size).<br />
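The package negotiation can be sketched as interval bookkeeping (a hypothetical simplification - the real brain tracks ranges per attack checksum, and commits only after the client finishes cracking):<br />
<br />
```python
# Keyspace ranges [start, end) already covered for one attack checksum.
covered = []

def reserve(start, end):
    """Return the sub-ranges of [start, end) that are still untried,
    then mark the whole range as covered."""
    todo, pos = [], start
    for s, e in sorted(covered):
        if e <= pos or s >= end:
            continue                  # no overlap with what's left
        if s > pos:
            todo.append((pos, s))     # gap before this covered range
        pos = max(pos, e)
    if pos < end:
        todo.append((pos, end))
    covered.append((start, end))
    return todo

r1 = reserve(0, 1000)       # nothing tried yet
r2 = reserve(500, 1500)     # first half already covered
print(r1, r2)               # [(0, 1000)] [(1000, 1500)]
```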
<br />
Something I realized - after I had already finished with the implementation of the high-level feature - was that the new brain "attack" feature is a very strong feature for standalone use. By setting --brain-client-features 2, you tell the client to only use the attack feature. This completely eliminates all bottlenecks - the network bandwidth, but even more importantly, the lookup bottleneck. The drawback is that you lose cross-attack functionality.<br />
<br />
If you think that this new feature is a nice way to get a native hashcat multi-system distribution ... you are wrong. The brain client still requires running in -S mode, which means that this is all about slow hashes or fast hashes with many salts. There's also no wordlist distribution, and most importantly, there's no distribution of cracked hashes across all network clients. So the brain "attack" feature is not meant to be an alternative to existing distribution solutions, but just as a mitigation for the bottlenecks (and it works exactly as such).<br />
<br />
<hr class="mycode_hr" />
<br />
Commandline Options<br />
<br />
<hr class="mycode_hr" />
<br />
Most of the options are self-explanatory. I'm just adding them here to inform you which ones exist:<br />
<ul class="mycode_list"><li>Add new option --brain-server to start a hashcat brain server<br />
</li>
<li>Add new option --brain-client to start a hashcat brain client, automatically activates --slow-candidates<br />
</li>
<li>Add new option --brain-host and --brain-port to specify the IP and port of the brain server, both for listening and connecting<br />
</li>
<li>Add new option --brain-session to override automatically calculated brain session ID<br />
</li>
<li>Add new option --brain-session-whitelist to allow only explicitly whitelisted session IDs on the brain server<br />
</li>
<li>Add new option --brain-password to specify the brain server authentication password<br />
</li>
<li>Add new option --brain-client-features which allows enabling and disabling certain features of the hashcat brain client<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Final words<br />
<br />
<hr class="mycode_hr" />
<br />
The hashcat brain development is now in its second month, and from what I've seen so far, it works well enough to release it. In all that time, I've also improved the maximum performance of the low-level hash lookup from 50kH/s to roughly 650kH/s (depending on which type of hardware the brain server is running on).<br />
<br />
Of course, it's possible that there are some bugs that I did not detect while developing this monster, and it's simply too much code to verify all of the different possible behaviors in different situations. Please let me know if you find any strange behavior, and I'll try to fix it as quickly as possible.<br />
<br />
This feature, in my opinion, will significantly alter the workflow of someone who is doing serious cracking on a daily basis. It's not just the time saving effect - it's mostly the confidence in your own work. This confidence is fed by two factors: first, that you know the brain will rule out duplicated work (when you simply didn't have enough time to track all of the details when running different attacks), and second, that you get immediate visible feedback when your attacks overlap and you're duplicating work. When you see that a given attack is producing a 20% or higher reject rate, it will give you a better understanding of what type of work is actually being performed by your hardware. This gives you deeper insight, and the chance to update and improve your own attack strategies at a high level.<br />
<br />
-- atom<br />
<br />
PS: If you build from sources, do not forget to run "git submodule update --init" to get the xxHash headers]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v5.0.0!<br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about two new major features:<br />
<ul class="mycode_list"><li>The hashcat brain<br />
</li>
<li>Slow candidates<br />
</li>
</ul>
Before we go into the long read of these new features, here are all the other changes that come along with this release:<br />
<br />
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 17300 = SHA3-224<br />
</li>
<li>Added hash-mode 17400 = SHA3-256<br />
</li>
<li>Added hash-mode 17500 = SHA3-384<br />
</li>
<li>Added hash-mode 17600 = SHA3-512<br />
</li>
<li>Added hash-mode 17700 = Keccak-224<br />
</li>
<li>Added hash-mode 17800 = Keccak-256<br />
</li>
<li>Added hash-mode 17900 = Keccak-384<br />
</li>
<li>Added hash-mode 18000 = Keccak-512<br />
</li>
<li>Added hash-mode 18100 = TOTP (HMAC-SHA1)<br />
</li>
<li>Removed hash-mode 5000 = SHA-3 (Keccak)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Added additional hybrid "passthrough" rules, to enable variable-length append/prepend attacks<br />
</li>
<li>Added a periodic check for read timeouts in stdin/pipe mode, and abort if no input was provided<br />
</li>
<li>Added a tracker for salts, amplifier and iterations to the status screen<br />
</li>
<li>Added option --markov-hcstat2 to make it clear that the new hcstat2 format (compressed hcstat2gen output) must be used<br />
</li>
<li>Allow bitcoin master key lengths other than 96 bytes (but they must be always multiples of 16)<br />
</li>
<li>Allow hashfile for -m 16800 to be used with -m 16801<br />
</li>
<li>Allow keepass iteration count to be larger than 999999<br />
</li>
<li>Changed algorithms using colon as separators in the hash to not use the hashconfig separator on parsing<br />
</li>
<li>Do not allocate memory segments for bitmap tables if we don't need them - for example, in benchmark mode<br />
</li>
<li>Got rid of OPTS_TYPE_HASH_COPY for Ansible Vault<br />
</li>
<li>Improved the speed of the outfile folder scan when using many hashes/salts<br />
</li>
<li>Increased the maximum size of edata2 in Kerberos 5 TGS-REP etype 23<br />
</li>
<li>Make the masks parser more restrictive by rejecting a single '?' at the end of the mask (use ?? instead)<br />
</li>
<li>Override --quiet and show final status screen in case --status is used<br />
</li>
<li>Removed duplicate words in the dictionary file example.dict<br />
</li>
<li>Updated Intel OpenCL runtime version check<br />
</li>
<li>Work around some AMD OpenCL runtime segmentation faults<br />
</li>
<li>Work around some padding issues with host compilers and OpenCL JiT on 32 and 64-bit systems<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed an invalid scalar datatype return value in hc_bytealign() where it should be a vector datatype return value<br />
</li>
<li>Fixed a problem with attack mode -a 7 together with stdout mode where the mask bytes were missing in the output<br />
</li>
<li>Fixed a problem with tab completion where --self-test-disable incorrectly expected a further parameter/value<br />
</li>
<li>Fixed a race condition in status view that led to out-of-bounds reads<br />
</li>
<li>Fixed detection of unique ESSID in WPA-PMKID-* parser<br />
</li>
<li>Fixed missing wordlist encoding in combinator mode<br />
</li>
<li>Fixed speed/delay problem when quitting while the outfile folder is being scanned<br />
</li>
<li>Fixed the ciphertext max length in Ansible Vault parser<br />
</li>
<li>Fixed the tokenizer configuration in Postgres hash parser<br />
</li>
<li>Fixed the byte order of digest output for hash-mode 11800 (Streebog-512)<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
<div style="text-align: center;" class="mycode_align"><span style="font-weight: bold;" class="mycode_b">Major Feature: Slow Candidates</span></div>
<br />
<hr class="mycode_hr" />
<br />
Hashcat has a new generic password candidate interface called "slow candidates".<br />
<br />
The first goal of this new interface is to allow attachment of advanced password candidate generators in the future (for example hashcat's table attack, kwprocessor, OMEN, PassGAN, PCFG, princeprocessor, etc.). At this time, the only attack modes that have been added are hashcat's straight attack (including rules engine), combinator attack, and mask attack (AKA brute-force with Markov optimizer). You can enable this new general password-candidate interface by using the new -S/--slow-candidates option.<br />
<br />
The second goal of the slow candidates engine is to generate password candidates on-host (on CPU). This is useful when attacking large hashlists with fast hashes (but many salts), or generally with slow hashes. Sometimes we cannot fully run large wordlists in combination with rules, because it simply takes too much time. But if we know of a useful pattern that works well with rules, we often want to use rules with a smaller, targeted wordlist instead, in order to exploit the pattern. On GPU, this creates a bottleneck in hashcat's architecture - because hashcat can only assign the words from the wordlist to the GPU compute units.<br />
<br />
A common workaround for this is to use a pipe, and feed hashcat to itself. But this traditional piping approach came at a cost - no ETA, no way to easily distribute chunks, etc. It was also completely incompatible with overlays like Hashtopolis. And if piping hashcat to itself isn't feasible for some reason, you quickly run into performance problems with small wordlists and large rulesets.<br />
<br />
To demonstrate this, here's an example where you have a very small wordlist with just a single word in the wordlist, but a huge ruleset to exploit some pattern:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; wc -l wordlist.txt<br />
1 wordlist.txt<br />
&#36; wc -l pattern.rule<br />
99092 pattern.rule</blockquote>
<br />
Since the total number of candidates is ([number-of-words-from-wordlist] * [number-of-rules]), this attack should theoretically be enough to fully feed all GPU compute units. But in practice, hashcat works differently internally - mostly to deal with fast hashes. This makes the performance of such an attack terrible:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 400 example400.hash wordlist.txt -r pattern.rule --speed-only<br />
...<br />
Speed.#2.........:      145 H/s (0.07ms)</blockquote>
<br />
This is where slow candidates comes into play. To feed the GPU compute units more efficiently, hashcat applies rules on-host instead, creating a virtual wordlist in memory for fast access. But more importantly from hashcat's perspective, we now have a large wordlist, which allows hashcat to supply all GPU compute units with candidates. Since hashcat still needs to transfer the candidates over PCI-Express, this slows down cracking performance. In exchange, we get a large overall performance increase - multiple times higher, even considering the PCI-Express bottleneck - for both slow hashes and salted fast hashes with many salts.<br />
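Conceptually, the on-host amplification looks like this (l, u, r and $X are real hashcat rule functions, but the engine here is a toy simplification):<br />
<br />
```python
# Apply rules on the host to turn a tiny wordlist into a large virtual one,
# so every GPU work item can be fed its own candidate.
def apply_rule(word, rule):
    if rule == "l":
        return word.lower()
    if rule == "u":
        return word.upper()
    if rule == "r":
        return word[::-1]
    if rule.startswith("$"):
        return word + rule[1:]         # $X appends character X
    return word

wordlist = ["hashcat"]                 # one word ...
rules = ["l", "u", "r"] + [f"${c}" for c in "0123456789"]

# ... becomes a 13-entry virtual wordlist held in host memory.
virtual_wordlist = [apply_rule(w, r) for w in wordlist for r in rules]
print(len(virtual_wordlist))           # 13
```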
<br />
Here's the exact same attack, but using the new -S option to turn on slow candidates:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 400 example400.hash wordlist.txt -r pattern.rule --speed-only -S<br />
...<br />
Speed.#2.........:   361.3 kH/s (3.54ms)</blockquote>
<br />
<hr class="mycode_hr" />
<br />
<div style="text-align: center;" class="mycode_align"><span style="font-weight: bold;" class="mycode_b">Major Feature: The hashcat brain</span></div>
<br />
<hr class="mycode_hr" />
<br />
This feature will have a significant impact on the art of password cracking - either cracking alone, in small teams over a local network, or in large teams over the Internet.<br />
<br />
From a technical perspective, the hashcat brain consists of two in-memory databases called "long-term" and "short-term". When I realized that the human brain also has such a long-term and a short-term memory, that's when I chose to name this feature the "hashcat brain". No worries, you don't need to understand artificial intelligence (AI) here - we are simply talking about the "memory features" of the human brain.<br />
<br />
Put simply, the hashcat brain persistently remembers the attacks you've executed against a particular hashlist in the past ... but on a low level.<br />
<br />
Hashcat will check each password candidate against the "brain" to find out if that candidate was already checked in the past and then accept it or reject it. The brain will check each candidate for existence in both the long-term and short-term memory areas. The nice thing is that it does not matter which attack-mode originally was used - it can be straight attack, mask attack or any of the advanced future generators. <br />
<br />
The brain computes a hash (a very fast one called xxHash) of every password candidate and stores it in the short-term memory first. Hashcat then starts cracking the usual way. Once it's done cracking, it sends a "commit" signal to the hashcat brain, which then moves the candidates from the short-term memory into the long-term memory.<br />
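In toy Python, the two memory areas and the commit step work like this (illustrative only - the real brain speaks a network protocol and uses xxHash; BLAKE2b is a stand-in):<br />
<br />
```python
import hashlib

long_term, short_term = set(), set()

def fingerprint(word):
    # 8-byte candidate fingerprint, standing in for xxHash
    return hashlib.blake2b(word.encode(), digest_size=8).digest()

def accept(word):
    """Reject candidates present in either memory; reserve new ones short-term."""
    h = fingerprint(word)
    if h in long_term or h in short_term:
        return False
    short_term.add(h)
    return True

def commit():
    """Called when the client finishes cracking: promote to long-term memory."""
    long_term.update(short_term)
    short_term.clear()

accept("password")
accept("letmein")
commit()
print(accept("password"))   # already in long-term memory -> False
```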
<br />
The hashcat brain feature uses a client/server architecture. That means that the hashcat brain itself is actually a network server. I know, I know - you don't want any network sockets in your hashcat process? No problem, then disable the feature in the makefile by setting ENABLE_BRAIN=0 and it will be gone forever. <br />
<br />
It's a network server for a reason. This way we can run multiple hashcat clients ... all using the same hashcat brain. This is great for collaboration with many people involved - plus it stays alive after the client shuts down. (Note, however, that even if you want to only use brain functionality locally, you must run two separate instances of hashcat - one to be the brain server, and one to be the client and perform attacks).<br />
<br />
That's it from the technical perspective. It's hard to explain how much potential there is in this, and I'm wondering why I didn't invent this sooner. Maybe it took the Crack Me If You Can password-cracking challenge to realize that we need a feature like this.<br />
<br />
Before you try it out yourself, let me show you a few examples.<br />
<br />
<hr class="mycode_hr" />
<br />
Example 1: Duplicate candidates all around us<br />
<br />
<hr class="mycode_hr" />
<br />
There's no doubt that rule-based attacks are the greatest general purpose attack-modifier on an existing wordlist. But they have a little-known problem: They produce a lot of duplicate candidates. While this is not relevant for fast hashes, it has a large impact on slow hashes.<br />
<br />
In this example, we apply best64.rule to example.dict and write the result to test.txt:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat --stdout example.dict -r rules/best64.rule -o test.txt</blockquote>
<br />
Now we can see how many candidates were produced:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; cat test.txt | wc -l<br />
9888032</blockquote>
<br />
And now, let's see how many unique candidates are inside:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; sort -u test.txt | wc -l<br />
7508620</blockquote>
<br />
Of course, the wordlist and rules used have a large impact on the number of duplicates. In our example - a common wordlist and general purpose rule - the average ratio of produced dupes seems to be around 25%. And all of these dupes are detected by the brain:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z example0.hash example.dict -r rules/best64.rule<br />
...<br />
Rejected.........: 2379391/9888032 (24.06%)</blockquote>
<br />
Note:<br />
<ul class="mycode_list"><li>Hashcat brain rejects dynamically created duplicate candidates<br />
</li>
<li>Average dynamically created duplicate candidates is around 25%<br />
</li>
<li>Eliminating the duplicate 25% reduces the attack time by 25%<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Example 2: stop caring about what you've done in the past<br />
<br />
<hr class="mycode_hr" />
<br />
Think of this: you have a single hash, but it is very high profile. You can use all of your resources. You start cracking - nothing. You try a different attack - still nothing. You're frustrated, but you must continue. So you try more attacks ... but even after two or more days - nothing. You start wondering what you've already done, but you're starting to lose track, getting tired, and making mistakes. Guess what? The hashcat brain comes to the rescue! Here's an attack that you've tried:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc rockyou.txt<br />
...<br />
Time.Started.....: xxx (32 mins, 6 secs)</blockquote>
<br />
Note that the way you use hashcat doesn't change at all. The hash mode and attack mode can be replaced with anything you'd like. The only difference in your attack is the new -z option, which enables hashcat's brain "client" functionality. Using -z also automatically enables the "slow candidates" -S mode.<br />
<br />
Now let's say that two days later, you forgot that you already performed the attack before. Or maybe it wasn't you who forgot, it's just your coworker on a different machine also trying. This is what happens:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc rockyou.txt<br />
...<br />
Rejected.........: 14344384/14344384 (100.00%)<br />
Time.Started.....: xxx (15 secs)</blockquote>
<br />
The hashcat brain correctly rejected *all* of the candidates.<br />
<br />
Important things to note here:<br />
<ul class="mycode_list"><li>The rejected count exactly matches the keyspace.<br />
</li>
<li>The attack took a bit of time - it's not 0 seconds. The process is not completely without cost. The client must hash all of the candidates, and transfer them to the hashcat brain; the hashcat brain must then search for those candidates in both memory regions, and send back a reject list; and then hashcat must select new candidates to fill the reject gaps, and so on ...<br />
</li>
<li>Most important: 15 seconds is less than 32 minutes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Example 3: It's the candidates that matter, not the attack<br />
<br />
<hr class="mycode_hr" />
<br />
As I've stated above, it's not the command line that is stored somehow - it's not high level storage in this mode. This is where the hashcat brain server starts to create a strong advantage over manual (even organized) selection of attacks, because of the overlaps that naturally occur when carrying out a variety of attacks:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc -a 3 ?d?d?d?d<br />
...<br />
Rejected.........: 6359/10000 (63.59%)</blockquote>
<br />
So what happened here? It rejected 63.59% of a mask? Yes, it did. The reason is this:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; grep -c '^[0123456789]\{4\}&#36;' rockyou.txt<br />
6359</blockquote>
<br />
Notes:<br />
<ul class="mycode_list"><li>The previous command from the second example kicks in here. In the rockyou wordlist, we have 6359 pure digits with length 4 and the hashcat brain was able to reject them - because the mask ?d?d?d?d will also produce them<br />
</li>
<li>The hashcat brain does not care about your attack mode. Actually, you could say that the hashcat brain creates a kind of dynamic cross attack-mode while you are using it. As you can see here, attack-mode 0 and attack-mode 3 work together.<br />
</li>
<li>The hashcat brain does not end after hashcat finishes - it stays intact because it's a stand-alone process<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Example 4: Improve on what you've done in the past<br />
<br />
<hr class="mycode_hr" />
<br />
So you're out of ideas, and you start to run some simple brute-force. But you're clever, because you know the target tends to use the symbol "&#36;" somewhere inside the password, and you optimize your mask for this. Let's start with an example not using the hashcat brain:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36; ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (5 hours, 37 mins)<br />
Progress.........: 2565726409/2565726409 (100.00%)</blockquote>
<br />
Damn - it did not crack. But then your coworker shows up and tells you that he found out that the target isn't just using the "&#36;" symbol in his passwords, but also the "!" symbol. Damn, this makes your previous run (which took 5.5 hours) completely useless - wasted! You now need even more time for the correct run:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36;! ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (6 hours, 39 mins)<br />
Progress.........: 3010936384/3010936384 (100.00%)</blockquote>
<br />
Now we do the same again, but with hashcat brain enabled. All of the work of that first command will no longer be wasted. The same commandline history, but this time with hashcat brain enabled, looks like this:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36; ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (5 hours, 37 mins)</blockquote>
<br />
But now, if we add the "!" character, we see the difference:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 6211 hashcat_ripemd160_aes.tc -a 3 -1 ?l?d&#36;! ?1?1?1?1?1?1<br />
...<br />
Time.Started.....: xxx (1 hour, 5 mins)</blockquote>
<br />
So you can see here how the hashcat brain helps you to reduce the time for the second attack, from ~6 hours to ~1 hour.<br />
<br />
<hr class="mycode_hr" />
<br />
Example 5: The resurrection of the random rules<br />
<br />
<hr class="mycode_hr" />
<br />
Random rules and salts? No way! Take a look at this, it's horrible:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; cat wordlist.txt<br />
password<br />
&#36; ./hashcat wordlist.txt --stdout -g 100000 | sort -u | wc -l<br />
20473</blockquote>
<br />
What I'm trying to show here is how inefficient the random rules actually are (and always have been). They produce tons of duplicate work.<br />
<br />
As you can see from the above example, only 20473 of the 100000 random candidates produced are unique - the remaining 80% are just wasted time.<br />
<br />
I cannot believe that I've never thought about this in detail, but now the hashcat brain brings this to an end:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>./hashcat -z hashlist.txt wordlist.txt -g 100000<br />
...<br />
Rejected.........: 82093/100000 (82.09%)</blockquote>
<br />
This alone gives -g a new role in password cracking. If you've ever attended a password cracking contest, you know how important it is to find the patterns that were used to generate the password candidates. Because finding new patterns using the combination of random-rules and debug-rules is a very efficient way to find new attack vectors.<br />
<br />
For example, Team Hashcat managed to crack 188k/300k of the SSHA hashlist from the 2018 CMIYC contest - a strong showing. But with random rules, there's a really good chance that you'll discover what you missed. Here's an example of an attack I ran for only a few minutes while writing this document:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>&#36; ./hashcat -z -m 111 c0_111.list.txt wordlist.txt -g 100000 --debug-mode 4<br />
...<br />
INFO: Removed 188292 hashes found in potfile.<br />
...<br />
time:Z4 R3:tim2eeee<br />
sexual:Y3 Z5 O35<img src="https://hashcat.net/forum/images/smilies/confused.gif" alt="Confused" title="Confused" class="smilie smilie_13" />exllllll<br />
poodle:Y2 T3 sBh:pooDlele<br />
pass123:C z5:ppppppASS123<br />
pool:y4 Z2 Y1:poolpoollll<br />
profit:o8F ^_:_profit<br />
smashing:Z3<img src="https://hashcat.net/forum/images/smilies/confused.gif" alt="Confused" title="Confused" class="smilie smilie_13" />mashingggg</blockquote>
<br />
These are real passwords that Team Hashcat didn't crack during the contest. What matters here is that you can see hints for possible patterns - which counts much more than just cracking a single password. And if you run the exact same command again, hashcat will generate different rules and you get more cracks, and discover more new patterns. You can do this again and again. We call this technique "raking".<br />
<br />
Note: It can occur that a pattern discovered from random rules matches an already known pattern. In such a case, it's a strong sign that this pattern may have been searched already, but has not yet been searched exhaustively. Perhaps a previous attack was stopped too early. But with the hashcat brain, that's no longer important - we can just apply the pattern without any worry about creating double work.<br />
<br />
<hr class="mycode_hr" />
<br />
The costs of hashcat brain<br />
<br />
<hr class="mycode_hr" />
<br />
It should now be clear what the potential is here. There are many other examples where this feature really kicks in, but I'm sure you already have your own ideas.<br />
<br />
Of course, the hashcat brain does not come for free - there are limitations. It's important to know some key numbers to decide when to use it (and when not to).<br />
<br />
Each password candidate creates a hash of 8 bytes that has to be transferred, looked up and stored in the hashcat brain. This brings us to the first question: What kind of hardware do you need? Fortunately, this is pretty easy to calculate. If you have a server with 64 GB of physical memory, then you can store 8,000,000,000 candidates. I guess that's the typical size of every serious password cracker's wordlist; if you have more, you typically have too much trash in your wordlists. If you have less, then you just haven't been collecting them long enough.<br />
<br />
So let's assume a candidate list size of 8,000,000,000. That doesn't sound like too much - especially if you want to work with rules and masks. It should be clear that using the hashcat brain against a raw MD5 is not very efficient. But now things become interesting, because of some unexpected effects that kick in.<br />
<br />
Imagine you have a salted MD5 list - let's say VBULL, which is a fast hash (not a slow hash) - and you have many of them. In this case, each of the salts starts to work for us.<br />
<br />
Yes, you read that right - the more salts, the better!!<br />
<br />
Let's continue with our calculation and our 8,000,000,000 password example. The speed of a typical VBULL on a Vega64 is 2170.6 MH/s. If we have 300,000 salts, the speed drops to 7235 H/s. Feeding the hashcat brain at a rate of 7235 H/s, it will take you 1,105,736 seconds (or about 12 days) to fill that memory. That means you can run the hashcat brain for 12 days. It's an OK time I think, though I don't let many attacks run for such a long time. Also, this is an inexpensive server with 64GB of physical RAM, and you could simply add more RAM, right? At this point we should also consider using swap memory. I think there's actually room for that - but I leave testing this to our users.<br />
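The arithmetic above is easy to check with a few lines of Python (the figures come straight from the text; the variable names are mine):<br />
<br />

```python
# Back-of-envelope check of the brain capacity numbers from the text.
ram_bytes = 64 * 10**9            # 64 GB brain server
entry_bytes = 8                   # one 8-byte candidate hash per entry
capacity = ram_bytes // entry_bytes          # 8,000,000,000 candidates

single_salt_speed = 2_170_600_000            # ~2170.6 MH/s, VBULL on a Vega64
salts = 300_000
effective = single_salt_speed // salts       # 7235 H/s fed to the brain

seconds = capacity / effective
days = seconds / 86400                       # ~12.8 days until memory is full
```

<br />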
<br />
Lookup times are pretty good. The hashcat brain uses two binary trees, which means that the more hashes that are added, the more efficient it becomes. Of course, the lookup times will increase drastically in the first moments, but will stabilize at some point. Note that we typically do not compare just one entry vs. millions of entries - we compare hundreds of thousands of entries vs. millions of entries.<br />
<br />
<hr class="mycode_hr" />
<br />
Technical details on the hashcat brain server<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>The hashcat brain server saves the long-term memory to disk every 5 minutes automatically<br />
</li>
<li>The server also saves the long-term memory if the hashcat brain server is killed using Ctrl-C<br />
</li>
<li>There's no mitigation against database poisoning - this would cost too many resources<br />
</li>
<li>There's currently no mitigation against an evil client requesting the server to allocate too much memory<br />
</li>
<li>Make sure your hashcat brain server is protected with a good password, because you have to trust your clients<br />
</li>
<li>I'll add a standalone hashcat brain seeding tool later which enables you to easily push all the words from an entire wordlist or a mask very fast. At this time you can use the --brain-session option to do so with hashcat itself<br />
</li>
<li>You can use --brain-session-whitelist in order to force the clients to use a specific hashlist<br />
</li>
<li>The protocol used is pretty simple and does not contain hashcat specific information, which should make it possible for other cracking tools to utilize the server, too<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical details on the hashcat brain client<br />
<br />
<hr class="mycode_hr" />
<ul class="mycode_list"><li>The client calculates the hashcat brain session based on the hashlist entries, to efficiently let a high number of salts work for us. You can override the session calculated with --brain-session, which makes sense if you want to use a fast hash in order to "seed" the hashcat brain with already-tried wordlists or masks.<br />
</li>
<li>The use of --remove is forbidden, but this should not really be a problem, since the potfile will do the same for you. Make sure to remove --potfile-disable in case you use it.<br />
</li>
<li>If multiple clients use the same attack on the same hashcat brain (which is a clever idea), you end up with a distributed solution - without the need of an overlay for keyspace distribution. This is not the intended use of the hashcat brain and should not be used as such. I'll explain later.<br />
</li>
<li>Since each password candidate is creating a hash of 8 bytes, some serious network upstream traffic can be generated from your client. I'll explain later.<br />
</li>
<li>The use of xxHash as the hash is not required; we could exchange it for whatever hash we want. So far, however, it's doing a great job.<br />
</li>
</ul>
The status view was updated to give you some real-time statistics about the network usage:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>Speed.#1.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#2.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#3.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#4.........:        0 H/s (0.00ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
...<br />
Brain.Link.#1....: RX: 0 B (0.00 Mbps), TX: 4.1 MB (1.22 Mbps), sending<br />
Brain.Link.#2....: RX: 0 B (0.00 Mbps), TX: 4.7 MB (1.09 Mbps), sending<br />
Brain.Link.#3....: RX: 0 B (0.00 Mbps), TX: 3.5 MB (0.88 Mbps), sending<br />
Brain.Link.#4....: RX: 0 B (0.00 Mbps), TX: 4.1 MB (0.69 Mbps), sending</blockquote>
<br />
While the data is being transferred, there's no cracking - you can see it's doing 0 H/s. But if you have a slow hash, or a fast hash with multiple salts, this time can be seen as minor overhead. The major time taken is still in the cracking phase. So if you have a fast hash, the more salts the better! As soon as hashcat is done with the network communication, it starts working as usual:<br />
<br />
<blockquote class="mycode_quote"><cite>Quote:</cite>Speed.#1.........:   869.1 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#2.........:   870.8 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#3.........:   876.2 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
Speed.#4.........:   872.6 MH/s (1.36ms) @ Accel:64 Loops:1 Thr:1024 Vec:1<br />
...<br />
Brain.Link.#1....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle<br />
Brain.Link.#2....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle<br />
Brain.Link.#3....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle<br />
Brain.Link.#4....: RX: 1.3 MB (0.00 Mbps), TX: 10.5 MB (0.00 Mbps), idle</blockquote>
<br />
<hr class="mycode_hr" />
<br />
The brain and the bottlenecks<br />
<br />
<hr class="mycode_hr" />
<br />
While working with Team Hashcat to test how the brain performs with large numbers of clients and over the Internet, I learned about some serious bottlenecks.<br />
<br />
The most important insight was about the performance of lookups against the brain. That should be obvious solely from the huge amount of data that we're talking about here, but the brain does not just have to look up millions of candidates against millions of existing database entries - it must also insert them into the database after each commit, and ensure the ordering stays intact (otherwise it would break the binary tree). This simply takes time, even though the lookup process was already threaded. But the feature was so promising that I did not want to abandon development just because of the performance challenge.<br />
<br />
But to start from the beginning, keep the following number in mind: 50kH/s<br />
<br />
This was the maximum performance of the hashcat brain after the first development alpha was finished. In other words, if the performance of your attack was faster than this speed, the hashcat brain becomes the bottleneck. Now there's good and bad news about this:<br />
<br />
Bad: This is the total number. Which means, the entire network of all GPUs participating as clients cannot create more than 50kH/s before the bottleneck effect kicks in.<br />
<br />
Good: Salts come to the rescue. If you have a large salted hashlist - with, for example 300,000 SSHA1 hashes (as in the last Crack Me If You Can) - this means that the real maximum performance that the brain can handle jumps to 15 GH/s. (You can simply multiply the 50kH/s with the number of unique salts of your hashlist.)<br />
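As a quick sanity check of that multiplication (figures from the text above):<br />
<br />

```python
# 50 kH/s brain limit, scaled by the number of unique salts in the hashlist:
brain_limit = 50_000      # H/s the first brain alpha could sustain in total
salts = 300_000           # e.g. the SSHA1 list from Crack Me If You Can
effective_limit = brain_limit * salts   # 15,000,000,000 H/s = 15 GH/s
```

<br />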
<br />
Then there's another bottleneck: the network bandwidth required. For those of you who plan to use the brain inside a local network with 100Mbit, you can skip this section entirely. But for those who plan to use the brain in a large group, over VPN or in general over the Internet, keep in mind that a single GPU can create around 5Mbit/s of upstream before bandwidth becomes a bottleneck. That doesn't mean that a hashcat client will stop working - it will just reduce your theoretical maximum cracking performance.<br />
<br />
Both of these lessons learned <span style="text-decoration: underline;" class="mycode_u"><span style="font-weight: bold;" class="mycode_b">led to an upgrade</span></span> to the brain during development. (This means that everything that you've read up to this point is already outdated!) The bottlenecks still exist, but there's kind of a mitigation to them. To better understand what we mean when talking about how to mitigate the problem, we need new terminology in the hashcat brain universe - something we'll call brain client "features".<br />
<br />
When running as a client, hashcat now has a new parameter called --brain-client-features. With this parameter, you can select from two features (so far) that the client has to offer:<br />
<ul class="mycode_list"><li>Brain "Hashes" feature<br />
</li>
<li>Brain "Attacks" feature<br />
</li>
</ul>
The brain "hashes" feature is everything that we've explained from the beginning - the *low-level* function of the brain. The brain "attacks" feature is the *high-level* strategy which I added to mitigate the bottlenecks. By default, both "features" are active, and run in parallel. Depending on your use case, you can selectively enable or disable either one.<br />
<br />
The brain "attack" feature should be explained in more detail in order to understand what it is doing. It is a high-level approach, or a compressed hint. Hashcat clients request this "hint" from the brain about a given attack as soon as the client is assigned a new work package from the local hashcat dispatcher. For example, if you have a system with 4 GPUs, the local hashcat dispatcher is responsible for distributing the workload across the local GPUs. What's new is that before a GPU starts actually working on the package, it asks the brain for a high-level confirmation of whether or not to proceed. The process of how this works is basically the same as with the low-level architecture: the client "reserves" a package when the hashcat brain moves it to short-term memory - and once it is done, it is moved to long-term memory.<br />
<br />
The attack package itself is another 8-byte checksum - but that's more than enough to assign all feasible combinations of attacks a unique identifier. For example, hashcat takes options like the attack mode itself, rules with -r (but also -j and -k rules), masks, user-defined custom charset, Markov options, a checksum of the wordlists (if used) and so on. All of these options are combined in a repeatable way, and from that unique combination of options, a checksum is created that uniquely "fingerprints" all of the components of the attack.<br />
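As a rough illustration of this fingerprinting idea - not hashcat's actual encoding, which is internal; the function and parameter names here are my own assumptions - one could fold the attack-defining options into a single 8-byte checksum like this:<br />
<br />

```python
import hashlib

def attack_checksum(attack_mode, mask=None, wordlists=(), rules=(), charsets=()):
    """Illustrative sketch: fold all attack-defining options into one 8-byte ID.
    Uses a stdlib digest as a stand-in for hashcat's own checksum."""
    h = hashlib.blake2b(digest_size=8)
    h.update(str(attack_mode).encode())
    if mask:
        h.update(mask.encode())
    for path in wordlists:
        # Hash the wordlist *contents*, so a renamed file still matches.
        with open(path, "rb") as f:
            h.update(hashlib.blake2b(f.read(), digest_size=8).digest())
    for r in rules:
        h.update(r.encode())
    for c in charsets:
        h.update(c.encode())
    return h.digest()

a = attack_checksum(3, mask="?d?d?d?d")
b = attack_checksum(3, mask="?d?d?d?d")  # same options -> same fingerprint
c = attack_checksum(3, mask="?l?l?l?l")  # different mask -> different one
```

<br />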
<br />
When the clients connect to the hashcat brain, they send this attack checksum (along with the session ID) to the brain, so that the brain knows precisely which attack is running on a particular hashcat client. Now, if the local dispatcher creates a new package, the local start point and end point of this attack are sent to the brain so that the brain can track it. The client will automatically reject an entire package - for example, an entire wordlist, or an entire wordlist plus a specific list of rules - if the attack has some overlaps. This is done *before* the client sends any password candidate hashes to the brain. This means that if a package is rejected:<br />
<ul class="mycode_list"><li>The client doesn't need to transfer the hashes (which mitigates the bandwidth bottleneck)<br />
</li>
<li>The brain server doesn't need to compare it (which mitigates the lookup bottleneck)<br />
</li>
</ul>
If the attack package itself is not rejected, the hashes are still sent to the brain and compared.<br />
<br />
The hashcat brain is kind of clever when it comes to the packages. It recognizes overlapping packages on a low level - in cases where only part of one package overlaps with another package. When this occurs, the brain only rejects the overlapping section of the package and informs the client about that. It is then up to the client to decide whether it wants to launch the attack with a minimized package size, or to ask the local dispatcher for another (smaller) portion to fill the gap. Of course, this newly created portion is also first sent to the brain, in case it can be rejected. The entire process is packed into a loop, and it repeats until the client decides that the package is big enough (the default threshold for accepting a package and starting execution is half of the original package size).<br />
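The partial-overlap handling can be modeled as simple interval subtraction over keyspace offsets. This sketch is my own illustration of the idea, not hashcat's implementation:<br />
<br />

```python
def reject_overlap(package, committed):
    """Given a requested package (start, stop) for a known attack checksum,
    subtract the ranges already committed and return what is left to run."""
    start, stop = package
    remaining = [(start, stop)]
    for c_start, c_stop in committed:
        next_remaining = []
        for s, e in remaining:
            if c_stop <= s or c_start >= e:
                next_remaining.append((s, e))        # no overlap, keep whole
                continue
            if s < c_start:
                next_remaining.append((s, c_start))  # keep left part
            if c_stop < e:
                next_remaining.append((c_stop, e))   # keep right part
        remaining = next_remaining
    return remaining

# Package 1000..3000 requested; 1500..2500 was already done elsewhere:
print(reject_overlap((1000, 3000), [(1500, 2500)]))
# -> [(1000, 1500), (2500, 3000)]
```

<br />
The client would then either run the shrunken ranges as-is, or ask the dispatcher for more work to fill the gap - which is the loop described above.<br />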
<br />
Something I realized - after I had already finished with the implementation of the high-level feature - was that the new brain "attack" feature is a very strong feature for standalone use. By setting --brain-client-features 2, you tell the client to only use the attack feature. This completely eliminates all bottlenecks - the network bandwidth, but even more importantly, the lookup bottleneck. The drawback is that you lose cross-attack functionality.<br />
<br />
If you think that this new feature is a nice way to get a native hashcat multi-system distribution ... you are wrong. The brain client still requires running in -S mode, which means that this is all about slow hashes or fast hashes with many salts. There's also no wordlist distribution, and most importantly, there's no distribution of cracked hashes across all network clients. So the brain "attack" feature is not meant to be an alternative to existing distribution solutions, but just as a mitigation for the bottlenecks (and it works exactly as such).<br />
<br />
<hr class="mycode_hr" />
<br />
Commandline Options<br />
<br />
<hr class="mycode_hr" />
<br />
Most of the options are self-explanatory. I'm just adding them here to inform you which ones exist:<br />
<ul class="mycode_list"><li>Add new option --brain-server to start a hashcat brain server<br />
</li>
<li>Add new option --brain-client to start a hashcat brain client, automatically activates --slow-candidates<br />
</li>
<li>Add new option --brain-host and --brain-port to specify ip and port of brain server, both listening and connecting<br />
</li>
<li>Add new option --brain-session to override automatically calculated brain session ID<br />
</li>
<li>Add new option --brain-session-whitelist to allow only explicitly whitelisted session IDs on the brain server<br />
</li>
<li>Add new option --brain-password to specify the brain server authentication password<br />
</li>
<li>Add new option --brain-client-features which allows enabling and disabling certain features of the hashcat brain<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Final words<br />
<br />
<hr class="mycode_hr" />
<br />
The hashcat brain development is now in its second month, and from what I've seen so far, it works well enough to release it. In all that time, I've also improved the maximum performance of the low-level hash lookup from 50kH/s to roughly 650kH/s (depending on which type of hardware the brain server is running on).<br />
<br />
Of course, it's possible that there are some bugs that I did not detect while developing this monster, and it's simply too much code to verify all of the different possible behavior in different situations. Please let me know if you find any strange behavior, and I'll try to fix it as quickly as possible.<br />
<br />
This feature, in my opinion, will significantly alter the workflow of someone who is doing serious cracking on a daily basis. It's not just the time saving effect - it's mostly the confidence in your own work. This confidence is fed by two factors: first, that you know the brain will rule out duplicated work (when you simply didn't have enough time to track all of the details when running different attacks), and second, that you get immediate visible feedback when your attacks overlap and you're duplicating work. When you see that a given attack is producing a 20% or higher reject rate, it will give you a better understanding of what type of work is actually being performed by your hardware. This gives you deeper insight, and the chance to update and improve your own attack strategies at a high level.<br />
<br />
-- atom<br />
<br />
PS: If you build from sources, do not forget to run "git submodule update --init" to get the xxHash headers]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v4.2.0]]></title>
			<link>https://hashcat.net/forum/thread-7711.html</link>
			<pubDate>Thu, 02 Aug 2018 20:24:13 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-7711.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v4.2.0! <br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a> <br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms and fixing bugs:<br />
<ul class="mycode_list"><li>Added hash-mode 16700 = FileVault 2<br />
</li>
<li>Added hash-mode 16800 = WPA-PMKID-PBKDF2<br />
</li>
<li>Added hash-mode 16801 = WPA-PMKID-PMK<br />
</li>
<li>Added hash-mode 16900 = Ansible Vault<br />
</li>
</ul>
<br />
Thanks to @hops_ch for contributing the Ansible Vault mode!<br />
<br />
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Added JtR-compatible support for hex notation in the rules engine<br />
</li>
<li>Added OpenCL device utilization to the status information in machine-readable output<br />
</li>
<li>Added missing NV Tesla and Titan GPU details to tuning database<br />
</li>
<li>General file handling: Abort if a byte-order mark (BOM) is detected in a wordlist, hashlist, maskfile or rulefile<br />
</li>
<li>HCCAPX management: Use advanced hints in message_pair stored by hcxtools about endian bitness of replay counter<br />
</li>
<li>OpenCL kernels: Abort session if kernel self-test fails<br />
</li>
<li>OpenCL kernels: Add '-pure' prefix to kernel filenames to avoid problems caused by reusing existing hashcat installation folder<br />
</li>
<li>OpenCL kernels: Removed the use of 'volatile' keyword in inline assembly instructions where it is not needed<br />
</li>
<li>OpenCL kernels: Switched array pointer types in function declarations in order to be compatible with OpenCL 2.0<br />
</li>
<li>Refactored code for --progress-only and --speed-only calculation<br />
</li>
<li>SIP cracking: Increased the nonce field to allow a salt of 1024 bytes<br />
</li>
<li>TrueCrypt/VeraCrypt cracking: Do an entropy check on the TC/VC header on start<br />
</li>
</ul>
Notes:<br />
<ul class="mycode_list"><li>The removal of the 'volatile' keyword has a large positive impact on cracking performance on macOS<br />
</li>
<li>The refactored code for --progress-only is important if hashcat is used in combination with a distributed overlay such as hashtopolis<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a function declaration attribute in -m 8900 kernel that led to unusable -m 9300 (which shares kernel code with -m 8900)<br />
</li>
<li>Fixed a miscalculation in --progress-only mode output for extremely slow kernels like -m 14800<br />
</li>
<li>Fixed a missing check for errors on OpenCL devices leading to invalid removal of the restore file<br />
</li>
<li>Fixed a missing kernel in -m 5600 in combination with -a 3 and -O if mask is &gt;= 16 characters<br />
</li>
<li>Fixed detection of AMD_GCN version when the rocm driver is used<br />
</li>
<li>Fixed missing code section in -m 2500 and -m 2501 to crack corrupted handshakes with a little-endian (LE) base<br />
</li>
<li>Fixed a missing check for hash-modes using OPTS_TYPE_PT_UPPER that caused the self-test to fail in combinator and hybrid modes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v4.2.0! <br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a> <br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms and fixing bugs:<br />
<ul class="mycode_list"><li>Added hash-mode 16700 = FileVault 2<br />
</li>
<li>Added hash-mode 16800 = WPA-PMKID-PBKDF2<br />
</li>
<li>Added hash-mode 16801 = WPA-PMKID-PMK<br />
</li>
<li>Added hash-mode 16900 = Ansible Vault<br />
</li>
</ul>
<br />
Thanks to @hops_ch for contributing the Ansible Vault mode!<br />
<br />
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Added JtR-compatible support for hex notation in the rules engine<br />
</li>
<li>Added OpenCL device utilization to the status information in machine-readable output<br />
</li>
<li>Added missing NV Tesla and Titan GPU details to tuning database<br />
</li>
<li>General file handling: Abort if a byte-order mark (BOM) is detected in a wordlist, hashlist, maskfile or rulefile<br />
</li>
<li>HCCAPX management: Use the advanced hints about the endianness of the replay counter that hcxtools stores in message_pair<br />
</li>
<li>OpenCL kernels: Abort session if kernel self-test fails<br />
</li>
<li>OpenCL kernels: Add '-pure' prefix to kernel filenames to avoid problems caused by reusing existing hashcat installation folder<br />
</li>
<li>OpenCL kernels: Removed the use of 'volatile' keyword in inline assembly instructions where it is not needed<br />
</li>
<li>OpenCL kernels: Switched array pointer types in function declarations in order to be compatible with OpenCL 2.0<br />
</li>
<li>Refactored code for --progress-only and --speed-only calculation<br />
</li>
<li>SIP cracking: Increased the nonce field to allow a salt of 1024 bytes<br />
</li>
<li>TrueCrypt/VeraCrypt cracking: Do an entropy check on the TC/VC header on start<br />
</li>
</ul>
Notes:<br />
<ul class="mycode_list"><li>The removal of the 'volatile' keyword has a large positive impact on cracking performance on macOS<br />
</li>
<li>The refactored code for --progress-only is important if hashcat is used in combination with a distributed overlay such as hashtopolis<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a function declaration attribute in -m 8900 kernel that led to unusable -m 9300 (which shares kernel code with -m 8900)<br />
</li>
<li>Fixed a miscalculation in --progress-only mode output for extremely slow kernels like -m 14800<br />
</li>
<li>Fixed a missing check for errors on OpenCL devices leading to invalid removal of the restore file<br />
</li>
<li>Fixed a missing kernel in -m 5600 in combination with -a 3 and -O if mask is &gt;= 16 characters<br />
</li>
<li>Fixed detection of AMD_GCN version when the rocm driver is used<br />
</li>
<li>Fixed missing code section in -m 2500 and -m 2501 to crack corrupted handshakes with a little-endian (LE) base<br />
</li>
<li>Fixed a missing check for hash-modes using OPTS_TYPE_PT_UPPER that caused the self-test to fail in combinator and hybrid modes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v4.1.0]]></title>
			<link>https://hashcat.net/forum/thread-7317.html</link>
			<pubDate>Wed, 21 Feb 2018 12:28:07 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-7317.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v4.1.0! <br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a> <br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 16000 = Tripcode<br />
</li>
<li>Added hash-mode 16100 = TACACS+<br />
</li>
<li>Added hash-mode 16200 = Apple Secure Notes<br />
</li>
<li>Added hash-mode 16300 = Ethereum Pre-Sale Wallet, PBKDF2-SHA256<br />
</li>
<li>Added hash-mode 16400 = CRAM-MD5 Dovecot<br />
</li>
<li>Added hash-mode 16500 = JWT (JSON Web Token)<br />
</li>
<li>Added hash-mode 16600 = Electrum Wallet (Salt-Type 1-3)<br />
</li>
</ul>
A special note on cracking TACACS+: <a href="https://hashcat.net/forum/thread-7062.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-7062.html</a><br />
<br />
<hr class="mycode_hr" />
<br />
But there are also some deep changes related to performance:<br />
<ul class="mycode_list"><li>A new technique to reduce PCIe transfer time by using so-called "compression" kernels<br />
</li>
<li>The OpenCL kernel thread management was refactored, giving a strong boost on PBKDF2 based kernels (WPA, etc)<br />
</li>
<li>Improved autotune support<br />
</li>
<li>Improved OpenCL JiT compiler settings<br />
</li>
<li>Workaround for some bad OpenCL runtime settings on macOS<br />
</li>
</ul>
Technical details on the new compression kernels: <a href="https://hashcat.net/forum/thread-7267.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-7267.html</a><br />
<br />
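The core idea of the compression kernels, as far as the linked write-up describes it, can be sketched as host-side packing: instead of transferring one fixed-size slot per candidate over PCIe, the host packs variable-length candidates into one contiguous blob plus an offset table, and a small kernel unpacks them on the device. The struct and function names below are illustrative assumptions, not hashcat's actual buffer layout:

```c
#include <stdint.h>
#include <string.h>

#define FIXED_SLOT 256                 /* one fixed-size slot per candidate */

/* Illustrative index entry: where a packed candidate starts and how long it is */
typedef struct { uint32_t off; uint32_t len; } pw_idx_t;

/* Pack n candidates into blob, writing one index entry each.
 * Returns the total packed byte count - typically far fewer bytes than
 * n * FIXED_SLOT, which is what cuts the PCIe transfer time. */
static uint32_t pack_candidates (const char **words, uint32_t n,
                                 uint8_t *blob, pw_idx_t *idx)
{
  uint32_t off = 0;

  for (uint32_t i = 0; i < n; i++)
  {
    uint32_t len = (uint32_t) strlen (words[i]);

    memcpy (blob + off, words[i], len);

    idx[i].off = off;                  /* device-side kernel uses off/len */
    idx[i].len = len;                  /* to reconstruct each candidate   */

    off += len;
  }

  return off;                          /* bytes actually sent over the bus */
}
```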
<hr class="mycode_hr" />
<br />
Full benchmark comparison from v4.0.1 to v4.1.0 for selected (most common) algorithms: <br />
<br />
<a href="https://docs.google.com/spreadsheets/d/1upyyRCEpnfmpv5QTMW0UDlXUgLuy5zlGsgDD_XykXTE/edit?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://docs.google.com/spreadsheets/d/1...sp=sharing</a><br />
<br />
Both NVIDIA and AMD users will see performance improvements in almost all hash modes and in all attack modes.<br />
<br />
We've also spent some time on CPU performance improvements. See the tabs for Intel i7 and AMD Ryzen for details.<br />
<br />
<hr class="mycode_hr" />
<br />
New Features:<br />
<ul class="mycode_list"><li>Added option --benchmark-all to benchmark all hash-modes (not just the default selection)<br />
</li>
<li>Removed option --gpu-temp-retain that tried to retain GPU temperature at X degrees celsius - please use driver-specific tools<br />
</li>
<li>Removed option --powertune-enable to enable power tuning - please use driver specific tools<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>OpenCL Kernels: Add a decompressing kernel and a compressing host code in order to reduce PCIe transfer time<br />
</li>
<li>OpenCL Kernels: Improve performance preview accuracy in --benchmark, --speed-only and --progress-only mode<br />
</li>
<li>OpenCL Kernels: Remove password length restriction of 16 for Cisco-PIX and Cisco-ASA hashes<br />
</li>
<li>Terminal: Display set cost/rounds during benchmarking<br />
</li>
<li>Terminal: Show [r]esume in prompt only in pause mode, and show [p]ause in prompt only in resume mode<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a configuration setting for -m 400 in pure kernel mode which said it was capable of doing SIMD when it is not<br />
</li>
<li>Fixed a hash parsing problem for 7-Zip hashes: allow a longer CRC32 data length field within the hash format<br />
</li>
<li>Fixed a hash parsing problem when using --show/--left with hashes with long salts that required pure kernels<br />
</li>
<li>Fixed a logic error in storing temporary progress for slow hashes, leading to invalid speeds in status view<br />
</li>
<li>Fixed a mask-length check issue: return -1 in case the mask length is not within the password-length range<br />
</li>
<li>Fixed a missing check for return code in case hashcat.hcstat2 was not found<br />
</li>
<li>Fixed a race condition in combinator- and hybrid-mode where the same scratch buffer was used by multiple threads<br />
</li>
<li>Fixed a restore issue leading to "Restore value is greater than keyspace" when mask files or wordlist folders were used<br />
</li>
<li>Fixed an uninitialized value in OpenCL kernels 9720, 9820 and 10420 leading to absurd benchmark performance<br />
</li>
<li>Fixed the maximum password length check in password-reassembling function<br />
</li>
<li>Fixed the output of --show when &#36;HEX[] passwords were present within the potfile<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Autotune: Improve autotune engine logic and synchronize results on same OpenCL devices<br />
</li>
<li>Documents: Added docs/limits.txt<br />
</li>
<li>Files: Copy include/ folder and its content when SHARED is set to 1 in Makefile<br />
</li>
<li>Files: Switched back to relative current working directory on Windows to work around problems with Unicode characters<br />
</li>
<li>Hashcat Context: Fixed a memory leak in shutdown phase<br />
</li>
<li>Hash Parser: Changed the way large strings are handled/truncated within the event buffer if they are too large to fit<br />
</li>
<li>Hash Parser: Fixed a memory leak in shutdown phase<br />
</li>
<li>Hash Parser: Fixed the use of strtok_r () calls<br />
</li>
<li>OpenCL Devices: Fixed several memory leaks in shutdown phase<br />
</li>
<li>OpenCL Kernels: Add general function declaration keyword (inline) and some OpenCL runtime specific exceptions for NV and CPU devices<br />
</li>
<li>OpenCL Kernels: Replace variables from uXX to uXXa if used in __constant space<br />
</li>
<li>OpenCL Kernels: Use a special kernel to initialize the password buffer used during autotune measurements, to reduce startup time<br />
</li>
<li>OpenCL Kernels: Refactored kernel thread management from native to maximum per kernel<br />
</li>
<li>OpenCL Kernels: Use three separate comparison kernels (depending on keyver) for WPA instead of one<br />
</li>
<li>OpenCL Runtime: Add current timestamp to OpenCL kernel source in order to force OpenCL JiT compiler to not use the cache<br />
</li>
<li>OpenCL Runtime: Enforce use of OpenCL version 1.2 to prevent OpenCL runtimes from making use of the __generic address space qualifier<br />
</li>
<li>OpenCL Runtime: Updated rocm detection<br />
</li>
<li>Returncode: Enforce return code 0 when the user selects --speed-only or --progress-only and no other error occurs<br />
</li>
<li>Rules: Fixed some default rule-files after changing rule meaning of 'x' to 'O'<br />
</li>
<li>Self Test: Skip self-test for mode 8900 - user-configurable scrypt settings are incompatible with fixed settings in the self-test hash<br />
</li>
<li>Self Test: Skip self-test for mode 15700 because the settings are too high and cause startup times that are too long<br />
</li>
<li>Terminal: Add workitem settings to status display (can be handy for debugging)<br />
</li>
<li>Terminal: Send clear-line code to the same output stream as the message immediately following<br />
</li>
<li>Timer: Switch from gettimeofday() to clock_gettime() to work around problems on cygwin<br />
</li>
<li>User Options: According to getopts manpage, the last element of the option array has to be filled with zeros<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v4.1.0! <br />
<br />
Download binaries or sources: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a> <br />
<br />
<hr class="mycode_hr" />
<br />
This release is mostly about expanding support for new algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 16000 = Tripcode<br />
</li>
<li>Added hash-mode 16100 = TACACS+<br />
</li>
<li>Added hash-mode 16200 = Apple Secure Notes<br />
</li>
<li>Added hash-mode 16300 = Ethereum Pre-Sale Wallet, PBKDF2-SHA256<br />
</li>
<li>Added hash-mode 16400 = CRAM-MD5 Dovecot<br />
</li>
<li>Added hash-mode 16500 = JWT (JSON Web Token)<br />
</li>
<li>Added hash-mode 16600 = Electrum Wallet (Salt-Type 1-3)<br />
</li>
</ul>
A special note on cracking TACACS+: <a href="https://hashcat.net/forum/thread-7062.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-7062.html</a><br />
<br />
<hr class="mycode_hr" />
<br />
But there are also some deep changes related to performance:<br />
<ul class="mycode_list"><li>A new technique to reduce PCIe transfer time by using so-called "compression" kernels<br />
</li>
<li>The OpenCL kernel thread management was refactored, giving a strong boost on PBKDF2 based kernels (WPA, etc)<br />
</li>
<li>Improved autotune support<br />
</li>
<li>Improved OpenCL JiT compiler settings<br />
</li>
<li>Workaround for some bad OpenCL runtime settings on macOS<br />
</li>
</ul>
Technical details on the new compression kernels: <a href="https://hashcat.net/forum/thread-7267.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-7267.html</a><br />
<br />
<hr class="mycode_hr" />
<br />
Full benchmark comparison from v4.0.1 to v4.1.0 for selected (most common) algorithms: <br />
<br />
<a href="https://docs.google.com/spreadsheets/d/1upyyRCEpnfmpv5QTMW0UDlXUgLuy5zlGsgDD_XykXTE/edit?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://docs.google.com/spreadsheets/d/1...sp=sharing</a><br />
<br />
Both NVIDIA and AMD users will see performance improvements in almost all hash modes and in all attack modes.<br />
<br />
We've also spent some time on CPU performance improvements. See the tabs for Intel i7 and AMD Ryzen for details.<br />
<br />
<hr class="mycode_hr" />
<br />
New Features:<br />
<ul class="mycode_list"><li>Added option --benchmark-all to benchmark all hash-modes (not just the default selection)<br />
</li>
<li>Removed option --gpu-temp-retain that tried to retain GPU temperature at X degrees Celsius - please use driver-specific tools<br />
</li>
<li>Removed option --powertune-enable to enable power tuning - please use driver-specific tools<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>OpenCL Kernels: Add a decompressing kernel and a compressing host code in order to reduce PCIe transfer time<br />
</li>
<li>OpenCL Kernels: Improve performance preview accuracy in --benchmark, --speed-only and --progress-only mode<br />
</li>
<li>OpenCL Kernels: Remove password length restriction of 16 for Cisco-PIX and Cisco-ASA hashes<br />
</li>
<li>Terminal: Display set cost/rounds during benchmarking<br />
</li>
<li>Terminal: Show [r]esume in prompt only in pause mode, and show [p]ause in prompt only in resume mode<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a configuration setting for -m 400 in pure kernel mode which said it was capable of doing SIMD when it is not<br />
</li>
<li>Fixed a hash parsing problem for 7-Zip hashes: allow a longer CRC32 data length field within the hash format<br />
</li>
<li>Fixed a hash parsing problem when using --show/--left with hashes with long salts that required pure kernels<br />
</li>
<li>Fixed a logic error in storing temporary progress for slow hashes, leading to invalid speeds in status view<br />
</li>
<li>Fixed a mask-length check issue: return -1 in case the mask length is not within the password-length range<br />
</li>
<li>Fixed a missing check for return code in case hashcat.hcstat2 was not found<br />
</li>
<li>Fixed a race condition in combinator- and hybrid-mode where the same scratch buffer was used by multiple threads<br />
</li>
<li>Fixed a restore issue leading to "Restore value is greater than keyspace" when mask files or wordlist folders were used<br />
</li>
<li>Fixed an uninitialized value in OpenCL kernels 9720, 9820 and 10420 leading to absurd benchmark performance<br />
</li>
<li>Fixed the maximum password length check in password-reassembling function<br />
</li>
<li>Fixed the output of --show when &#36;HEX[] passwords were present within the potfile<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Autotune: Improve autotune engine logic and synchronize results on same OpenCL devices<br />
</li>
<li>Documents: Added docs/limits.txt<br />
</li>
<li>Files: Copy include/ folder and its content when SHARED is set to 1 in Makefile<br />
</li>
<li>Files: Switched back to relative current working directory on Windows to work around problems with Unicode characters<br />
</li>
<li>Hashcat Context: Fixed a memory leak in shutdown phase<br />
</li>
<li>Hash Parser: Changed the way large strings are handled/truncated within the event buffer if they are too large to fit<br />
</li>
<li>Hash Parser: Fixed a memory leak in shutdown phase<br />
</li>
<li>Hash Parser: Fixed the use of strtok_r () calls<br />
</li>
<li>OpenCL Devices: Fixed several memory leaks in shutdown phase<br />
</li>
<li>OpenCL Kernels: Add general function declaration keyword (inline) and some OpenCL runtime specific exceptions for NV and CPU devices<br />
</li>
<li>OpenCL Kernels: Replace variables from uXX to uXXa if used in __constant space<br />
</li>
<li>OpenCL Kernels: Use a special kernel to initialize the password buffer used during autotune measurements, to reduce startup time<br />
</li>
<li>OpenCL Kernels: Refactored kernel thread management from native to maximum per kernel<br />
</li>
<li>OpenCL Kernels: Use three separate comparison kernels (depending on keyver) for WPA instead of one<br />
</li>
<li>OpenCL Runtime: Add current timestamp to OpenCL kernel source in order to force OpenCL JiT compiler to not use the cache<br />
</li>
<li>OpenCL Runtime: Enforce use of OpenCL version 1.2 to prevent OpenCL runtimes from making use of the __generic address space qualifier<br />
</li>
<li>OpenCL Runtime: Updated rocm detection<br />
</li>
<li>Returncode: Enforce return code 0 when the user selects --speed-only or --progress-only and no other error occurs<br />
</li>
<li>Rules: Fixed some default rule-files after changing rule meaning of 'x' to 'O'<br />
</li>
<li>Self Test: Skip self-test for mode 8900 - user-configurable scrypt settings are incompatible with fixed settings in the self-test hash<br />
</li>
<li>Self Test: Skip self-test for mode 15700 because the settings are too high and cause startup times that are too long<br />
</li>
<li>Terminal: Add workitem settings to status display (can be handy for debugging)<br />
</li>
<li>Terminal: Send clear-line code to the same output stream as the message immediately following<br />
</li>
<li>Timer: Switch from gettimeofday() to clock_gettime() to work around problems on cygwin<br />
</li>
<li>User Options: According to getopts manpage, the last element of the option array has to be filled with zeros<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v4.0.0]]></title>
			<link>https://hashcat.net/forum/thread-6965.html</link>
			<pubDate>Fri, 27 Oct 2017 15:17:32 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-6965.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat 4.0.0 release!<br />
<br />
<hr class="mycode_hr" />
<br />
This release deserved the 4.x.x major version increase because of a new major feature:<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Added support to crack passwords and salts up to length 256</span><br />
<br />
Internally, this change took a lot of effort - many months of work. The first step was to add an OpenSSL-style low-level hash interface with the typical HashInit(), HashUpdate() and HashFinal() functions. After that, every OpenCL kernel had to be rewritten from scratch using those functions. Adding the OpenSSL-style low-level hash functions also had the advantage that you can now add new kernels more easily to hashcat - but the disadvantage is that such kernels are slower than hand-optimized kernels.<br />
<br />
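The HashInit()/HashUpdate()/HashFinal() pattern mentioned above can be sketched in a few lines of C. This is purely illustrative: the function names follow the text, but the context struct and the FNV-1a body are stand-ins, not hashcat's actual kernel code. The point is that a streaming interface absorbs input in arbitrary-size chunks, which is what removes the old fixed-length limit:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative streaming-hash context (not hashcat's real struct) */
typedef struct { uint32_t state; } hash_ctx_t;

static void HashInit (hash_ctx_t *ctx)
{
  ctx->state = 2166136261u;            /* FNV-1a offset basis */
}

static void HashUpdate (hash_ctx_t *ctx, const uint8_t *buf, size_t len)
{
  /* Absorb input in chunks of any size - callers may split freely */
  for (size_t i = 0; i < len; i++)
  {
    ctx->state = (ctx->state ^ buf[i]) * 16777619u;  /* FNV-1a prime */
  }
}

static uint32_t HashFinal (hash_ctx_t *ctx)
{
  return ctx->state;                   /* nothing to pad in this toy hash */
}
```

Feeding the same bytes in one call or in several calls yields the same digest, which is the property that lets a kernel process passwords longer than one fixed-size block.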
The OpenCL kernels from 3.6.0 were all hand-optimized for performance. No worries - these kernels still exist, and can be explicitly requested with the new -O (optimized kernel) option. This configures hashcat to use the optimized OpenCL kernels, but at the cost of limited password length support (typically 32).<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Added self-test functionality to detect broken OpenCL runtimes on startup</span><br />
<br />
Another important missing feature in the previous hashcat version was the self-test on startup. Some (mostly older) OpenCL runtimes were somewhat buggy (thanks to NV and AMD) in ways that created non-working kernels. The problem was that the user didn't get any error message that clarified the reason for the problems. With this version, hashcat tries to crack a known hash on startup with a known password. Successfully cracking a simple known hash is a bulletproof way to verify that your system is set up correctly.<br />
<br />
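The self-test logic can be sketched in host code. The toy kernel below is a stand-in (hashcat's real self-test runs the actual OpenCL kernel for the selected hash-mode against a hard-coded hash:password pair), but the comparison step is the same idea:

```c
#include <stdio.h>

/* Toy stand-in for a device kernel (djb2, purely illustrative) */
static unsigned toy_kernel_hash (const char *password)
{
  unsigned h = 5381;
  for (const char *p = password; *p; p++)
  {
    h = h * 33 + (unsigned char) *p;
  }
  return h;
}

/* Returns 0 if the runtime reproduced the expected digest for a known
 * password, -1 otherwise - in which case the session should abort with
 * a clear error instead of silently producing no cracks. */
static int self_test (unsigned known_digest, const char *known_password)
{
  return (toy_kernel_hash (known_password) == known_digest) ? 0 : -1;
}
```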
<span style="font-weight: bold;" class="mycode_b">Added hash-mode 2501 = WPA/WPA2 PMK</span><br />
<br />
This mode was added to run precomputed PMK lists against a hccapx, like cowpatty did (genpmk). You still have to precompute the PMK. Please use wlangenpmk/wlangenpmkocl from hcxtools to do so.<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Improved macOS support</span><br />
<br />
The evil "abort trap 6" error is now handled in a different way. There is no longer a need to maintain entries for many different OpenCL devices in the hashcat.hctune database.<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>Added support to crack passwords and salts up to length 256<br />
</li>
<li>Added option --optimized-kernel-enable to use faster kernels but limit the maximum supported password- and salt-length<br />
</li>
<li>Added self-test functionality to detect broken OpenCL runtimes on startup<br />
</li>
<li>Added option --self-test-disable to disable self-test functionality on startup<br />
</li>
<li>Added option --wordlist-autohex-disable to disable the automatic conversion of &#36;HEX[] words from the wordlist<br />
</li>
<li>Added option --example-hashes to show an example hash for each hash-mode<br />
</li>
<li>Removed option --weak-hash-check (zero-length password check) to reduce startup time; it also caused many Trap 6 errors on macOS<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 2500 = WPA/WPA2 (SHA256-AES-CMAC)<br />
</li>
<li>Added hash-mode 2501 = WPA/WPA2 PMK<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Bugs:<br />
<ul class="mycode_list"><li>Fixed a buffer overflow in mangle_dupechar_last function<br />
</li>
<li>Fixed a calculation error in get_power() leading to errors of type "BUG pw_add()!!"<br />
</li>
<li>Fixed a memory problem that occurred when the OpenCL folder was not found and e.g. the shared and session folder were the same<br />
</li>
<li>Fixed a missing barrier() call in the RACF OpenCL kernel<br />
</li>
<li>Fixed a missing salt length value in benchmark mode for SIP<br />
</li>
<li>Fixed an integer overflow in hash buffer size calculation<br />
</li>
<li>Fixed an integer overflow in innerloop_step and innerloop_cnt variables<br />
</li>
<li>Fixed an integer overflow in masks not skipped when loaded from file<br />
</li>
<li>Fixed an invalid optimization code in kernel 7700 depending on the input hash, causing the kernel to loop forever<br />
</li>
<li>Fixed an invalid progress value in status view if words from the base wordlist get rejected because of length<br />
</li>
<li>Fixed a parser error for mode -m 9820 = MS Office &lt;= 2003 &#36;3, SHA1 + RC4, collider #2<br />
</li>
<li>Fixed a parser error in multiple modes not checking for return code, resulting in negative memory index writes<br />
</li>
<li>Fixed a problem with changed current working directory, for instance by using --restore together with --remove<br />
</li>
<li>Fixed a problem with the conversion to the &#36;HEX[] format: also convert/hexify passwords that are already in the &#36;HEX[] format<br />
</li>
<li>Fixed the calculation of device_name_chksum; should be done for each iteration<br />
</li>
<li>Fixed the dictstat lookup if nanoseconds are used in timestamps for the cached files<br />
</li>
<li>Fixed the estimated time value whenever the value is very large and overflows<br />
</li>
<li>Fixed the output of --show when used together with the collider modes -m 9710, 9810 or 10410<br />
</li>
<li>Fixed the parsing of command line options. It doesn't show two times the same error about an invalid option anymore<br />
</li>
<li>Fixed the parsing of DCC2 hashes by allowing the "#" character within the user name<br />
</li>
<li>Fixed the parsing of descrypt hashes if the hashes do have non-standard characters within the salt<br />
</li>
<li>Fixed the use of --veracrypt-pim option. It was completely ignored without showing an error<br />
</li>
<li>Fixed the version number used in the restore file header<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Autotune: Do a pre-autotune test run to find out if kernel runtime is above some TDR limit<br />
</li>
<li>Charset: Add additional DES charsets with corrected parity<br />
</li>
<li>OpenCL Buffers: Do not allocate memory for amplifiers for fast hashes, it's simply not needed<br />
</li>
<li>OpenCL Kernels: Improved performance of SHA-3 Kernel (keccak) by hardcoding the 0x80 stopbit<br />
</li>
<li>OpenCL Kernels: Improved rule engine performance by 6% for NVidia<br />
</li>
<li>OpenCL Kernels: Move from ld.global.v4.u32 to ld.const.v4.u32 in _a3 kernels<br />
</li>
<li>OpenCL Kernels: Replace bitwise swaps with rotate() versions for AMD<br />
</li>
<li>OpenCL Kernels: Rewritten Keccak kernel to run fully on registers and partially reversed last round<br />
</li>
<li>OpenCL Kernels: Rewritten SIP kernel from scratch<br />
</li>
<li>OpenCL Kernels: Thread-count is set to hardware native count except if -w 4 is used then OpenCL maximum is used<br />
</li>
<li>OpenCL Kernels: Updated default scrypt TMTO to be ideal for latest NVidia and AMD top models<br />
</li>
<li>OpenCL Kernels: Vectorized tons of slow kernels to improve CPU cracking speed<br />
</li>
<li>OpenCL Runtime: Improved detection for AMD and NV devices on macOS<br />
</li>
<li>OpenCL Runtime: Improved performance on Intel MIC devices (Xeon PHI) on runtime level (300MH/s to 2000MH/s)<br />
</li>
<li>OpenCL Runtime: Updated AMD ROCm driver version check, warn if version &lt; 1.1<br />
</li>
<li>Show cracks: Improved the performance of --show/--left if used together with --username<br />
</li>
<li>Startup: Add visual indicator of active options when benchmarking<br />
</li>
<li>Startup: Check and abort session if outfile and wordlist point to the same file<br />
</li>
<li>Startup: Show some attack-specific optimizer constraints on start, eg: minimum and maximum support password- and salt-length<br />
</li>
<li>WPA cracking: Improved nonce-error-corrections mode to use both positive and negative corrections<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>General: Update C standard from c99 to gnu99<br />
</li>
<li>Hash Parser: Improved salt-length checks for generic hash modes<br />
</li>
<li>HCdict File: Renamed file from hashcat.hcdict to hashcat.hcdict2 and add header because versions are incompatible<br />
</li>
<li>HCstat File: Add code to read LZMA compressed hashcat.hcstat2<br />
</li>
<li>HCstat File: Add hcstat2 support to enable masks of length up to 256, also adds a filetype header<br />
</li>
<li>HCstat File: Renamed file from hashcat.hcstat to hashcat.hcstat2 and add header because versions are incompatible<br />
</li>
<li>HCtune File: Remove apple related GPU entries to workaround Trap 6 error<br />
</li>
<li>OpenCL Kernels: Added code generator for most of the switch_* functions and replaced existing code<br />
</li>
<li>OpenCL Kernels: Declared all include functions as static to reduce binary kernel cache size<br />
</li>
<li>OpenCL Kernels: On AMD GPU, optimized kernels for use with AMD ROCm driver<br />
</li>
<li>OpenCL Kernels: Removed some include functions that are no longer needed to reduce compile time<br />
</li>
<li>OpenCL Runtime: Fall back to 64 threads default (from 256) on AMD GPU to prevent creating too many workitems<br />
</li>
<li>OpenCL Runtime: Forcing OpenCL 1.2 no longer needed. Option removed from build options<br />
</li>
<li>OpenCL Runtime: On AMD GPU, recommend AMD ROCm driver for Linux<br />
</li>
<li>Restore: Fixed the version number used in the restore file header<br />
</li>
<li>Time: Added a new type for time measurements, hc_time_t, and related functions to force the use of 64-bit times<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat 4.0.0 release!<br />
<br />
<hr class="mycode_hr" />
<br />
This release deserved the 4.x.x major version increase because of a new major feature:<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Added support to crack passwords and salts up to length 256</span><br />
<br />
Internally, this change took a lot of effort - many months of work. The first step was to add an OpenSSL-style low-level hash interface with the typical HashInit(), HashUpdate() and HashFinal() functions. After that, every OpenCL kernel had to be rewritten from scratch using those functions. Adding the OpenSSL-style low-level hash functions also had the advantage that you can now add new kernels more easily to hashcat - but the disadvantage is that such kernels are slower than hand-optimized kernels.<br />
<br />
The OpenCL kernels from 3.6.0 were all hand-optimized for performance. No worries - these kernels still exist, and can be explicitly requested with the new -O (optimized kernel) option. This configures hashcat to use the optimized OpenCL kernels, but at the cost of limited password length support (typically 32).<br />
<br />
<span style="font-weight: bold;" class="mycode_b">Added self-test functionality to detect broken OpenCL runtimes on startup</span><br />
<br />
Another important missing feature in the previous hashcat version was the self-test on startup. Some (mostly older) OpenCL runtimes were somewhat buggy (thanks to NV and AMD) in ways that created non-working kernels. The problem was that the user didn't get any error message that clarified the reason for the problems. With this version, hashcat tries to crack a known hash on startup with a known password. Failing to crack a simple known hash is a bulletproof way to test whether your system is set up correctly.<br />
<br />
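A minimal model of such a startup self-test, using hashcat's stock MD5 example hash and its known plaintext (the self_test helper is illustrative, not hashcat's actual code):

```python
import hashlib

# Known password/hash pair: hashcat's example hash for -m 0 (MD5).
KNOWN_PASSWORD = b"hashcat"
KNOWN_HASH = "8743b52063cd84097a65d1633f5c74f5"

def self_test():
    # If the runtime or compiled kernel were broken, the computed digest
    # would not match, and the session can abort before wasting hours.
    return hashlib.md5(KNOWN_PASSWORD).hexdigest() == KNOWN_HASH

assert self_test()
```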
<span style="font-weight: bold;" class="mycode_b">Added hash-mode 2501 = WPA/WPA2 PMK</span><br />
<br />
This mode was added to run precomputed PMK lists against a hccapx, like cowpatty did (genpmk). You still have to precompute the PMK. Please use wlangenpmk/wlangenpmkocl from hcxtools to do so.<br />
<br />
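For reference, the PMK that mode 2501 consumes is the standard WPA/WPA2 derivation: PBKDF2-HMAC-SHA1 over the passphrase with the ESSID as salt, 4096 iterations, 32-byte output. A sketch (not the wlangenpmk implementation):

```python
import hashlib

# Standard WPA/WPA2 passphrase-to-PMK derivation; mode 2501 skips this
# expensive step and takes the precomputed PMK directly.
def wpa_pmk(passphrase: bytes, essid: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase, essid, 4096, 32)

pmk = wpa_pmk(b"password", b"IEEE")
assert len(pmk) == 32
```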
<span style="font-weight: bold;" class="mycode_b">Improved macOS support</span><br />
<br />
The evil "abort trap 6" error is now handled in a different way. There is no longer any need to maintain entries for many different OpenCL devices in the hashcat.hctune database.<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>Added support to crack passwords and salts up to length 256<br />
</li>
<li>Added option --optimized-kernel-enable to use faster kernels but limit the maximum supported password- and salt-length<br />
</li>
<li>Added self-test functionality to detect broken OpenCL runtimes on startup<br />
</li>
<li>Added option --self-test-disable to disable self-test functionality on startup<br />
</li>
<li>Added option --wordlist-autohex-disable to disable the automatic conversion of &#36;HEX[] words from the word list<br />
</li>
<li>Added option --example-hashes to show an example hash for each hash-mode<br />
</li>
<li>Removed option --weak-hash-check (zero-length password check) to reduce startup time; it also caused many Trap 6 errors on macOS<br />
</li>
</ul>
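As background for the autohex option above, the &#36;HEX[] convention stores words containing problematic bytes as hex so the wordlist stays line-safe, and hashcat converts them back on load. A sketch of that conversion (the helper name is illustrative; hashcat's loader is C, not Python):

```python
import re

# Decode a wordlist entry: $HEX[...] entries are hex-decoded back to raw
# bytes, everything else is taken literally.
def autohex_decode(word: str) -> bytes:
    m = re.fullmatch(r"\$HEX\[([0-9a-fA-F]*)\]", word)
    if m:
        return bytes.fromhex(m.group(1))
    return word.encode()

assert autohex_decode("$HEX[68656c6c6f]") == b"hello"
assert autohex_decode("plain") == b"plain"
```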
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 2500 = WPA/WPA2 (SHA256-AES-CMAC)<br />
</li>
<li>Added hash-mode 2501 = WPA/WPA2 PMK<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Bugs:<br />
<ul class="mycode_list"><li>Fixed a buffer overflow in mangle_dupechar_last function<br />
</li>
<li>Fixed a calculation error in get_power() leading to errors of type "BUG pw_add()!!"<br />
</li>
<li>Fixed a memory problem that occurred when the OpenCL folder was not found and, e.g., the shared and session folders were the same<br />
</li>
<li>Fixed a missing barrier() call in the RACF OpenCL kernel<br />
</li>
<li>Fixed a missing salt length value in benchmark mode for SIP<br />
</li>
<li>Fixed an integer overflow in hash buffer size calculation<br />
</li>
<li>Fixed an integer overflow in innerloop_step and innerloop_cnt variables<br />
</li>
<li>Fixed an integer overflow in masks not skipped when loaded from file<br />
</li>
<li>Fixed an invalid optimization code in kernel 7700 depending on the input hash, causing the kernel to loop forever<br />
</li>
<li>Fixed an invalid progress value in status view if words from the base wordlist get rejected because of length<br />
</li>
<li>Fixed a parser error for mode -m 9820 = MS Office &lt;= 2003 &#36;3, SHA1 + RC4, collider #2<br />
</li>
<li>Fixed a parser error in multiple modes not checking for return code, resulting in negative memory index writes<br />
</li>
<li>Fixed a problem with changed current working directory, for instance by using --restore together with --remove<br />
</li>
<li>Fixed a problem with the conversion to the &#36;HEX[] format: also convert/hexify passwords that are already in the &#36;HEX[] format<br />
</li>
<li>Fixed the calculation of device_name_chksum; should be done for each iteration<br />
</li>
<li>Fixed the dictstat lookup if nanoseconds are used in timestamps for the cached files<br />
</li>
<li>Fixed the estimated time value whenever the value is very large and overflows<br />
</li>
<li>Fixed the output of --show when used together with the collider modes -m 9710, 9810 or 10410<br />
</li>
<li>Fixed the parsing of command line options. It no longer shows the same error about an invalid option twice<br />
</li>
<li>Fixed the parsing of DCC2 hashes by allowing the "#" character within the user name<br />
</li>
<li>Fixed the parsing of descrypt hashes if the hashes do have non-standard characters within the salt<br />
</li>
<li>Fixed the use of --veracrypt-pim option. It was completely ignored without showing an error<br />
</li>
<li>Fixed the version number used in the restore file header<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Autotune: Do a pre-autotune test run to find out if kernel runtime is above some TDR limit<br />
</li>
<li>Charset: Add additional DES charsets with corrected parity<br />
</li>
<li>OpenCL Buffers: Do not allocate memory for amplifiers for fast hashes, it's simply not needed<br />
</li>
<li>OpenCL Kernels: Improved performance of SHA-3 Kernel (keccak) by hardcoding the 0x80 stopbit<br />
</li>
<li>OpenCL Kernels: Improved rule engine performance by 6% for NVidia<br />
</li>
<li>OpenCL Kernels: Move from ld.global.v4.u32 to ld.const.v4.u32 in _a3 kernels<br />
</li>
<li>OpenCL Kernels: Replace bitwise swaps with rotate() versions for AMD<br />
</li>
<li>OpenCL Kernels: Rewritten Keccak kernel to run fully on registers and partially reversed last round<br />
</li>
<li>OpenCL Kernels: Rewritten SIP kernel from scratch<br />
</li>
<li>OpenCL Kernels: Thread count is set to the hardware-native count, except if -w 4 is used, in which case the OpenCL maximum is used<br />
</li>
<li>OpenCL Kernels: Updated default scrypt TMTO to be ideal for latest NVidia and AMD top models<br />
</li>
<li>OpenCL Kernels: Vectorized tons of slow kernels to improve CPU cracking speed<br />
</li>
<li>OpenCL Runtime: Improved detection for AMD and NV devices on macOS<br />
</li>
<li>OpenCL Runtime: Improved performance on Intel MIC devices (Xeon PHI) on runtime level (300MH/s to 2000MH/s)<br />
</li>
<li>OpenCL Runtime: Updated AMD ROCm driver version check, warn if version &lt; 1.1<br />
</li>
<li>Show cracks: Improved the performance of --show/--left if used together with --username<br />
</li>
<li>Startup: Add visual indicator of active options when benchmarking<br />
</li>
<li>Startup: Check and abort session if outfile and wordlist point to the same file<br />
</li>
<li>Startup: Show some attack-specific optimizer constraints on start, e.g. minimum and maximum supported password and salt length<br />
</li>
<li>WPA cracking: Improved nonce-error-corrections mode to use both positive and negative corrections<br />
</li>
</ul>
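Regarding the parity-corrected DES charsets above: DES reserves the least-significant bit of each key byte as an odd-parity bit over that byte. A per-byte correction can be sketched as follows (illustrative helper, not hashcat code):

```python
# Fix the DES parity bit of a single key byte: the upper 7 bits carry key
# material, and the LSB is set so the total number of set bits is odd.
def fix_des_parity(b: int) -> int:
    data = b & 0xFE
    ones = bin(data).count("1")
    return data | (0 if ones % 2 else 1)

assert fix_des_parity(0x00) == 0x01  # zero data bits -> parity bit set
assert fix_des_parity(0xFE) == 0xFE  # seven data bits -> already odd
```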
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>General: Update C standard from c99 to gnu99<br />
</li>
<li>Hash Parser: Improved salt-length checks for generic hash modes<br />
</li>
<li>HCdict File: Renamed file from hashcat.hcdict to hashcat.hcdict2 and add header because versions are incompatible<br />
</li>
<li>HCstat File: Add code to read LZMA compressed hashcat.hcstat2<br />
</li>
<li>HCstat File: Add hcstat2 support to enable masks of length up to 256, also adds a filetype header<br />
</li>
<li>HCstat File: Renamed file from hashcat.hcstat to hashcat.hcstat2 and add header because versions are incompatible<br />
</li>
<li>HCtune File: Remove Apple-related GPU entries to work around the Trap 6 error<br />
</li>
<li>OpenCL Kernels: Added code generator for most of the switch_* functions and replaced existing code<br />
</li>
<li>OpenCL Kernels: Declared all include functions as static to reduce binary kernel cache size<br />
</li>
<li>OpenCL Kernels: On AMD GPU, optimized kernels for use with AMD ROCm driver<br />
</li>
<li>OpenCL Kernels: Removed some include functions that are no longer needed to reduce compile time<br />
</li>
<li>OpenCL Runtime: Fall back to 64 threads default (from 256) on AMD GPU to prevent creating too many workitems<br />
</li>
<li>OpenCL Runtime: Forcing OpenCL 1.2 no longer needed. Option removed from build options<br />
</li>
<li>OpenCL Runtime: On AMD GPU, recommend AMD ROCm driver for Linux<br />
</li>
<li>Restore: Fixed the version number used in the restore file header<br />
</li>
<li>Time: Added new type hc_time_t and related functions for time measurements, forcing the use of 64-bit times<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v3.6.0]]></title>
			<link>https://hashcat.net/forum/thread-6630.html</link>
			<pubDate>Fri, 09 Jun 2017 15:51:58 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-6630.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v3.6.0 release! This release is mostly about new algorithms added.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode   600 = BLAKE2-512<br />
</li>
<li>Added hash-mode 15200 = Blockchain, My Wallet, V2<br />
</li>
<li>Added hash-mode 15300 = DPAPI masterkey file v1 and v2<br />
</li>
<li>Added hash-mode 15400 = ChaCha20<br />
</li>
<li>Added hash-mode 15500 = JKS Java Key Store Private Keys (SHA1)<br />
</li>
<li>Added hash-mode 15600 = Ethereum Wallet, PBKDF2-HMAC-SHA256<br />
</li>
<li>Added hash-mode 15700 = Ethereum Wallet, PBKDF2-SCRYPT<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>7-Zip cracking: increased max. data length to 320k and removed AES padding attack to avoid false negatives<br />
</li>
<li>Dictionary cache: Show time spent on dictionary cache building at startup<br />
</li>
<li>Rules: Support added for position 'p' (Nth instance of a character) in host mode (using -j or -k)<br />
</li>
<li>Rules: Support added for rejection rule '_N' (reject plains of length not equal to N) in host mode<br />
</li>
<li>Rules: Support added for rule 'eX'<br />
</li>
<li>Wordlist encoding: Added parameters --encoding-from and --encoding-to to configure wordlist encoding handling<br />
</li>
<li>Wordlist encoding: Support added for internal conversion between user-defined encodings during runtime<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Workarounds:<br />
<ul class="mycode_list"><li>Workaround added for NVIDIA NVML library: If libnvidia-ml.so couldn't be loaded, try again using libnvidia-ml.so.1<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>WPA cracking: Improved nonce-error-corrections mode to fix corrupt nonces generated on big-endian devices<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a condition that caused hybrid attacks using a maskfile to not select all wordlists from a wordlist folder<br />
</li>
<li>Fixed a memory leak that was present when a user periodically prints hashcat status (using --status-timer)<br />
</li>
<li>Fixed a missing type specifier in a function declaration of the RACF kernel<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: In the binary release packages, link libiconv static for Windows binaries<br />
</li>
<li>Dictstat: Structure for dictstat file changed to include --encoding-from and --encoding-to parameters<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-PRO driver version check, warn if version 17.10 (known to be broken) is detected<br />
</li>
<li>WPA cracking: Reduced --nonce-error-corrections default from 16 to 8 to compensate for speed drop caused by big-endian fixes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v3.6.0 release! This release is mostly about new algorithms added.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode   600 = BLAKE2-512<br />
</li>
<li>Added hash-mode 15200 = Blockchain, My Wallet, V2<br />
</li>
<li>Added hash-mode 15300 = DPAPI masterkey file v1 and v2<br />
</li>
<li>Added hash-mode 15400 = ChaCha20<br />
</li>
<li>Added hash-mode 15500 = JKS Java Key Store Private Keys (SHA1)<br />
</li>
<li>Added hash-mode 15600 = Ethereum Wallet, PBKDF2-HMAC-SHA256<br />
</li>
<li>Added hash-mode 15700 = Ethereum Wallet, PBKDF2-SCRYPT<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>7-Zip cracking: increased max. data length to 320k and removed AES padding attack to avoid false negatives<br />
</li>
<li>Dictionary cache: Show time spent on dictionary cache building at startup<br />
</li>
<li>Rules: Support added for position 'p' (Nth instance of a character) in host mode (using -j or -k)<br />
</li>
<li>Rules: Support added for rejection rule '_N' (reject plains of length not equal to N) in host mode<br />
</li>
<li>Rules: Support added for rule 'eX'<br />
</li>
<li>Wordlist encoding: Added parameters --encoding-from and --encoding-to to configure wordlist encoding handling<br />
</li>
<li>Wordlist encoding: Support added for internal conversion between user-defined encodings during runtime<br />
</li>
</ul>
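The per-word conversion the two encoding options perform can be sketched as follows (a simplified model of the iconv-based handling mentioned below, not hashcat's actual implementation):

```python
# Re-encode a raw wordlist line: decode with the source encoding
# (--encoding-from), re-encode with the target encoding (--encoding-to)
# before the word is used as a password candidate.
def reencode(raw: bytes, enc_from: str, enc_to: str) -> bytes:
    return raw.decode(enc_from).encode(enc_to)

# A Latin-1 'e-acute' (0xE9) becomes the two-byte UTF-8 sequence C3 A9.
assert reencode(b"caf\xe9", "iso-8859-1", "utf-8") == b"caf\xc3\xa9"
```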
<hr class="mycode_hr" />
<br />
Workarounds:<br />
<ul class="mycode_list"><li>Workaround added for NVIDIA NVML library: If libnvidia-ml.so couldn't be loaded, try again using libnvidia-ml.so.1<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>WPA cracking: Improved nonce-error-corrections mode to fix corrupt nonces generated on big-endian devices<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a condition that caused hybrid attacks using a maskfile to not select all wordlists from a wordlist folder<br />
</li>
<li>Fixed a memory leak that was present when a user periodically prints hashcat status (using --status-timer)<br />
</li>
<li>Fixed a missing type specifier in a function declaration of the RACF kernel<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: In the binary release packages, link libiconv static for Windows binaries<br />
</li>
<li>Dictstat: Structure for dictstat file changed to include --encoding-from and --encoding-to parameters<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-PRO driver version check, warn if version 17.10 (known to be broken) is detected<br />
</li>
<li>WPA cracking: Reduced --nonce-error-corrections default from 16 to 8 to compensate for speed drop caused by big-endian fixes<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v3.5.0]]></title>
			<link>https://hashcat.net/forum/thread-6468.html</link>
			<pubDate>Wed, 05 Apr 2017 13:58:03 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-6468.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v3.5.0 release! This is just a smaller update, mostly bugfixes. <br />
<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>WPA cracking: Added support for WPA/WPA2 handshake AP nonce automatic error correction<br />
</li>
<li>WPA cracking: Added optional parameter --nonce-error-corrections to configure range of error correction<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 15100 = Juniper/NetBSD sha1crypt<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Abbreviate long hashes to display the Hash.Target status line within 80 characters<br />
</li>
<li>Refactored internal use of esalt to sync with the number of digests instead of the number of salts<br />
</li>
<li>Refactored other output to display within 80 characters without wrapping<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a hash validation error when trying to load Android FDE &lt; 4.3 hashes<br />
</li>
<li>Fixed a problem where --keyspace combined with custom charsets incorrectly displayed an error message<br />
</li>
<li>Fixed a problem where --stdout combined with custom charsets incorrectly displayed an error message<br />
</li>
<li>Fixed a problem with parsing and displaying -m 7000 = Fortigate (FortiOS) hashes<br />
</li>
<li>Fixed a race condition after sessions finish, where the input-base was freed but accessed afterwards<br />
</li>
<li>Fixed a typo that resulted in the minimum password length not being correctly initialized<br />
</li>
<li>Fixed --outfile-format formats 11 through 15 to show the correct crack position<br />
</li>
<li>Fixed --remove to apply even when all hashes are either found in the potfile or detected in weak-hash checks<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: Added missing prototypes for atlassian_parse_hash function<br />
</li>
<li>Dictionary Cache: Split long status line into multiple lines to stay &lt; 80 chars<br />
</li>
<li>Files: Detect and error when users try to use -r with a parameter which is not a file<br />
</li>
<li>HCCAPX Parser: Added support for a special bit (bit 8) of the message_pair that indicates if replay counters match<br />
</li>
<li>Parameter: Detect and error when users try to use an empty string (length 0) for parameters like --session=<br />
</li>
<li>Parameter: Detect and error when users try to use non-digit input when only digits are expected<br />
</li>
<li>Sessions: Improved string comparison in case user sets --session to "hashcat"<br />
</li>
<li>Status View: Add rejected counter to machine-readable output<br />
</li>
<li>Status View: Rename labels Input.Mode, Input.Base, ... to Guess.Mode, Guess.Base, ...<br />
</li>
<li>Status View: Added a visual indicator to the status screen when checkpoint quit has been requested<br />
</li>
<li>Versions: Changed version naming convention from x.yz to x.y.z<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v3.5.0 release! This is just a smaller update, mostly bugfixes. <br />
<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>WPA cracking: Added support for WPA/WPA2 handshake AP nonce automatic error correction<br />
</li>
<li>WPA cracking: Added optional parameter --nonce-error-corrections to configure range of error correction<br />
</li>
</ul>
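Conceptually, nonce error correction generates candidate nonces in a window around the captured AP nonce; a simplified model of that idea follows (an assumption made for illustration, not hashcat's kernel code):

```python
# Treat the nonce as a big-endian integer and try every value within
# +/- corrections of the captured one; --nonce-error-corrections sets
# the window size.
def nonce_candidates(nonce: bytes, corrections: int):
    base = int.from_bytes(nonce, "big")
    for delta in range(-corrections, corrections + 1):
        yield (base + delta).to_bytes(len(nonce), "big")

cands = list(nonce_candidates(b"\x00" * 31 + b"\x80", 8))
assert len(cands) == 17  # 2*8 + 1 candidates, original value included
assert b"\x00" * 31 + b"\x80" in cands
```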
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 15100 = Juniper/NetBSD sha1crypt<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improvements:<br />
<ul class="mycode_list"><li>Abbreviate long hashes to display the Hash.Target status line within 80 characters<br />
</li>
<li>Refactored internal use of esalt to sync with the number of digests instead of the number of salts<br />
</li>
<li>Refactored other output to display within 80 characters without wrapping<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a hash validation error when trying to load Android FDE &lt; 4.3 hashes<br />
</li>
<li>Fixed a problem where --keyspace combined with custom charsets incorrectly displayed an error message<br />
</li>
<li>Fixed a problem where --stdout combined with custom charsets incorrectly displayed an error message<br />
</li>
<li>Fixed a problem with parsing and displaying -m 7000 = Fortigate (FortiOS) hashes<br />
</li>
<li>Fixed a race condition after sessions finish, where the input-base was freed but accessed afterwards<br />
</li>
<li>Fixed a typo that resulted in the minimum password length not being correctly initialized<br />
</li>
<li>Fixed --outfile-format formats 11 through 15 to show the correct crack position<br />
</li>
<li>Fixed --remove to apply even when all hashes are either found in the potfile or detected in weak-hash checks<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: Added missing prototypes for atlassian_parse_hash function<br />
</li>
<li>Dictionary Cache: Split long status line into multiple lines to stay &lt; 80 chars<br />
</li>
<li>Files: Detect and error when users try to use -r with a parameter which is not a file<br />
</li>
<li>HCCAPX Parser: Added support for a special bit (bit 8) of the message_pair that indicates if replay counters match<br />
</li>
<li>Parameter: Detect and error when users try to use an empty string (length 0) for parameters like --session=<br />
</li>
<li>Parameter: Detect and error when users try to use non-digit input when only digits are expected<br />
</li>
<li>Sessions: Improved string comparison in case user sets --session to "hashcat"<br />
</li>
<li>Status View: Add rejected counter to machine-readable output<br />
</li>
<li>Status View: Rename labels Input.Mode, Input.Base, ... to Guess.Mode, Guess.Base, ...<br />
</li>
<li>Status View: Added a visual indicator to the status screen when checkpoint quit has been requested<br />
</li>
<li>Versions: Changed version naming convention from x.yz to x.y.z<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v3.40]]></title>
			<link>https://hashcat.net/forum/thread-6351.html</link>
			<pubDate>Fri, 03 Mar 2017 16:18:30 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-6351.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v3.40 release!<br />
<br />
The major changes are the following:<br />
<ul class="mycode_list"><li>Added support to crack iTunes backups: <a href="https://hashcat.net/forum/thread-6047.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-6047.html</a><br />
</li>
<li>Added support to crack LUKS volumes: <a href="https://hashcat.net/forum/thread-6225.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-6225.html</a><br />
</li>
<li>Added support for hccapx files: <a href="https://hashcat.net/forum/thread-6273.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-6273.html</a><br />
</li>
</ul>
There's also a ton of <span style="font-weight: bold;" class="mycode_b">bugfixes</span> thanks to some very good reports from the users and others found while adding hashcat to the Coverity CI. <br />
<br />
From a performance perspective, there should be no changes compared to v3.20/v3.30; here's a detailed comparison: <a href="https://docs.google.com/spreadsheets/d/1B1S_t1Z0KsqByH3pNkYUM-RCFMu860nlfSsYEqOoqco/edit#gid=1439721324" target="_blank" rel="noopener" class="mycode_url">https://docs.google.com/spreadsheets/d/1...1439721324</a><br />
<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>Added support for loading hccapx files<br />
</li>
<li>Added support for filtering hccapx message pairs using --hccapx-message-pair<br />
</li>
<li>Added support for parsing 7-Zip hashes with LZMA/LZMA2 compression indicator set to a non-zero value<br />
</li>
<li>Added support for decompressing LZMA1/LZMA2 data for -m 11600 = 7-Zip to validate the CRC<br />
</li>
<li>Added support for automatic merge of LM halves in case --show and --left is used<br />
</li>
<li>Added support for showing all user names with --show and --left if --username was specified<br />
</li>
<li>Added support for GPU temperature management on cygwin build<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode  1411 = SSHA-256(Base64), LDAP {SSHA256}<br />
</li>
<li>Added hash-mode  3910 = md5(md5(&#36;pass).md5(&#36;salt))<br />
</li>
<li>Added hash-mode  4010 = md5(&#36;salt.md5(&#36;salt.&#36;pass))<br />
</li>
<li>Added hash-mode  4110 = md5(&#36;salt.md5(&#36;pass.&#36;salt))<br />
</li>
<li>Added hash-mode  4520 = sha1(&#36;salt.sha1(&#36;pass))<br />
</li>
<li>Added hash-mode  4522 = PunBB<br />
</li>
<li>Added hash-mode  7000 = Fortigate (FortiOS)<br />
</li>
<li>Added hash-mode 12001 = Atlassian (PBKDF2-HMAC-SHA1)<br />
</li>
<li>Added hash-mode 14600 = LUKS<br />
</li>
<li>Added hash-mode 14700 = iTunes Backup &lt; 10.0<br />
</li>
<li>Added hash-mode 14800 = iTunes Backup &gt;= 10.0<br />
</li>
<li>Added hash-mode 14900 = Skip32<br />
</li>
<li>Added hash-mode 15000 = FileZilla Server &gt;= 0.9.55<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a problem within the Kerberos 5 TGS-REP (-m 13100) hash parser<br />
</li>
<li>Fixed clEnqueueNDRangeKernel(): CL_UNKNOWN_ERROR caused by an invalid work-item count during weak-hash-check<br />
</li>
<li>Fixed cracking of PeopleSoft Token (-m 13500) if salt length + password length is &gt;= 128 byte<br />
</li>
<li>Fixed cracking of Plaintext (-m 99999) in case MD4 was used in a previous session<br />
</li>
<li>Fixed DEScrypt cracking in BF mode in case the hashlist contains more than 16 times the same salt<br />
</li>
<li>Fixed duplicate detection for WPA handshakes with the same ESSID<br />
</li>
<li>Fixed nvapi datatype definition for NvS32 and NvU32<br />
</li>
<li>Fixed overflow in bcrypt kernel in expand_key() function<br />
</li>
<li>Fixed pointer to local variable outside scope in case -j or -k is used<br />
</li>
<li>Fixed pointer to local variable outside scope in case --markov-hcstat is not used<br />
</li>
<li>Fixed recursion in loopback handling when session was aborted by the user<br />
</li>
<li>Fixed rule 'O' (RULE_OP_MANGLE_OMIT) in host mode in case the offset + length parameter equals the length of the input word<br />
</li>
<li>Fixed rule 'i' (RULE_OP_MANGLE_INSERT) in host mode in case the offset parameter equals the length of the input word<br />
</li>
<li>Fixed string not null terminated inside workaround for checking drm driver path<br />
</li>
<li>Fixed string not null terminated while reading maskfiles<br />
</li>
<li>Fixed truncation of password after position 32 with the combinator attack<br />
</li>
<li>Fixed use of option --keyspace in combination with -m 2500 (WPA)<br />
</li>
<li>Fixed WPA/WPA2 cracking in case eapol frame is &gt;= 248 byte<br />
</li>
</ul>
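The semantics of the two rules involved in the fixes above, including the boundary cases that were broken, can be modeled as follows (a host-side sketch; positions are shown as integers rather than hashcat's base36 rule characters):

```python
# 'O' (omit): remove `length` characters starting at `offset`.
# The fixed boundary case is offset + length == len(word).
def rule_omit(word: str, offset: int, length: int) -> str:
    if offset + length > len(word):
        return word  # out of range: leave the word unchanged
    return word[:offset] + word[offset + length:]

# 'i' (insert): insert `char` at `offset`.
# The fixed boundary case is offset == len(word) (append).
def rule_insert(word: str, offset: int, char: str) -> str:
    if offset > len(word):
        return word
    return word[:offset] + char + word[offset:]

assert rule_omit("password", 1, 3) == "pword"
assert rule_omit("pass", 1, 3) == "p"          # offset + length == len
assert rule_insert("pass", 4, "!") == "pass!"  # offset == len
```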
<hr class="mycode_hr" />
<br />
Workarounds added:<br />
<ul class="mycode_list"><li>Workaround added for AMDGPU-Pro OpenCL runtime: AES encrypt and decrypt Invertkey function was calculated wrong in certain cases<br />
</li>
<li>Workaround added for AMDGPU-Pro OpenCL runtime: RAR3 kernel require a volatile variable to work correctly<br />
</li>
<li>Workaround added for Apple OpenCL runtime: bcrypt kernel requires a volatile variable because of a compiler optimization bug<br />
</li>
<li>Workaround added for NVidia OpenCL runtime: RACF kernel requires EBCDIC lookup to be done on shared memory<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: Add SHARED variable to Makefile to choose whether hashcat is built as a static or shared binary (using libhashcat.so/hashcat.dll)<br />
</li>
<li>Building: Removed compiler option -march=native as this created problems for maintainers on various distributions<br />
</li>
<li>Building: Removed the use of RPATH on linker level<br />
</li>
<li>Building: Replaced linking of CRT_glob.o with the use of int _dowildcard<br />
</li>
<li>Commandline: Do some checks related to custom-charset options if user specifies them<br />
</li>
<li>CPU Affinity: Fixed memory leak in case invalid cpu Id was specified<br />
</li>
<li>Dispatcher: Fixed several memory leaks in case an OpenCL error occurs<br />
</li>
<li>Events: Improved the maximum event message handling. event_log () will now also internally make sure that the message is properly terminated<br />
</li>
<li>File Locking: Improved error detection on file locks<br />
</li>
<li>File Reads: Fixed memory leak in case outfile or hashfile was not accessible<br />
</li>
<li>File Reads: Improved error detection on file reads, especially when getting the file stats<br />
</li>
<li>Files: Do several file and folder checks on startup rather than when the files are actually used, to avoid related errors only surfacing after potentially long-running operations<br />
</li>
<li>Hardware Management: Bring back kernel exec timeout detection for NVidia on user request<br />
</li>
<li>Hardware Monitor: Fixed several memory leaks in case hash-file writing (caused by --remove) failed<br />
</li>
<li>Hardware Monitor: Fixed several memory leaks in case no hardware monitor sensor is found<br />
</li>
<li>Hardware Monitor: In case NVML initialization fails, do not try to initialize NVAPI or XNVCTRL because they both depend on NVML<br />
</li>
<li>Hash Parsing: Added additional bound checks for the SIP digest authentication (MD5) parser (-m 11400)<br />
</li>
<li>Hash Parsing: Make sure that all files are correctly closed whenever a hash file parsing error occurs<br />
</li>
<li>Helper: Added functions to check existence, type, read- and write-permissions and rewrite sources to use them instead of stat()<br />
</li>
<li>Keyfile handling: Make sure that the memory is cleanly freed whenever a VeraCrypt/TrueCrypt keyfile fails to load<br />
</li>
<li>Mask Checks: Added additional memory cleanups after parsing/verifying masks<br />
</li>
<li>Mask Checks: Added integer overflow detection for a keyspace of a mask provided by user<br />
</li>
<li>Mask Increment: Fixed memory leak in case mask_append() fails<br />
</li>
<li>OpenCL Device: Do a check on available constant memory size and abort if it's less than 64kB<br />
</li>
<li>OpenCL Device Management: Fixed several memory leaks in case initialization of an OpenCL device or platform failed<br />
</li>
<li>OpenCL Header: Updated CL_* error codes to the OpenCL 1.2 standard<br />
</li>
<li>OpenCL Kernel: Move kernel binary buffer from heap to stack memory<br />
</li>
<li>OpenCL Kernel: Refactored read_kernel_binary to load only a single kernel for a single device<br />
</li>
<li>OpenCL Kernel: Removed "static" keyword from function declarations; it caused older Intel OpenCL runtimes to fail compiling<br />
</li>
<li>OpenCL Kernel: Renumbered hash-mode 7600 to 4521<br />
</li>
<li>OpenCL Runtime: Added a warning about using Mesa OpenCL runtime<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-Pro driver version check; warn if version 16.60 is detected, which is known to be broken<br />
</li>
<li>Outfile Check: Fixed a memory leak for failed outfile reads<br />
</li>
<li>Restore: Add some checks on the rd-&gt;cwd variable in restore case<br />
</li>
<li>Rule Engine: Fixed several memory leaks in case loading of rules failed<br />
</li>
<li>Session Management: Automatically set dedicated session names for non-cracking parameters, for example: --stdout<br />
</li>
<li>Session Management: Fixed several memory leaks in case profile- or install-folder setup failed<br />
</li>
<li>Sessions: Moved handling of multiple instances out of the restore file into a separate pidfile<br />
</li>
<li>Status screen: Do not try to clear prompt in --quiet mode<br />
</li>
<li>Tests: Fixed the timeout status code value and increased the runtime to 400 seconds<br />
</li>
<li>Threads: Restored strerror as %m is unsupported by the BSDs<br />
</li>
<li>Wordlists: Disable dictstat handling for hash-mode 3000, as it virtually creates words in the wordlist, which is not the case for other modes<br />
</li>
<li>Wordlists: Fixed memory leak in case accessing a file in a wordlist folder fails<br />
</li>
<li>WPA: Changed format for outfile and potfile from essid:mac1:mac2 to hash:mac_ap:mac_sta:essid<br />
</li>
<li>WPA: Changed format for outfile_check from essid:mac1:mac2 to hash<br />
</li>
</ul>
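As an illustration of the new WPA outfile/potfile layout (hash:mac_ap:mac_sta:essid), here is a minimal parsing sketch. This is not hashcat source; the sample hash and MAC values are synthetic.

```python
# Sketch: parsing the new WPA outfile/potfile line format
# hash:mac_ap:mac_sta:essid (sample values are hypothetical).

def parse_wpa_potfile_line(line):
    # Split from the left with maxsplit=3 so an ESSID that itself
    # contains ':' characters stays intact in the last field.
    hash_hex, mac_ap, mac_sta, essid = line.rstrip("\n").split(":", 3)
    return {"hash": hash_hex, "mac_ap": mac_ap,
            "mac_sta": mac_sta, "essid": essid}

entry = parse_wpa_potfile_line(
    "2d3c9bc0b8462f5f95b4a09e8990ba0e:aabbccddeeff:112233445566:my:wifi"
)
```

Putting the ESSID last is what makes the format unambiguous, since ESSIDs may contain the separator character while the hash and MAC fields never do.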
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
Welcome to hashcat v3.40 release!<br />
<br />
The major changes are the following:<br />
<ul class="mycode_list"><li>Added support to crack iTunes backups: <a href="https://hashcat.net/forum/thread-6047.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-6047.html</a><br />
</li>
<li>Added support to crack LUKS volumes: <a href="https://hashcat.net/forum/thread-6225.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-6225.html</a><br />
</li>
<li>Added support for hccapx files: <a href="https://hashcat.net/forum/thread-6273.html" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/forum/thread-6273.html</a><br />
</li>
</ul>
There's also a ton of <span style="font-weight: bold;" class="mycode_b">bugfixes</span> thanks to some very good reports from the users and others found while adding hashcat to the Coverity CI. <br />
<br />
From a performance perspective, there should be no changes to v3.20/v3.30, here's a detailed comparison: <a href="https://docs.google.com/spreadsheets/d/1B1S_t1Z0KsqByH3pNkYUM-RCFMu860nlfSsYEqOoqco/edit#gid=1439721324" target="_blank" rel="noopener" class="mycode_url">https://docs.google.com/spreadsheets/d/1...1439721324</a><br />
<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>Added support for loading hccapx files<br />
</li>
<li>Added support for filtering hccapx message pairs using --hccapx-message-pair<br />
</li>
<li>Added support for parsing 7-Zip hashes with LZMA/LZMA2 compression indicator set to a non-zero value<br />
</li>
<li>Added support for decompressing LZMA1/LZMA2 data for -m 11600 = 7-Zip to validate the CRC<br />
</li>
<li>Added support for automatic merge of LM halves in case --show and --left is used<br />
</li>
<li>Added support for showing all user names with --show and --left if --username was specified<br />
</li>
<li>Added support for GPU temperature management on cygwin build<br />
</li>
</ul>
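The LM-half merge feature above can be sketched as follows. LM hashes are cracked as two independent 16-hex-char halves; --show can join both halves of a 32-hex-char hash back into one plaintext once both appear in the potfile. This is an illustration, not hashcat code; the synthetic potfile entries below are made up (only the empty-half constant is real).

```python
# Sketch of merging the two independently cracked LM halves.
def show_lm(full_hash, potfile):
    left, right = full_hash[:16], full_hash[16:]
    if left in potfile and right in potfile:
        return potfile[left] + potfile[right]
    return None  # at least one half is still uncracked

potfile = {
    "aad3b435b51404ee": "",        # well-known LM hash of an empty half
    "0123456789abcdef": "PASSWO",  # synthetic entry
    "fedcba9876543210": "RD",      # synthetic entry
}
```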
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode  1411 = SSHA-256(Base64), LDAP {SSHA256}<br />
</li>
<li>Added hash-mode  3910 = md5(md5(&#36;pass).md5(&#36;salt))<br />
</li>
<li>Added hash-mode  4010 = md5(&#36;salt.md5(&#36;salt.&#36;pass))<br />
</li>
<li>Added hash-mode  4110 = md5(&#36;salt.md5(&#36;pass.&#36;salt))<br />
</li>
<li>Added hash-mode  4520 = sha1(&#36;salt.sha1(&#36;pass))<br />
</li>
<li>Added hash-mode  4522 = PunBB<br />
</li>
<li>Added hash-mode  7000 = Fortigate (FortiOS)<br />
</li>
<li>Added hash-mode 12001 = Atlassian (PBKDF2-HMAC-SHA1)<br />
</li>
<li>Added hash-mode 14600 = LUKS<br />
</li>
<li>Added hash-mode 14700 = iTunes Backup &lt; 10.0<br />
</li>
<li>Added hash-mode 14800 = iTunes Backup &gt;= 10.0<br />
</li>
<li>Added hash-mode 14900 = Skip32<br />
</li>
<li>Added hash-mode 15000 = FileZilla Server &gt;= 0.9.55<br />
</li>
</ul>
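For the nested-MD5 modes listed above, a reference sketch of -m 3910 = md5(md5($pass).md5($salt)) looks like this. It assumes (as is usual for these nested modes) that the inner digests are concatenated as lowercase hex strings before the outer hash; treat it as an illustration, not authoritative hashcat behavior.

```python
import hashlib

def md5_hex(data):
    return hashlib.md5(data).hexdigest()

# Sketch of -m 3910 = md5(md5($pass).md5($salt)); assumes inner
# digests are joined as lowercase hex strings.
def mode_3910(password, salt):
    inner = (md5_hex(password) + md5_hex(salt)).encode()
    return md5_hex(inner)
```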
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed a problem within the Kerberos 5 TGS-REP (-m 13100) hash parser<br />
</li>
<li>Fixed clEnqueueNDRangeKernel(): CL_UNKNOWN_ERROR caused by an invalid work-item count during weak-hash-check<br />
</li>
<li>Fixed cracking of PeopleSoft Token (-m 13500) if salt length + password length is &gt;= 128 byte<br />
</li>
<li>Fixed cracking of Plaintext (-m 99999) in case MD4 was used in a previous session<br />
</li>
<li>Fixed DEScrypt cracking in BF mode in case the hashlist contains more than 16 times the same salt<br />
</li>
<li>Fixed duplicate detection for WPA handshakes with the same ESSID<br />
</li>
<li>Fixed nvapi datatype definition for NvS32 and NvU32<br />
</li>
<li>Fixed overflow in bcrypt kernel in expand_key() function<br />
</li>
<li>Fixed pointer to local variable outside scope in case -j or -k is used<br />
</li>
<li>Fixed pointer to local variable outside scope in case --markov-hcstat is not used<br />
</li>
<li>Fixed recursion in loopback handling when session was aborted by the user<br />
</li>
<li>Fixed rule 'O' (RULE_OP_MANGLE_OMIT) in host mode in case the offset + length parameter equals the length of the input word<br />
</li>
<li>Fixed rule 'i' (RULE_OP_MANGLE_INSERT) in host mode in case the offset parameter equals the length of the input word<br />
</li>
<li>Fixed string not null terminated inside workaround for checking drm driver path<br />
</li>
<li>Fixed string not null terminated while reading maskfiles<br />
</li>
<li>Fixed truncation of password after position 32 with the combinator attack<br />
</li>
<li>Fixed use of option --keyspace in combination with -m 2500 (WPA)<br />
</li>
<li>Fixed WPA/WPA2 cracking in case eapol frame is &gt;= 248 byte<br />
</li>
</ul>
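The rule 'O' fix above concerns a boundary case where offset + length equals the input word length. A minimal sketch of the omit operation (an illustration of the semantics, not hashcat's actual rule engine; the behavior of rejecting only past-the-end ranges is an assumption):

```python
# Sketch of the 'O' (RULE_OP_MANGLE_OMIT) operation: delete `length`
# characters starting at `offset`. offset + length == len(word) is
# valid and must leave only the prefix (the fixed edge case).
def rule_omit(word, offset, length):
    if offset + length > len(word):  # only reject past-the-end ranges
        return word                  # assumption: leave word unchanged
    return word[:offset] + word[offset + length:]
```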
<hr class="mycode_hr" />
<br />
Workarounds added:<br />
<ul class="mycode_list"><li>Workaround added for AMDGPU-Pro OpenCL runtime: AES encrypt and decrypt Invertkey function was calculated incorrectly in certain cases<br />
</li>
<li>Workaround added for AMDGPU-Pro OpenCL runtime: RAR3 kernel requires a volatile variable to work correctly<br />
</li>
<li>Workaround added for Apple OpenCL runtime: bcrypt kernel requires a volatile variable because of a compiler optimization bug<br />
</li>
<li>Workaround added for NVidia OpenCL runtime: RACF kernel requires EBCDIC lookup to be done on shared memory<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: Added SHARED variable to Makefile to choose whether hashcat is built as a static or shared binary (using libhashcat.so/hashcat.dll)<br />
</li>
<li>Building: Removed compiler option -march=native as this created problems for maintainers on various distributions<br />
</li>
<li>Building: Removed the use of RPATH on linker level<br />
</li>
<li>Building: Replaced linking of CRT_glob.o with the use of int _dowildcard<br />
</li>
<li>Commandline: Do some checks related to custom-charset options if user specifies them<br />
</li>
<li>CPU Affinity: Fixed memory leak in case invalid cpu Id was specified<br />
</li>
<li>Dispatcher: Fixed several memory leaks in case an OpenCL error occurs<br />
</li>
<li>Events: Improved the maximum event message handling. event_log () will now also internally make sure that the message is properly terminated<br />
</li>
<li>File Locking: Improved error detection on file locks<br />
</li>
<li>File Reads: Fixed memory leak in case outfile or hashfile was not accessible<br />
</li>
<li>File Reads: Improved error detection on file reads, especially when getting the file stats<br />
</li>
<li>Files: Do several file and folder checks on startup rather than when they are actually used, to avoid related errors after potentially long-running operations<br />
</li>
<li>Hardware Management: Bring back kernel exec timeout detection for NVidia on user request<br />
</li>
<li>Hardware Monitor: Fixed several memory leaks in case hash-file writing (caused by --remove) failed<br />
</li>
<li>Hardware Monitor: Fixed several memory leaks in case no hardware monitor sensor is found<br />
</li>
<li>Hardware Monitor: In case NVML initialization failed, do not try to initialize NVAPI or XNVCTRL because they both depend on NVML<br />
</li>
<li>Hash Parsing: Added additional bound checks for the SIP digest authentication (MD5) parser (-m 11400)<br />
</li>
<li>Hash Parsing: Make sure that all files are correctly closed whenever a hash file parsing error occurs<br />
</li>
<li>Helper: Added functions to check existence, type, read- and write-permissions, and rewrote sources to use them instead of stat()<br />
</li>
<li>Keyfile handling: Make sure that the memory is cleanly freed whenever a VeraCrypt/TrueCrypt keyfile fails to load<br />
</li>
<li>Mask Checks: Added additional memory cleanups after parsing/verifying masks<br />
</li>
<li>Mask Checks: Added integer overflow detection for the keyspace of a user-provided mask<br />
</li>
<li>Mask Increment: Fixed memory leak in case mask_append() fails<br />
</li>
<li>OpenCL Device: Do a check on available constant memory size and abort if it's less than 64kB<br />
</li>
<li>OpenCL Device Management: Fixed several memory leaks in case initialization of an OpenCL device or platform failed<br />
</li>
<li>OpenCL Header: Updated CL_* error codes to the OpenCL 1.2 standard<br />
</li>
<li>OpenCL Kernel: Move kernel binary buffer from heap to stack memory<br />
</li>
<li>OpenCL Kernel: Refactored read_kernel_binary to load only a single kernel for a single device<br />
</li>
<li>OpenCL Kernel: Removed "static" keyword from function declarations; it caused older Intel OpenCL runtimes to fail compiling<br />
</li>
<li>OpenCL Kernel: Renumbered hash-mode 7600 to 4521<br />
</li>
<li>OpenCL Runtime: Added a warning about using Mesa OpenCL runtime<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-Pro driver version check; warn if version 16.60 is detected, which is known to be broken<br />
</li>
<li>Outfile Check: Fixed a memory leak for failed outfile reads<br />
</li>
<li>Restore: Add some checks on the rd-&gt;cwd variable in restore case<br />
</li>
<li>Rule Engine: Fixed several memory leaks in case loading of rules failed<br />
</li>
<li>Session Management: Automatically set dedicated session names for non-cracking parameters, for example: --stdout<br />
</li>
<li>Session Management: Fixed several memory leaks in case profile- or install-folder setup failed<br />
</li>
<li>Sessions: Moved handling of multiple instances out of the restore file into a separate pidfile<br />
</li>
<li>Status screen: Do not try to clear prompt in --quiet mode<br />
</li>
<li>Tests: Fixed the timeout status code value and increased the runtime to 400 seconds<br />
</li>
<li>Threads: Restored strerror as %m is unsupported by the BSDs<br />
</li>
<li>Wordlists: Disable dictstat handling for hash-mode 3000, as it virtually creates words in the wordlist, which is not the case for other modes<br />
</li>
<li>Wordlists: Fixed memory leak in case accessing a file in a wordlist folder fails<br />
</li>
<li>WPA: Changed format for outfile and potfile from essid:mac1:mac2 to hash:mac_ap:mac_sta:essid<br />
</li>
<li>WPA: Changed format for outfile_check from essid:mac1:mac2 to hash<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v3.30]]></title>
			<link>https://hashcat.net/forum/thread-6187.html</link>
			<pubDate>Fri, 06 Jan 2017 13:34:59 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-6187.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
The refactorization of version 3.20 was so extreme it was almost impossible to not bring in a few bugs. <br />
This version 3.30 is mostly about bugfixes, but there's also some new features and a new hash-mode.<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>Files: Use &#36;HEX[...] in case the password includes the separater character, increases potfile reading performance<br />
</li>
<li>Files: If the user specifies a folder to scan for wordlists instead of a wordlist directly, ignore the hidden files<br />
</li>
<li>Loopback: Include passwords for removed hashes present in the potfile in the next loopback iteration<br />
</li>
<li>New option --progress-only: Quickly provides ideal progress step size and time to process based on the user options, then quit<br />
</li>
<li>Status screen: Reenabled automatic status screen display in case of stdin used<br />
</li>
<li>Truecrypt/Veracrypt: Use CRC32 to verify headers instead of fuzzy logic, greatly reduces false positives from 18:2^48 to 3:2^64<br />
</li>
<li>WPA cracking: Reuse PBKDF2 intermediate keys if duplicate essid is detected<br />
</li>
</ul>
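The &#36;HEX[...] wrapper mentioned above can be handled on the consuming side with a short decoder. This is an illustrative sketch, not hashcat source; it assumes the wrapper encloses the raw password bytes as lowercase hex.

```python
import binascii

# Sketch: reading a potfile plaintext field that may use the
# $HEX[...] wrapper (used when the password contains the separator).
def decode_potfile_plain(field):
    if field.startswith("$HEX[") and field.endswith("]"):
        return binascii.unhexlify(field[5:-1])
    return field.encode()
```

Because the wrapper is unambiguous, a potfile reader can split each line on the first separator without escaping logic, which is where the performance gain noted above comes from.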
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 1300 = SHA-224<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed buffer overflow in status screen display in case of long non-utf8 string<br />
</li>
<li>Fixed buffer overflow in plaintext parsing code: Leading to segfault<br />
</li>
<li>Fixed custom char parsing code in maskfiles in --increment mode: Custom charset wasn't used<br />
</li>
<li>Fixed display screen to show input queue when using custom charset or rules<br />
</li>
<li>Fixed double fclose() using AMDGPU-Pro on sysfs compatible platform: Leading to segfault<br />
</li>
<li>Fixed hash-mode 11400 = SIP digest authentication (MD5): Cracking of hashes which did not include *auth* or *auth-int* was broken<br />
</li>
<li>Fixed hex output of plaintext in case --outfile-format 4, 5, 6 or 7 was used<br />
</li>
<li>Fixed infinite loop when using --loopback in case all hashes have been cracked<br />
</li>
<li>Fixed kernel loops in --increment mode leading to slower performance<br />
</li>
<li>Fixed mask length check in hybrid attack-modes: Do not include hash-mode dependent mask length checks<br />
</li>
<li>Fixed parsing of hashes in case the last line did not include a linefeed character<br />
</li>
<li>Fixed potfile loading to accept blank passwords<br />
</li>
<li>Fixed runtime limit: Sampling the startup time is no longer required after the refactorization<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Workarounds added:<br />
<ul class="mycode_list"><li>Workaround added for Intel OpenCL runtime: GPU support is broken, skip the device unless user forces to enable it<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: Added hashcat32.dll and hashcat64.dll makefile targets for building hashcat windows libraries<br />
</li>
<li>Building: Added production flag in Makefile to disable all the GCC compiler options needed only for development<br />
</li>
<li>Building: Removed access to readlink() on FreeBSD<br />
</li>
<li>Building: For CYGWIN prefer to use "opencl.dll" (installed by drivers) instead of optional "cygOpenCL-1.dll"<br />
</li>
<li>Events: Added new event EVENT_WEAK_HASH_ALL_CRACKED if all hashes have been cracked during weak hash check<br />
</li>
<li>Hardware management: Switched matching ADL device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Switched matching NvAPI device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Switched matching NVML device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Switched matching xnvctrl device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Removed *throttled* message from NVML as this created more confusion than it helped<br />
</li>
<li>Hash Parser: Improved error detection of invalid hex characters where hex characters are expected<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-Pro driver version check, do warn if version 16.50 is detected which is known to be broken<br />
</li>
<li>OpenCL Runtime: Updated hashcat.hctune for Iris Pro GPU on OSX<br />
</li>
<li>Potfile: The default potfile suffix changed, but the note about it was missing: "hashcat.pot" became "hashcat.potfile"<br />
</li>
<li>Potfile: Added old potfile detection, show warning message<br />
</li>
<li>Returncode: Added dedicated returncode (see docs/status_codes.txt) for shutdowns caused by --runtime and checkpoint keypress<br />
</li>
<li>Sanity: Added sanity check to disallow --speed-only in combination with -i<br />
</li>
<li>Sanity: Added sanity check to disallow --loopback in combination with --runtime<br />
</li>
<li>Threads: Replaced all calls to ctime() with ctime_r() to ensure thread safety<br />
</li>
<li>Threads: Replaced all calls to strerror() with %m printf() GNU extension to ensure thread safety<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
The refactorization of version 3.20 was so extreme it was almost impossible to not bring in a few bugs. <br />
This version 3.30 is mostly about bugfixes, but there's also some new features and a new hash-mode.<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>Files: Use &#36;HEX[...] in case the password includes the separater character, increases potfile reading performance<br />
</li>
<li>Files: If the user specifies a folder to scan for wordlists instead of a wordlist directly, ignore the hidden files<br />
</li>
<li>Loopback: Include passwords for removed hashes present in the potfile in the next loopback iteration<br />
</li>
<li>New option --progress-only: Quickly provides ideal progress step size and time to process based on the user options, then quit<br />
</li>
<li>Status screen: Reenabled automatic status screen display in case of stdin used<br />
</li>
<li>Truecrypt/Veracrypt: Use CRC32 to verify headers instead of fuzzy logic, greatly reduces false positives from 18:2^48 to 3:2^64<br />
</li>
<li>WPA cracking: Reuse PBKDF2 intermediate keys if duplicate essid is detected<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 1300 = SHA-224<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Fixed buffer overflow in status screen display in case of long non-utf8 string<br />
</li>
<li>Fixed buffer overflow in plaintext parsing code: Leading to segfault<br />
</li>
<li>Fixed custom char parsing code in maskfiles in --increment mode: Custom charset wasn't used<br />
</li>
<li>Fixed display screen to show input queue when using custom charset or rules<br />
</li>
<li>Fixed double fclose() using AMDGPU-Pro on sysfs compatible platform: Leading to segfault<br />
</li>
<li>Fixed hash-mode 11400 = SIP digest authentication (MD5): Cracking of hashes which did not include *auth* or *auth-int* was broken<br />
</li>
<li>Fixed hex output of plaintext in case --outfile-format 4, 5, 6 or 7 was used<br />
</li>
<li>Fixed infinite loop when using --loopback in case all hashes have been cracked<br />
</li>
<li>Fixed kernel loops in --increment mode leading to slower performance<br />
</li>
<li>Fixed mask length check in hybrid attack-modes: Do not include hash-mode dependent mask length checks<br />
</li>
<li>Fixed parsing of hashes in case the last line did not include a linefeed character<br />
</li>
<li>Fixed potfile loading to accept blank passwords<br />
</li>
<li>Fixed runtime limit: Sampling the startup time is no longer required after the refactorization<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Workarounds added:<br />
<ul class="mycode_list"><li>Workaround added for Intel OpenCL runtime: GPU support is broken, skip the device unless user forces to enable it<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Building: Added hashcat32.dll and hashcat64.dll makefile targets for building hashcat windows libraries<br />
</li>
<li>Building: Added production flag in Makefile to disable all the GCC compiler options needed only for development<br />
</li>
<li>Building: Removed access to readlink() on FreeBSD<br />
</li>
<li>Building: For CYGWIN prefer to use "opencl.dll" (installed by drivers) instead of optional "cygOpenCL-1.dll"<br />
</li>
<li>Events: Added new event EVENT_WEAK_HASH_ALL_CRACKED if all hashes have been cracked during weak hash check<br />
</li>
<li>Hardware management: Switched matching ADL device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Switched matching NvAPI device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Switched matching NVML device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Switched matching xnvctrl device with OpenCL device by using PCI bus, device and function<br />
</li>
<li>Hardware management: Removed *throttled* message from NVML as this created more confusion than it helped<br />
</li>
<li>Hash Parser: Improved error detection of invalid hex characters where hex characters are expected<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-Pro driver version check, do warn if version 16.50 is detected which is known to be broken<br />
</li>
<li>OpenCL Runtime: Updated hashcat.hctune for Iris Pro GPU on OSX<br />
</li>
<li>Potfile: The default potfile suffix changed, but the note about it was missing: "hashcat.pot" became "hashcat.potfile"<br />
</li>
<li>Potfile: Added old potfile detection, show warning message<br />
</li>
<li>Returncode: Added dedicated returncode (see docs/status_codes.txt) for shutdowns caused by --runtime and checkpoint keypress<br />
</li>
<li>Sanity: Added sanity check to disallow --speed-only in combination with -i<br />
</li>
<li>Sanity: Added sanity check to disallow --loopback in combination with --runtime<br />
</li>
<li>Threads: Replaced all calls to ctime() with ctime_r() to ensure thread safety<br />
</li>
<li>Threads: Replaced all calls to strerror() with %m printf() GNU extension to ensure thread safety<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[hashcat v3.20]]></title>
			<link>https://hashcat.net/forum/thread-6085.html</link>
			<pubDate>Fri, 02 Dec 2016 14:34:19 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://hashcat.net/forum/member.php?action=profile&uid=1">atom</a>]]></dc:creator>
			<guid isPermaLink="false">https://hashcat.net/forum/thread-6085.html</guid>
			<description><![CDATA[<hr class="mycode_hr" />
<br />
The hashcat core was completely refactored to be an MT-safe library (libhashcat).<br />
The goal was to help developers include hashcat into distributed clients or GUI frontends.<br />
The CLI (hashcat.bin or hashcat.exe) works as before but from a technical perspective it's a library frontend.<br />
<br />
There's also new features, new hash-modes, many bugfixes and performance improvements.<br />
<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>New option --speed-only: Quickly provides cracking speed per device based on the user hashes and selected options, then quit<br />
</li>
<li>New option --keep-guessing: Continue cracking hashes even after they have been cracked (to find collisions)<br />
</li>
<li>New option --restore-file-path: Manually override the path to the restore file (useful if we want all session files in the same folder)<br />
</li>
<li>New option --opencl-info: Show details about OpenCL compatible devices like an embedded clinfo tool (useful for bug reports)<br />
</li>
<li>Documents: Added colors for warnings (yellow) and errors (red) instead of WARNING: and ERROR: prefix<br />
</li>
<li>Documents: Added hints presented to the user about optimizing performance while hashcat is running<br />
</li>
<li>Hardware management: Support --gpu-temp-retain for AMDGPU-Pro driver<br />
</li>
<li>Hardware management: Support --powertune-enable for AMDGPU-Pro driver<br />
</li>
<li>Password candidates: Allow words of length &gt; 31 in wordlists for -a 0 for some slow hashes if no rules are in use<br />
</li>
<li>Password candidates: Do not use &#36;HEX[] if the password candidate is a valid UTF-8 string and print out as-is<br />
</li>
<li>Pause mode: Allow quit program also if in pause mode<br />
</li>
<li>Pause mode: Ignore runtime limit in pause mode<br />
</li>
<li>Status view: Show core-clock, memory-clock and execution time in benchmark-mode in case --machine-readable is activated<br />
</li>
<li>Status view: Show temperature, coreclock, memoryclock, fanspeed and pci-lanes for devices using AMDGPU-Pro driver<br />
</li>
<li>Status view: Show the current first and last password candidate test queued for execution per device (as in JtR)<br />
</li>
<li>Status view: Show the current position in the queue for both base and modifier (Example: Wordlist 2/5)<br />
</li>
<li>Markov statistics: Update hashcat.hcstat which is used as reference whenever the user defines a mask<br />
</li>
<li>Charsets: Added lowercase ascii hex (?h) and uppercase ascii hex (?H) as predefined charsets<br />
</li>
</ul>
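The new ?h and ?H charsets fit into mask keyspace computation as one more factor per position. Below is a minimal sketch of how a mask's keyspace is derived from the predefined charset sizes; it is an illustration, not hashcat's implementation (custom charsets, --increment, and literal escaping are omitted).

```python
# Sizes of hashcat's predefined charsets, including the new ?h / ?H.
CHARSETS = {
    "l": 26, "u": 26, "d": 10, "s": 33, "a": 95,
    "h": 16,  # lowercase ascii hex: 0123456789abcdef
    "H": 16,  # uppercase ascii hex: 0123456789ABCDEF
}

def mask_keyspace(mask):
    total, i = 1, 0
    while i < len(mask):
        if mask[i] == "?":
            total *= CHARSETS[mask[i + 1]]
            i += 2
        else:          # a literal character contributes a factor of 1
            i += 1
    return total
```

For example, a 32-character hex mask of ?h positions has a keyspace of 16^32, which is why the overflow checks on user-provided masks mentioned elsewhere in these notes matter.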
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 14000 = DES (PT = &#36;salt, key = &#36;pass)<br />
</li>
<li>Added hash-mode 14100 = 3DES (PT = &#36;salt, key = &#36;pass)<br />
</li>
<li>Added hash-mode 14400 = SHA1(CX)<br />
</li>
<li>Added hash-mode 99999 = Plaintext<br />
</li>
<li>Extended hash-mode 3200 = bcrypt: Accept signature &#36;2b&#36; (February 2014)<br />
</li>
<li>Improved hash-mode 8300 = DNSSEC: Additional parsing error detection<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Custom charset from file parsing code did not return an error if an error occurred<br />
</li>
<li>Fixed a clSetKernelArg() size error that caused slow modes to stop working in -a 1 mode<br />
</li>
<li>Hash-mode 11600 = (7-Zip): Depending on input hash, a clEnqueueReadBuffer(): CL_INVALID_VALUE error occurred<br />
</li>
<li>Hash-mode 22 = Juniper Netscreen/SSG (ScreenOS): Fix salt length for -m 22 in benchmark mode<br />
</li>
<li>Hash-Mode 5500 = NetNTLMv1 + ESS: Fix loading of NetNTLMv1 + SSP hash<br />
</li>
<li>Hash-mode 6000 = RipeMD160: Fix typo in array index number<br />
</li>
<li>If cracking a hash-mode using unicode passwords, the length check of a mask was not taken into account<br />
</li>
<li>If cracking a large salted hashlist the wordlist reject code was too slow to handle it, leading to 0H/s<br />
</li>
<li>Null-pointer dereference in outfile-check shutdown code when using --outfile-check-dir, leading to segfault<br />
</li>
<li>On startup hashcat tried to access the folder defined in INSTALL_FOLDER, leading to segfault if that folder did not exist<br />
</li>
<li>Random rules generator code used an invalid parameter for the memory copy function (M), leading to use of an invalid rule<br />
</li>
<li>Sanity check for --outfile-format was broken if used in combination with --show or --left<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Workarounds added:<br />
<ul class="mycode_list"><li>Workaround added for AMDGPU-Pro OpenCL runtime: Failed to compile hash-mode 10700 = PDF 1.7 Level 8<br />
</li>
<li>Workaround added for AMDGPU-Pro OpenCL runtime: Failed to compile hash-mode 1800 = sha512crypt<br />
</li>
<li>Workaround added for NVidia OpenCL runtime: Failed to compile hash-mode 6400 = AIX {ssha256}<br />
</li>
<li>Workaround added for NVidia OpenCL runtime: Failed to compile hash-mode 6800 = Lastpass + Lastpass sniffed<br />
</li>
<li>Workaround added for OSX OpenCL runtime: Failed to compile hash-mode 10420 = PDF 1.1 - 1.3 (Acrobat 2 - 4)<br />
</li>
<li>Workaround added for OSX OpenCL runtime: Failed to compile hash-mode 1100 = Domain Cached Credentials (DCC), MS Cache<br />
</li>
<li>Workaround added for OSX OpenCL runtime: Failed to compile hash-mode 13800 = Windows 8+ phone PIN/Password<br />
</li>
<li>Workaround added for pocl OpenCL runtime: Failed to compile hash-mode 5800 = Android PIN<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improved performance:<br />
<ul class="mycode_list"><li>Improved performance for rule-based attacks for _very_ fast hashes like MD5 and NTLM by 30% or higher<br />
</li>
<li>Improved performance for DEScrypt on AMD, from 373MH/s to 525MH/s<br />
</li>
<li>Improved performance for raw DES-based algorithms (like LM) on AMD, from 1.6GH/s to 12.5GH/s<br />
</li>
<li>Improved performance for raw SHA256-based algorithms using meet-in-the-middle optimization, reduces 7/64 steps<br />
</li>
<li>Improved performance for SAP CODVN B (BCODE) and F/G (PASSCODE) due to register handling optimization up to 25%<br />
</li>
<li>Improved performance by reducing maximum number of allowed function calls per rule from 255 to 31<br />
</li>
<li>Improved performance by updating the selection of when to use #pragma unroll depending on the OpenCL runtime vendor<br />
</li>
</ul>
Full performance comparison sheet v3.10 vs. v3.20: <a href="https://docs.google.com/spreadsheets/d/1B1S_t1Z0KsqByH3pNkYUM-RCFMu860nlfSsYEqOoqco/edit#gid=1591672380" target="_blank" rel="noopener" class="mycode_url">here</a><br />
<br />
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Autotune: Do not run any caching rounds in autotune in DEBUG mode if -n and -u are specified<br />
</li>
<li>Bash completion: Removed some v2.01 leftovers in the bash completion configuration<br />
</li>
<li>Benchmark: Do not control fan speed in benchmark mode<br />
</li>
<li>Benchmark: On OSX, some hash-modes can't compile because of the OSX OpenCL runtime; skip them and move on to the next<br />
</li>
<li>Building: Added Makefile target "main_shared", a small how-to-use libhashcat example<br />
</li>
<li>Building: Added many additional compiler warning flags in Makefile to improve static code error detection<br />
</li>
<li>Building: Added missing includes for FreeBSD<br />
</li>
<li>Building: Added some types for windows only in case _BASETSD_H was not set<br />
</li>
<li>Building: Changed Makefile to strip symbols in the linker instead of the compiler<br />
</li>
<li>Building: Defined NOMINMAX macro to prevent the definition of min and max macros in stdlib header files<br />
</li>
<li>Building: Enabled ASLR and DEP for Windows builds<br />
</li>
<li>Building: Fixed almost all errors reported by cppcheck and scan-build<br />
</li>
<li>Building: On OSX, move '-framework OpenCL' from CFLAGS to LDFLAGS<br />
</li>
<li>Building: On OSX, use clang as default compiler<br />
</li>
<li>Building: Support building on Msys2 environment<br />
</li>
<li>Building: Use .gitmodules to simplify the OpenCL header dependency handling process<br />
</li>
<li>Charsets: Added DES_full.charset<br />
</li>
<li>Data Types: Replaced all integer macros with enumerator types<br />
</li>
<li>Data Types: Replaced all integer variables with true bool variables where they are used as booleans<br />
</li>
<li>Data Types: Replaced all string macros with static const char types<br />
</li>
<li>Data Types: Replaced all uint and uint32_t with u32<br />
</li>
<li>Data Types: Replaced atoi() with atoll(), eliminating sign conversion warnings<br />
</li>
<li>Documents: Added docs/credits.txt<br />
</li>
<li>Documents: Added docs/team.txt<br />
</li>
<li>Documents: Changed rules.txt to match v3.20 limitations<br />
</li>
<li>Error handling (file handling): Fixed a couple of filepointer leaks<br />
</li>
<li>Error handling (format strings): Fixed a few printf() formats, ex: use %u instead of %d for uint32_t<br />
</li>
<li>Error handling (memory allocation): Removed memory allocation checks, just print to stderr instead<br />
</li>
<li>Error handling (startup): Added some missing returncode checks to get_exec_path()<br />
</li>
<li>Fanspeed: Check both fanpolicy and fanspeed returncode and disable retain support if any of them fail<br />
</li>
<li>Fanspeed: Minimum fanspeed for retain support increased to 33%, same as NV uses as default on windows<br />
</li>
<li>Fanspeed: Reset PID controller settings to what they were initially<br />
</li>
<li>Fanspeed: Set fan speed to default on quit<br />
</li>
<li>File handling: Do a single write test (for files to be written later) directly on startup<br />
</li>
<li>File locking: Use same locking mechanism in potfile as in outfile<br />
</li>
<li>Hardware management: Fixed calling conventions for ADL, NvAPI and NVML on windows<br />
</li>
<li>Hardware management: Improved checking for successful load of the NVML API<br />
</li>
<li>Hardware management: In case fanspeed can not be set, disable --gpu-temp-retain automatically<br />
</li>
<li>Hardware management: In case of initialization error show it only once to the user on startup<br />
</li>
<li>Hardware management: Refactored all code to return a returncode (0 or -1) instead of data for easier error handling<br />
</li>
<li>Hardware management: Refactored macros to real functions<br />
</li>
<li>Hardware management: Removed kernel exec timeout detection on NVIDIA, should no longer occur due to autotune<br />
</li>
<li>Hardware management: Replaced NVML registry function macros with their ASCII versions (adds NVML support for XP)<br />
</li>
<li>Hashlist loading: Do not load data from hashfile if hashfile changed during runtime<br />
</li>
<li>Kernel cache: Fixed checksum building on oversized device version or driver version strings<br />
</li>
<li>Logging: Improved variable names in hashcat.log<br />
</li>
<li>Loopback: Refactored --loopback support completely, no longer a recursive function<br />
</li>
<li>Memory management: Fixed some memory leaks on shutdown<br />
</li>
<li>Memory management: Got rid of all global variables<br />
</li>
<li>Memory management: Got rid of local_free() and global_free(), no longer required<br />
</li>
<li>Memory management: Refactored all variables with HCBUFSIZ_LARGE size from stack to heap, since OSX doesn't handle large stack allocations well<br />
</li>
<li>OpenCL Headers: Select OpenCL headers tagged for OpenCL 1.2, since we use -cl-std=CL1.2<br />
</li>
<li>OpenCL Kernels: Added const qualifier to variable declaration of matching global memory objects<br />
</li>
<li>OpenCL Kernels: Got rid of one global kernel_threads variable<br />
</li>
<li>OpenCL Kernels: Moved OpenCL requirement from v1.1 to v1.2<br />
</li>
<li>OpenCL Kernels: Recognize reqd_work_group_size() values from OpenCL kernels and use them in the host if possible<br />
</li>
<li>OpenCL Kernels: Refactored common function append_0x01()<br />
</li>
<li>OpenCL Kernels: Refactored common function append_0x02()<br />
</li>
<li>OpenCL Kernels: Refactored common function append_0x80()<br />
</li>
<li>OpenCL Kernels: Refactored rule function append_block1()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_delete_last()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_dupechar_last()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_rotate_left()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_rotate_right()<br />
</li>
<li>OpenCL Kernels: Support mixed kernel thread count for mixed kernels in the same source file<br />
</li>
<li>OpenCL Kernels: Switch from clz() to ffz() for bitsliced algorithms<br />
</li>
<li>OpenCL Kernels: Using platform vendor name is better than using device vendor name for function detection<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-Pro and AMD Radeon driver version check<br />
</li>
<li>OpenCL Runtime: Updated Intel OpenCL runtime version check<br />
</li>
<li>OpenCL Runtime: Updated NVIDIA driver version check<br />
</li>
<li>Password candidates: The maximum word length in a wordlist is 31 not 32, because 0x80 will eventually be appended<br />
</li>
<li>Potfile: Base logic switched; Assuming the potfile is larger than the hashlist it's better to load hashlist instead of potfile entries<br />
</li>
<li>Potfile: In case all hashes were cracked using the potfile, abort and inform the user<br />
</li>
<li>Restore: Automatically unlink restore file if all hashes have been cracked<br />
</li>
<li>Restore: Do not unlink restore file if restore is disabled<br />
</li>
<li>Rules: Refactored macros to real functions<br />
</li>
<li>Status: Added Input.Queue.Base and Input.Queue.Mod to help the user better understand this concept<br />
</li>
<li>Status: Do not wait for the progress mutex to read and store speed timer<br />
</li>
<li>Status: Do not show Recovered/Time when cracking &lt; 1000 hashes<br />
</li>
<li>Status: Do not show Recovered/Time as floats but as integers to reduce over-information<br />
</li>
<li>Tests: Removed rules_test/ subproject: it would require a total rewrite and has not been used in a long time<br />
</li>
<li>Threads: Replaced all calls to getpwuid() with getpwuid_r() to ensure thread safety<br />
</li>
<li>Threads: Replaced all calls to gmtime() with gmtime_r() to ensure thread safety<br />
</li>
<li>Threads: Replaced all calls to strtok() with strtok_r() to ensure thread safety<br />
</li>
<li>Wordlists: Use larger counter variable to handle larger wordlists (that is &gt; 2^32 words)<br />
</li>
<li>X11: Detect missing Coolbits and added help text telling the user how to fix it<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></description>
			<content:encoded><![CDATA[<hr class="mycode_hr" />
<br />
The hashcat core was completely refactored to be a MT-safe library (libhashcat).<br />
The goal was to help developers include hashcat into distributed clients or GUI frontends.<br />
The CLI (hashcat.bin or hashcat.exe) works as before but from a technical perspective it's a library frontend.<br />
<br />
There are also new features, new hash-modes, many bugfixes and performance improvements.<br />
<br />
I recommend upgrading even if you did not face any errors with older versions.<br />
<br />
Thanks to everyone who contributed to this release!!!<br />
<br />
<hr class="mycode_hr" />
<br />
Download here: <a href="https://hashcat.net/hashcat/" target="_blank" rel="noopener" class="mycode_url">https://hashcat.net/hashcat/</a><br />
<br />
<hr class="mycode_hr" />
<br />
Features:<br />
<ul class="mycode_list"><li>New option --speed-only: Quickly provides the cracking speed per device based on the user's hashes and selected options, then quits<br />
</li>
<li>New option --keep-guessing: Continue cracking hashes even after they have been cracked (to find collisions)<br />
</li>
<li>New option --restore-file-path: Manually override the path to the restore file (useful if we want all session files in the same folder)<br />
</li>
<li>New option --opencl-info: Show details about OpenCL compatible devices like an embedded clinfo tool (useful for bug reports)<br />
</li>
<li>Documents: Added colors for warnings (yellow) and errors (red) instead of the WARNING: and ERROR: prefixes<br />
</li>
<li>Documents: Added hints presented to the user about optimizing performance while hashcat is running<br />
</li>
<li>Hardware management: Support --gpu-temp-retain for AMDGPU-Pro driver<br />
</li>
<li>Hardware management: Support --powertune-enable for AMDGPU-Pro driver<br />
</li>
<li>Password candidates: Allow words of length &gt; 31 in wordlists for -a 0 for some slow hashes if no rules are in use<br />
</li>
<li>Password candidates: Do not use &#36;HEX[] if the password candidate is a valid UTF-8 string and print out as-is<br />
</li>
<li>Pause mode: Allow quitting the program while in pause mode<br />
</li>
<li>Pause mode: Ignore runtime limit in pause mode<br />
</li>
<li>Status view: Show core-clock, memory-clock and execution time in benchmark-mode in case --machine-readable is activated<br />
</li>
<li>Status view: Show temperature, coreclock, memoryclock, fanspeed and pci-lanes for devices using AMDGPU-Pro driver<br />
</li>
<li>Status view: Show the current first and last password candidate test queued for execution per device (as in JtR)<br />
</li>
<li>Status view: Show the current position in the queue for both base and modifier (Example: Wordlist 2/5)<br />
</li>
<li>Markov statistics: Update hashcat.hcstat which is used as reference whenever the user defines a mask<br />
</li>
<li>Charsets: Added lowercase ascii hex (?h) and uppercase ascii hex (?H) as predefined charsets<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Algorithms:<br />
<ul class="mycode_list"><li>Added hash-mode 14000 = DES (PT = &#36;salt, key = &#36;pass)<br />
</li>
<li>Added hash-mode 14100 = 3DES (PT = &#36;salt, key = &#36;pass)<br />
</li>
<li>Added hash-mode 14400 = SHA1(CX)<br />
</li>
<li>Added hash-mode 99999 = Plaintext<br />
</li>
<li>Extended hash-mode 3200 = bcrypt: Accept signature &#36;2b&#36; (February 2014)<br />
</li>
<li>Improved hash-mode 8300 = DNSSEC: Additional parsing error detection<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Fixed Bugs:<br />
<ul class="mycode_list"><li>Custom charset from file parsing code did not return an error if an error occurred<br />
</li>
<li>Fixed a clSetKernelArg() size error that caused slow modes to stop working in -a 1 mode<br />
</li>
<li>Hash-mode 11600 = (7-Zip): Depending on the input hash, a clEnqueueReadBuffer(): CL_INVALID_VALUE error occurred<br />
</li>
<li>Hash-mode 22 = Juniper Netscreen/SSG (ScreenOS): Fix salt length for -m 22 in benchmark mode<br />
</li>
<li>Hash-Mode 5500 = NetNTLMv1 + ESS: Fix loading of NetNTLMv1 + SSP hash<br />
</li>
<li>Hash-mode 6000 = RipeMD160: Fix typo in array index number<br />
</li>
<li>If cracking a hash-mode using Unicode passwords, the length check of a mask was not taken into account<br />
</li>
<li>If cracking a large salted hashlist the wordlist reject code was too slow to handle it, leading to 0H/s<br />
</li>
<li>Null-pointer dereference in outfile-check shutdown code when using --outfile-check-dir, leading to segfault<br />
</li>
<li>On startup hashcat tried to access the folder defined in INSTALL_FOLDER, leading to segfault if that folder did not exist<br />
</li>
<li>Random rules generator code used an invalid parameter for the memory copy function (M), leading to the use of an invalid rule<br />
</li>
<li>Sanity check for --outfile-format was broken if used in combination with --show or --left<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Workarounds added:<br />
<ul class="mycode_list"><li>Workaround added for AMDGPU-Pro OpenCL runtime: Failed to compile hash-mode 10700 = PDF 1.7 Level 8<br />
</li>
<li>Workaround added for AMDGPU-Pro OpenCL runtime: Failed to compile hash-mode 1800 = sha512crypt<br />
</li>
<li>Workaround added for NVidia OpenCL runtime: Failed to compile hash-mode 6400 = AIX {ssha256}<br />
</li>
<li>Workaround added for NVidia OpenCL runtime: Failed to compile hash-mode 6800 = Lastpass + Lastpass sniffed<br />
</li>
<li>Workaround added for OSX OpenCL runtime: Failed to compile hash-mode 10420 = PDF 1.1 - 1.3 (Acrobat 2 - 4)<br />
</li>
<li>Workaround added for OSX OpenCL runtime: Failed to compile hash-mode 1100 = Domain Cached Credentials (DCC), MS Cache<br />
</li>
<li>Workaround added for OSX OpenCL runtime: Failed to compile hash-mode 13800 = Windows 8+ phone PIN/Password<br />
</li>
<li>Workaround added for pocl OpenCL runtime: Failed to compile hash-mode 5800 = Android PIN<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
Improved performance:<br />
<ul class="mycode_list"><li>Improved performance for rule-based attacks for _very_ fast hashes like MD5 and NTLM by 30% or higher<br />
</li>
<li>Improved performance for DEScrypt on AMD, from 373MH/s to 525MH/s<br />
</li>
<li>Improved performance for raw DES-based algorithms (like LM) on AMD, from 1.6GH/s to 12.5GH/s<br />
</li>
<li>Improved performance for raw SHA256-based algorithms using a meet-in-the-middle optimization, eliminating 7 of the 64 steps<br />
</li>
<li>Improved performance for SAP CODVN B (BCODE) and F/G (PASSCODE) by up to 25% due to register handling optimizations<br />
</li>
<li>Improved performance by reducing maximum number of allowed function calls per rule from 255 to 31<br />
</li>
<li>Improved performance by updating the selection of when to use #pragma unroll depending on the OpenCL runtime vendor<br />
</li>
</ul>
Full performance comparison sheet v3.10 vs. v3.20: <a href="https://docs.google.com/spreadsheets/d/1B1S_t1Z0KsqByH3pNkYUM-RCFMu860nlfSsYEqOoqco/edit#gid=1591672380" target="_blank" rel="noopener" class="mycode_url">here</a><br />
<br />
<hr class="mycode_hr" />
<br />
Technical:<br />
<ul class="mycode_list"><li>Autotune: Do not run any caching rounds in autotune in DEBUG mode if -n and -u are specified<br />
</li>
<li>Bash completion: Removed some v2.01 leftovers in the bash completion configuration<br />
</li>
<li>Benchmark: Do not control fan speed in benchmark mode<br />
</li>
<li>Benchmark: On OSX, some hash-modes can't compile because of the OSX OpenCL runtime; skip them and move on to the next<br />
</li>
<li>Building: Added Makefile target "main_shared", a small how-to-use libhashcat example<br />
</li>
<li>Building: Added many additional compiler warning flags in Makefile to improve static code error detection<br />
</li>
<li>Building: Added missing includes for FreeBSD<br />
</li>
<li>Building: Added some types for windows only in case _BASETSD_H was not set<br />
</li>
<li>Building: Changed Makefile to strip symbols in the linker instead of the compiler<br />
</li>
<li>Building: Defined NOMINMAX macro to prevent the definition of min and max macros in stdlib header files<br />
</li>
<li>Building: Enabled ASLR and DEP for Windows builds<br />
</li>
<li>Building: Fixed almost all errors reported by cppcheck and scan-build<br />
</li>
<li>Building: On OSX, move '-framework OpenCL' from CFLAGS to LDFLAGS<br />
</li>
<li>Building: On OSX, use clang as default compiler<br />
</li>
<li>Building: Support building on Msys2 environment<br />
</li>
<li>Building: Use .gitmodules to simplify the OpenCL header dependency handling process<br />
</li>
<li>Charsets: Added DES_full.charset<br />
</li>
<li>Data Types: Replaced all integer macros with enumerator types<br />
</li>
<li>Data Types: Replaced all integer variables with true bool variables where they are used as booleans<br />
</li>
<li>Data Types: Replaced all string macros with static const char types<br />
</li>
<li>Data Types: Replaced all uint and uint32_t with u32<br />
</li>
<li>Data Types: Replaced atoi() with atoll(), eliminating sign conversion warnings<br />
</li>
<li>Documents: Added docs/credits.txt<br />
</li>
<li>Documents: Added docs/team.txt<br />
</li>
<li>Documents: Changed rules.txt to match v3.20 limitations<br />
</li>
<li>Error handling (file handling): Fixed a couple of filepointer leaks<br />
</li>
<li>Error handling (format strings): Fixed a few printf() formats, ex: use %u instead of %d for uint32_t<br />
</li>
<li>Error handling (memory allocation): Removed memory allocation checks, just print to stderr instead<br />
</li>
<li>Error handling (startup): Added some missing returncode checks to get_exec_path()<br />
</li>
<li>Fanspeed: Check both fanpolicy and fanspeed returncode and disable retain support if any of them fail<br />
</li>
<li>Fanspeed: Minimum fanspeed for retain support increased to 33%, same as NV uses as default on windows<br />
</li>
<li>Fanspeed: Reset PID controller settings to what they were initially<br />
</li>
<li>Fanspeed: Set fan speed to default on quit<br />
</li>
<li>File handling: Do a single write test (for files to be written later) directly on startup<br />
</li>
<li>File locking: Use same locking mechanism in potfile as in outfile<br />
</li>
<li>Hardware management: Fixed calling conventions for ADL, NvAPI and NVML on windows<br />
</li>
<li>Hardware management: Improved checking for successful load of the NVML API<br />
</li>
<li>Hardware management: In case fanspeed can not be set, disable --gpu-temp-retain automatically<br />
</li>
<li>Hardware management: In case of initialization error show it only once to the user on startup<br />
</li>
<li>Hardware management: Refactored all code to return a returncode (0 or -1) instead of data for easier error handling<br />
</li>
<li>Hardware management: Refactored macros to real functions<br />
</li>
<li>Hardware management: Removed kernel exec timeout detection on NVIDIA, should no longer occur due to autotune<br />
</li>
<li>Hardware management: Replaced NVML registry function macros with their ASCII versions (adds NVML support for XP)<br />
</li>
<li>Hashlist loading: Do not load data from hashfile if hashfile changed during runtime<br />
</li>
<li>Kernel cache: Fixed checksum building on oversized device version or driver version strings<br />
</li>
<li>Logging: Improved variable names in hashcat.log<br />
</li>
<li>Loopback: Refactored --loopback support completely, no longer a recursive function<br />
</li>
<li>Memory management: Fixed some memory leaks on shutdown<br />
</li>
<li>Memory management: Got rid of all global variables<br />
</li>
<li>Memory management: Got rid of local_free() and global_free(), no longer required<br />
</li>
<li>Memory management: Refactored all variables with HCBUFSIZ_LARGE size from stack to heap, since OSX doesn't handle large stack allocations well<br />
</li>
<li>OpenCL Headers: Select OpenCL headers tagged for OpenCL 1.2, since we use -cl-std=CL1.2<br />
</li>
<li>OpenCL Kernels: Added const qualifier to variable declaration of matching global memory objects<br />
</li>
<li>OpenCL Kernels: Got rid of one global kernel_threads variable<br />
</li>
<li>OpenCL Kernels: Moved OpenCL requirement from v1.1 to v1.2<br />
</li>
<li>OpenCL Kernels: Recognize reqd_work_group_size() values from OpenCL kernels and use them in the host if possible<br />
</li>
<li>OpenCL Kernels: Refactored common function append_0x01()<br />
</li>
<li>OpenCL Kernels: Refactored common function append_0x02()<br />
</li>
<li>OpenCL Kernels: Refactored common function append_0x80()<br />
</li>
<li>OpenCL Kernels: Refactored rule function append_block1()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_delete_last()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_dupechar_last()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_rotate_left()<br />
</li>
<li>OpenCL Kernels: Refactored rule function rule_op_mangle_rotate_right()<br />
</li>
<li>OpenCL Kernels: Support mixed kernel thread count for mixed kernels in the same source file<br />
</li>
<li>OpenCL Kernels: Switch from clz() to ffz() for bitsliced algorithms<br />
</li>
<li>OpenCL Kernels: Using platform vendor name is better than using device vendor name for function detection<br />
</li>
<li>OpenCL Runtime: Updated AMDGPU-Pro and AMD Radeon driver version check<br />
</li>
<li>OpenCL Runtime: Updated Intel OpenCL runtime version check<br />
</li>
<li>OpenCL Runtime: Updated NVIDIA driver version check<br />
</li>
<li>Password candidates: The maximum word length in a wordlist is 31 not 32, because 0x80 will eventually be appended<br />
</li>
<li>Potfile: Base logic switched; Assuming the potfile is larger than the hashlist it's better to load hashlist instead of potfile entries<br />
</li>
<li>Potfile: In case all hashes were cracked using the potfile, abort and inform the user<br />
</li>
<li>Restore: Automatically unlink restore file if all hashes have been cracked<br />
</li>
<li>Restore: Do not unlink restore file if restore is disabled<br />
</li>
<li>Rules: Refactored macros to real functions<br />
</li>
<li>Status: Added Input.Queue.Base and Input.Queue.Mod to help the user better understand this concept<br />
</li>
<li>Status: Do not wait for the progress mutex to read and store speed timer<br />
</li>
<li>Status: Do not show Recovered/Time when cracking &lt; 1000 hashes<br />
</li>
<li>Status: Do not show Recovered/Time as floats but as integers to reduce over-information<br />
</li>
<li>Tests: Removed rules_test/ subproject: it would require a total rewrite and has not been used in a long time<br />
</li>
<li>Threads: Replaced all calls to getpwuid() with getpwuid_r() to ensure thread safety<br />
</li>
<li>Threads: Replaced all calls to gmtime() with gmtime_r() to ensure thread safety<br />
</li>
<li>Threads: Replaced all calls to strtok() with strtok_r() to ensure thread safety<br />
</li>
<li>Wordlists: Use larger counter variable to handle larger wordlists (that is &gt; 2^32 words)<br />
</li>
<li>X11: Detect missing Coolbits and added help text telling the user how to fix it<br />
</li>
</ul>
<hr class="mycode_hr" />
<br />
- atom]]></content:encoded>
		</item>
	</channel>
</rss>