How not to salt a hash
Yesterday I sent two tweets about an exploitable issue in the ColdFusion 11 password hashes.
It turns out this hash scheme was already in use in ColdFusion 10 :) However, I've been asked about the details of the issue and I thought it would be better to explain it in a forum post. This is a problem that many application developers could run into, and maybe they can learn from it. It's a deep, implementation-specific issue that (AFAIK) no crypto library, which typically also handles hashes, protects you from.

The result is that we end up with an A:D (Attacker:Defender) ratio that is worse than 1:1, worse for the defender that is. It means an attacker can compute a hash with fewer computation steps/instructions than the defender needs. A good password hash (or KDF) should be designed to not be optimizable (or at least not optimizable only for the attacker) and ideally stay as close as possible to 1:1.

Now, to the details. CF10+ uses the following hashing scheme:

Code:
sha256($salt.sha1($pass))

Typically we see such schemes when developers want to harden the security of an already in-use password scheme. To avoid forcing the users to re-enter their passwords, they simply use the hash result of the weaker scheme as input for the newer, more secure scheme. In this example, I'd assume the password was previously stored as a raw, unsalted sha1 hash. When the CF developers realized it's better to salt a password hash, they changed the scheme, and while they were at it they also switched to sha2, which is known to be more secure than sha1.
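
For illustration, here's the scheme as a minimal Python sketch (using hashlib). Note two assumptions on my side: the salt format (any 64 bytes behave the same) and that the inner sha1 is hex-encoded before being fed to sha256, which matches the 40-byte layout discussed below:

Code:
import hashlib
import os

def cf10_style_hash(password: bytes, salt: bytes) -> str:
    # Old scheme: raw, unsalted sha1 of the password, hex-encoded (40 bytes).
    inner = hashlib.sha1(password).hexdigest().encode()
    # New scheme: sha256 over the 64-byte salt followed by the old hash.
    return hashlib.sha256(salt + inner).hexdigest()

salt = os.urandom(32).hex().encode()  # 64 bytes, matching CF10's fixed salt length
print(cf10_style_hash(b"secret", salt))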

Such hash-in-hash constructs are OK, but only if you do them carefully. In this case it's not, as the attacker can now optimize the sha256 transformation and remove a few ADDs inside the sha256 transform function, because bytes 41-60 are always set to 0 for the reasons shown below. If you want to learn more about these zero-based optimizations, see:

https://hashcat.net/events/p13/js-ocohaaaa.pdf
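
To see where those zero bytes come from, here's the layout of the second 64-byte sha256 input block under this scheme (0-based offsets within the block; this assumes the hex-encoded inner sha1 from the sketch above):

Code:
# offsets  0-39 : hex-encoded sha1($pass)   <- the only variable bytes
# offset     40 : 0x80                      <- padding marker
# offsets 41-55 : 0x00                      <- padding
# offsets 56-63 : total bit length, 832 = 0x0340, so mostly 0x00 as well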

However, this is not the real problem here. To understand the problem we first need to understand how hashes work on a very low level. Besides being optimized to execute very fast, they were also designed to handle large files, or generally data of unknown length, for example to compute a checksum of that data. To do that, most hashes work block-wise. They have a fixed-size buffer into which they copy chunks of the data, typically 64 bytes for a 32-bit algorithm (md5, sha1, sha256) and 128 bytes for a 64-bit based algorithm (sha512). So how can they produce a checksum for a file > 64 bytes? The answer is that they use the so-called "Merkle–Damgård construction":

https://en.wikipedia.org/wiki/Merkle%E2%80%93Damg%C3%A5rd_construction

To make this easier to understand: every hash that is computed uses a context it operates on. This context is initialized with a fixed value and is used for the first transformation, meaning the first 64 bytes of data for sha256. When the transformation finishes, its result is added to the values in the context, and these therefore become the new initial context values for the next transformation call. This is a) why you end up with the same hash value if you hash the same data twice, which is what enables you to compare hashes, and b) why, if we hashed the same buffer again in a second transformation, the result would be different, because the initial value has changed based on the previous run.
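
You can watch this chaining at work with Python's hashlib: feeding the data block by block via update() yields the same digest as hashing it in one shot, because each transformation starts from the context left behind by the previous one. A minimal sketch:

Code:
import hashlib

data = b"A" * 200  # a bit over three 64-byte sha256 blocks

one_shot = hashlib.sha256(data).hexdigest()

# Block-wise: the context is carried over between transform calls.
ctx = hashlib.sha256()
for i in range(0, len(data), 64):
    ctx.update(data[i:i + 64])

assert ctx.hexdigest() == one_shot  # same data, same chaining, same hash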

And this is where the problem with the CF10 scheme lies. Because of the following facts:
  • The salt is known to the attacker (always the case, that's OK)
  • The salt is of fixed size 64 bytes
  • The salt (or any other data known to the attacker) is at the start of the data
  • The length of the salt equals or exceeds the block size of the hash (this is it!)
It's clear that the first call of the hash transform will always produce the same context value! In our case, the salt is always of length 64, which perfectly matches the block size of sha256. So an attacker does not need to redo that calculation again and again while guessing the password. He just does it once on startup and then uses this value as the initialization value for the sha256 context. Now he needs only one call to sha256, but the defender needs two calls, and this is what lowers the A:D ratio in favour of the attacker.
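
Python's hashlib is enough to demonstrate the trick: absorb the salt once, then copy() the resulting context for every guess instead of recomputing it. This is just a sketch of the idea (a real cracker would patch the transform function itself), reusing the hex-encoding assumption from above:

Code:
import hashlib

salt = b"S" * 64  # stands in for the known, fixed 64-byte salt

# Attacker: done once at startup. The salt fills exactly one sha256 block,
# so this context value is identical for every password guess.
midstate = hashlib.sha256(salt)

def attacker_guess(guess: bytes) -> str:
    ctx = midstate.copy()  # reuse the precomputed context
    ctx.update(hashlib.sha1(guess).hexdigest().encode())
    return ctx.hexdigest()  # only one sha256 transform left per guess

def defender_hash(password: bytes) -> str:
    # Defender: pays for both sha256 transform calls, every time.
    inner = hashlib.sha1(password).hexdigest().encode()
    return hashlib.sha256(salt + inner).hexdigest()

assert attacker_guess(b"secret") == defender_hash(b"secret")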

There's also the fact that sha1($pass) is not salted. An attacker could use a precomputed set of sha1 results of his password guesses to speed up the guessing process even more. Now all he needs to do is call a single sha256 transformation, while the defender has 2 calls to sha256 plus 1 call to sha1.
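
Continuing the sketch from above: the sha1 table only has to be built once and, because it is unsalted, it can be reused against every hash that uses this scheme (the wordlist here is of course made up):

Code:
import hashlib

salt = b"S" * 64
midstate = hashlib.sha256(salt)  # salt block, absorbed once (see above)
target = hashlib.sha256(salt + hashlib.sha1(b"secret").hexdigest().encode()).hexdigest()

# Precomputed once, reusable across all targets using this scheme.
wordlist = [b"123456", b"password", b"secret"]
sha1_table = {w: hashlib.sha1(w).hexdigest().encode() for w in wordlist}

for word, inner in sha1_table.items():
    ctx = midstate.copy()  # salt context: precomputed
    ctx.update(inner)      # the only per-guess sha256 work
    if ctx.hexdigest() == target:
        print("cracked:", word)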

So what's the lesson learned from this?

A) A shorter salt (but not too short) would be a better choice. To avoid a scenario where an attacker can precompute the initialization value for the second transformation, you just need to make sure the attacker cannot know (or easily guess) the full data block of the first hash transformation call.
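
A sketch of what option A changes, assuming a 16-byte salt: the first sha256 block now mixes salt bytes with password-dependent bytes, so there is no constant context value left to precompute:

Code:
import hashlib
import os

salt = os.urandom(16)  # shorter than sha256's 64-byte block size
inner = hashlib.sha1(b"secret").hexdigest().encode()

# The first 64-byte block contains salt AND password-dependent data,
# so the attacker can no longer precompute the context after block one.
digest = hashlib.sha256(salt + inner).hexdigest()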

B) Using a raw hash as password "storage" is a bad idea. Hashes are designed to run fast, and you don't want the attacker to be able to run fast. Therefore you'd better stick to a hash that is designed to be used for passwords. At this time I'd not like to recommend one specific hash, but for the future, when the final hash from https://password-hashing.net/ is selected, that will be a good choice.
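
Purely to illustrate the difference, not as a recommendation of one specific scheme: a hash designed for passwords exposes a cost parameter that multiplies the work for attacker and defender alike, keeping A:D close to 1:1. A minimal sketch with PBKDF2 from Python's standard library:

Code:
import hashlib
import os

password = b"secret"
salt = os.urandom(16)

# The iteration count is the knob a raw hash like sha256 does not have.
dk = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)
print(dk.hex())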

--
atom

