Talk:The SHA-3 Zoo
I'm thinking about introducing another column to the list of submissions to provide a rough, overall classification of the candidates (e.g. classical Merkle-Damgård vs. HAIFA vs. sponge vs. tree-based vs. streaming vs. ...), motivated by private messages I've received comparing the current SHA-3 Zoo with my old hash lounge.
However, finding the most appropriate category for some submissions may be a tough task; paradigms may be so distorted as to be nearly unrecognizable. Still, other candidates exhibit a much more transparent structure, and I think this information may be useful (e.g. comparing submissions that fall into distinct categories may not be as fair as comparing functions that share a high-level structure).
Would such a modification be welcome to the SHA-3 Zoo contributors?
I think this would be a lot of effort for relatively minor added value; as you observe, many candidates are likely to use "uncategorizable" modes of operation. How would one classify CubeHash? It has similarities with a sponge construction, but is not a sponge in general. Also, both MD6 and ESSENCE have a tree construction, but with different arities, parameters, etc. Finding the best precision/readability tradeoff seems difficult...
Well, I don't see it as too much effort -- for me at any rate; I'm not asking that somebody else do the hard work ☺. Rather, I think it's part of trying to understand how each submission works, and it could also suggest lines of attack (particularly where the actual functions deviate from previously analyzed constructions). Besides, in cases where the authors disagree with a tentative category, it might shed new light on those authors' original intent.
Addendum: as far as I could tell, the overall structure of the currently known proposals seems to be the following (disclaimer: I may be completely mistaken in many cases):
|Hash Function Name||Status||External Cryptanalysis||Tentative Classification|
|BLAKE||submitted||none||HAIFA/? [narrow pipe]|
|Blue Midnight Wish||submitted||yes||sponge? [wide pipe]|
|CHI||submitted||none||Merkle-Damgård/Davies-Meyer [wide pipe]|
|CRUNCH||submitted||none||Merkle-Damgård/concatenate-permute-truncate [narrow pipe]|
|CubeHash||submitted||yes||sponge [wide pipe]|
|submitted||☠||Merkle-Damgård/Miyaguchi-Preneel [narrow pipe]|
|Dynamic SHA||submitted||none||? [?]|
|Dynamic SHA2||submitted||none||? [?]|
|ESSENCE||submitted||none||Merkle tree [narrow pipe]|
|FSB||submitted||none||Merkle-Damgård/concatenate-permute-truncate [wide pipe]|
|Fugue||submitted||none||sponge? [wide pipe]|
|Grøstl||submitted||yes||sponge? Merkle-Damgård/Davies-Meyer? [wide pipe]|
|JH||submitted||none||sponge [wide pipe]|
|Keccak||submitted||none||sponge [wide pipe]|
|LANE||submitted||none||HAIFA/concatenate-permute-truncate or Damgård interleaving [narrow pipe]|
|Maraca||submitted||none||sponge? [wide pipe]|
|MD6||submitted||yes||bounded-height Merkle tree [wide pipe]|
|NaSHA||submitted||none||sponge? [narrow pipe]|
|Sarmal||submitted||yes||HAIFA/Davies-Meyer [narrow pipe]|
|submitted||☠||Merkle-Damgård/Davies-Meyer [wide pipe]|
|SHAMATA||submitted||none||sponge [wide pipe]|
|Skein||submitted||none||Merkle-Damgård/UBI? Merkle tree? [narrow pipe]|
|Spectral Hash||submitted||yes||Merkle-Damgård/prism? [narrow pipe]|
|SWIFFTX||submitted||none||HAIFA/concatenate-permute-truncate [wide pipe]|
|Vortex||submitted||yes||Merkle-Damgård/Vortex-block? [wide pipe]|
|submitted||☠||sponge [wide pipe]|
I'm in favour of adding more info to this page. Seems like a good first shot. But surely we have to add a disclaimer to this category, saying something like "this column can never be entirely correct, as we would need almost 64 categories...".
Regarding your current categorization: why not distinguish designs that are based on a small number of permutations from designs based on a huge number of permutations (e.g. block cipher-based ones)? This seems a crucial difference to me. On the other hand, do we really want to distinguish HAIFA from Merkle-Damgård? The former is an extension of the latter. Also, how do you distinguish between sponge and streaming?
Oh, I'm definitely thinking about adding a disclaimer. Regarding HAIFA vs. MD, I wrote HAIFA when the authors explicitly state so in the documentation. I tend to call "sponge" a construction that inserts a message in "blocks" (related to the abstract design) in a "simple" way (e.g. via some block-oriented group operation), and "stream" a construction oriented toward "words" (related to popular target platforms) mixed into the state through a "complicated" operation (I admit this is rather informal to say the least); also, I again adhere to the authors' statement when they claim a design is streaming. As for permutations vs. block ciphers, I've been thinking about this... but perhaps it's better to discuss the subject privately before, so I can check my own understanding of a few concepts. And of course I'm entirely open to revising a classification if there is evidence of a mistaken prior assessment.
We can follow Orr and say that "everything is HAIFA" ;)
More seriously: more info would of course be valuable, but accurate information seems difficult (and maybe impossible) to provide in this case. All the functions are based on a compression function (whatever the designers say to sound original); the variations are then: how is the iteration performed (linear or tree)? How large is the state? How many rounds are recommended, and how many are broken? (It would be interesting to give this ratio, but often there is more than one "round" parameter; see e.g. CubeHash.) Are there additional inputs (salt, key, counter, etc.)?
The iteration mode seems to be linear in most of the submissions, so providing this info may not be that useful. However, it could be interesting and easy to add a column "state bitsize". If we want to say how many rounds are broken, we run into the same problem as with the "external cryptanalysis" column: what counts as "broken"?
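For what it's worth, the "objective" fields discussed above (iteration mode, state size, recommended vs. broken rounds, extra inputs) could be collected in a tiny record like the following sketch. This is purely illustrative: the field names and the example entry are my own assumptions, not actual Zoo data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubmissionInfo:
    """Illustrative record of objective, non-categorizing attributes."""
    name: str
    iteration: str                    # "linear" or "tree"
    state_bits: int                   # internal state size in bits
    rounds_recommended: Optional[int] = None
    rounds_broken: Optional[int] = None
    extra_inputs: tuple = ()          # e.g. ("salt", "counter")

    def broken_ratio(self) -> Optional[float]:
        """Fraction of the recommended rounds that has been broken."""
        if self.rounds_recommended and self.rounds_broken is not None:
            return self.rounds_broken / self.rounds_recommended
        return None

# Hypothetical example entry (all numbers made up for illustration):
entry = SubmissionInfo("ExampleHash", iteration="linear", state_bits=512,
                       rounds_recommended=10, rounds_broken=4,
                       extra_inputs=("salt",))
print(entry.broken_ratio())  # 0.4
```

The broken/recommended ratio mentioned above falls out of such a record for free, with the caveat already noted that functions with several "round"-like parameters (CubeHash) don't fit a single ratio.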
I just wish to say that the term "sponge" sometimes seems to be applied to constructions that are not sponge functions according to the definition in our paper Sponge Functions. I have not checked all the entries marked "sponge" in the table above, but I have some doubts about whether these hash functions actually use the sponge construction. For instance, I checked JH, and it does not seem to use the sponge construction; instead, it uses MD with a compression function (built on top of a permutation). Also, RadioGatún is sometimes described as a sponge function, when it is not.
Hi, I agree with those saying that a categorization can never be exact. A possibility is to collect a list of headlines such as "Merkle-Damgård", "Sponge", "Block cipher-based", "Permutation-based" etc., and state an indication as to which degree each hash function can be said to fall into each category. As an example, we say that Grøstl is permutation-based, but as Paulo showed, it can also be seen as being block cipher-based, so on a scale from, e.g., 0-4, Grøstl may be permutation-based to a degree of 3, and block cipher-based to a degree of 1 (just an example!). It is "almost" an MD construction, but not quite, so we may say it is MD to a degree of 2 or 3. The question is whether such a categorization will be more fair, more useful, etc., than a true/false categorization.
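The degree-based idea above could be represented very simply; here is a sketch (Python) using exactly the Grøstl example values from the previous paragraph (which were themselves marked "just an example!", so none of these degrees should be read as a settled classification):

```python
# 0-4 "degree" membership per category; unlisted means degree 0.
# The Grøstl numbers are the illustrative values from the discussion.
degrees = {
    "Grøstl": {
        "permutation-based": 3,
        "block cipher-based": 1,
        "Merkle-Damgård": 2,
    },
}

def degree(fn: str, category: str) -> int:
    """Return the 0-4 degree to which fn falls into category (0 if unlisted)."""
    return degrees.get(fn, {}).get(category, 0)

print(degree("Grøstl", "permutation-based"))  # 3
print(degree("Grøstl", "sponge"))             # 0
```

A plain true/false categorization is then just the special case where every degree is 0 or 4, which makes the fairness question above easy to state: is the extra resolution worth the extra subjectivity?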
However, my personal opinion is that we should completely avoid categorizing hash functions (except in 100% objective ways such as internal state size, message block size, status in the competition, etc. - some of which you may also argue are not 100% objective). I also think we should not deem hash functions "broken" or "damaged"; we should just link to all published results and let people make up their own minds. I am assuming we did not build the SHA-3 Zoo in an attempt to influence NIST's decisions.
This is a draft for the new tables to show the analysis and complexities of each hash function. The first table is shown at the main page, the entries of the second table are only shown at the Wiki page of each hash function.
|Hash Name||Attacks on Main NIST Requirements||Attacks on other Hash Requirements||Attacks on Claims by Designer||Other External Cryptanalysis|
|Blue Midnight Wish||yes|
not in round 1, withdrawn, or conceded to be broken by the designers
|Hash Name||Status||External Cryptanalysis|
caption for main table:
|Attacks on Main NIST Requirements||In this column the best attack on collision, 2nd-preimage and preimage resistance is shown. To give a quick overview of the complexity of the best attack, the cells are labeled with different colors.|
|Attacks on other Hash Requirements||Additional requirements for a hash function not unambiguously specified by NIST yet.|
|Attacks on Claims by Designer||Some designers specify additional requirements of their hash functions. Attacks on these requirements are shown in this column since they do not contradict the NIST call.|
|Other External Cryptanalysis||This column should give an overview of which hash functions have no external cryptanalytic results yet.|
|color||Complexity of Result||Explanation|
|compr. calls < generic||The number of compression function calls is below generic attacks for collision, 2nd preimage or preimage. The complexity of the attack is very close to generic attacks and is therefore questionable.|
|compr. calls < generic - n||The number of compression function calls is below generic attacks reduced by a factor of n (hash size) for collision, 2nd preimage or preimage. Attacks in this category only count compression function calls and can be more expensive than generic attacks in terms of hardware costs. However, attacks of this type are not possible for SHA-2.|
|time*memory < generic||The time*memory product is below generic attacks for collision, 2nd preimage or preimage.|
|practical example||A practical example is given for the attack on this hash function. This is an extra category since practical examples improve the confidence in an attack.|
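To make the thresholds above concrete, here is a rough sketch (Python) of how the first two color categories could be computed from an attack's complexity. The generic bounds (2^(n/2) compression calls for a collision, 2^n for a preimage or 2nd preimage) are standard; the function and category names are my own shorthand, and the "time*memory" and "practical example" rows are not modeled.

```python
import math

def generic_log2(attack: str, n: int) -> float:
    """log2 of the generic attack complexity for an n-bit hash."""
    return n / 2 if attack == "collision" else float(n)

def classify(attack: str, n: int, log2_calls: float) -> str:
    """Map an attack, given as log2 of its compression calls, to a color row."""
    g = generic_log2(attack, n)
    if log2_calls >= g:
        return "no better than generic"
    if log2_calls < g - math.log2(n):  # below generic reduced by factor n
        return "compr. calls < generic - n"
    return "compr. calls < generic"

# Hypothetical attacks on a 256-bit hash (generic collision bound: 2^128):
print(classify("collision", 256, 125))  # compr. calls < generic
print(classify("collision", 256, 100))  # compr. calls < generic - n
```

The first category is exactly the "questionable" zone the table describes: between generic/n and generic, an attack beats the generic bound on paper while possibly losing to it in hardware cost.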
|Hash Function Name||Type of Analysis||Hash Function Part||Hash Size (n)||Parameters/Variants||Compression Function Calls||Memory Requirements||Reference|
|Blender||semi-free-start collision||hash||all||-||-||Xu|
|Blue Midnight Wish||near collision||hash||all||example||-||Thomsen|
|Dynamic SHA||length extension||hash||all||-||-||Klima|
|Dynamic SHA2||length extension||hash||all||-||-||Klima|
|Hash 2X||2nd preimage||hash||example||-||Aumasson|
|JH||pseudo 2nd preimage||compression||all||-||-||Bagheri|
|LUX||collision||reduced hash||224||3 blank rounds||-||-||Wu,Feng,Wu|
|LUX||near collision||reduced hash||256||3 blank rounds||-||-||Wu,Feng,Wu|
|LUX||slide attack||hash||all||salt size: 31 mod 32||-||-||Peyrin|
|Maraca||internal collision||internal state||512||2^237||2^230.5||Canteaut,Naya-Plasencia|
|MD6||non-randomness||reduced compression||18 rounds||?||?||Aumasson,Meier|
|MD6||key-recovery||reduced compression||15 rounds||?||?||Dinur,Shamir|
|Sarmal||preimage (salt size s)||hash||512||max(2^(512-s), 2^(256+s))||2^s||Nikolić|
|Sarmal||collision with salt||hash||224,256,384||2^(n/3)||2^(n/3)||Mendel,Schläffer|
|SpectralHash||near collision||hash||224,512||reference impl.||example||-||Enright|
|SpectralHash||truncated collision||hash||512||reference impl.||example||-||Enright|
|Tangle||collision||hash||all||example, 2^13 - 2^28||-||Thomsen|
caption for individual tables:
A dash (-) in the individual table means that the complexities are negligible. A question mark (?) means the information is not given or is unclear.
The "Parameters/Variants" column gives the parameters for attacks on reduced variants. If the column is empty, the attack is on the recommended parameters of the designers.
The "Type of Analysis" cell is left white (uncolored) if the attack is on reduced variants or parts of the hash function.
This looks fine to me. The only editorial aspect I'm a bit unsure of is the inclusion of rejected submissions in the same table; they only reduce the S/N ratio, since they don't contribute anything to the ongoing SHA-3 process (and hence are not likely to receive any further attention, at least until the competition is over). I suggest moving them to an appendix table.