Difference between revisions of "Talk:The SHA-3 Zoo"

From The ECRYPT Hash Function Website
m (some refinements)
Revision as of 13:06, 23 December 2008

I'm thinking about introducing another column to the list of submissions to provide a rough, overall classification of the candidates (e.g. classical Merkle-Damgaard vs. HAIFA vs. sponge vs. tree-based vs. streaming vs. ...), motivated by private messages I've got comparing the current SHA-3 Zoo with my old hash lounge.

However, finding the most appropriate category for some submissions may be a tough task; paradigms may be so distorted as to be nearly unrecognizable. Still, other candidates exhibit a much more transparent structure, and I think this information may be useful (e.g. comparing submissions that fall on distinct categories may not be as fair as comparing functions that share a high-level structure).

Would such a modification be welcome to the SHA-3 Zoo contributors?

Paulo.

I think this would be a lot of effort for relatively minor added value; as you observe, many candidates are likely to use "uncategorizable" modes of operation. How would one classify CubeHash? It has similarities with a sponge construction, but is not a sponge in general. Also, both MD6 and ESSENCE have a tree construction, but with different arities, parameters, etc. Finding the best precision/readability tradeoff seems difficult...

JP

Well, I don't see it as too much effort -- for me at any rate; I'm not asking that somebody else do the hard work ☺. Rather, I think it's part of trying to understand how each submission works, and it could also suggest lines of attack (particularly where the actual functions deviate from previously analyzed constructions). Besides, in cases where the authors disagree with a tentative category, it might shed new light on those authors' original intent.

Paulo.

Addendum: as far as I could tell, the overall structure of the currently known proposals seems to be the following (disclaimer: I may be completely mistaken in many cases):

{| border="1" cellpadding="4" cellspacing="0" align="center" class="wikitable" style="text-align:center"
|- style="background:#efefef;"
! Hash Function Name !! Status !! External Cryptanalysis !! Tentative Classification
|-
| Abacus || submitted || none || ? [?]
|-
| ARIRANG || submitted || none || ? [?]
|-
| AURORA || submitted || none || ? [?]
|-
| BLAKE || submitted || none || HAIFA/? [narrow pipe]
|-
| Blender || submitted || none || ? [?]
|-
| Blue Midnight Wish || submitted || yes || sponge? [wide pipe]
|-
| Boole || submitted || || streaming
|-
| Cheetah || submitted || none || ? [?]
|-
| CHI || submitted || none || Merkle-Damgård/Davies-Meyer [wide pipe]
|-
| CRUNCH || submitted || none || Merkle-Damgård/concatenate-permute-truncate [narrow pipe]
|-
| CubeHash || submitted || yes || sponge [wide pipe]
|-
| DCH || submitted || || Merkle-Damgård/Miyaguchi-Preneel [narrow pipe]
|-
| Dynamic SHA || submitted || none || ? [?]
|-
| Dynamic SHA2 || submitted || none || ? [?]
|-
| ECHO || submitted || none || ? [?]
|-
| ECOH || submitted || none || ? [?]
|-
| Edon-R || submitted || yes || streaming
|-
| EnRUPT || submitted || || streaming
|-
| ESSENCE || submitted || none || Merkle tree [narrow pipe]
|-
| FSB || submitted || none || Merkle-Damgård/concatenate-permute-truncate [wide pipe]
|-
| Fugue || submitted || none || sponge? [wide pipe]
|-
| Grøstl || submitted || yes || sponge? Merkle-Damgård/Davies-Meyer? [wide pipe]
|-
| Hamsi || submitted || none || ? [?]
|-
| HASH 2X || submitted || || streaming?
|-
| JH || submitted || none || sponge [wide pipe]
|-
| Keccak || submitted || none || sponge [wide pipe]
|-
| Khichidi-1 || submitted || none || ? [?]
|-
| LANE || submitted || none || HAIFA/concatenate-permute-truncate or Damgård interleaving [narrow pipe]
|-
| Lesamnta || submitted || none || ? [?]
|-
| Luffa || submitted || none || ? [?]
|-
| LUX || submitted || none || ? [?]
|-
| Maraca || submitted || none || sponge? [wide pipe]
|-
| MCSSHA-3 || submitted || || streaming
|-
| MD6 || submitted || yes || bounded-height Merkle tree [wide pipe]
|-
| MeshHash || submitted || none || ? [?]
|-
| NaSHA || submitted || none || sponge? [narrow pipe]
|-
| SANDstorm || submitted || none || ? [?]
|-
| NKS2D || submitted || || cellular automaton
|-
| Ponic || submitted || yes || streaming
|-
| Sarmal || submitted || yes || HAIFA/Davies-Meyer [narrow pipe]
|-
| Sgàil || submitted || || Merkle-Damgård/Davies-Meyer [wide pipe]
|-
| Shabal || submitted || none || ? [?]
|-
| SHAMATA || submitted || none || sponge [wide pipe]
|-
| SHAvite-3 || submitted || none || ? [?]
|-
| SIMD || submitted || none || ? [?]
|-
| Skein || submitted || none || Merkle-Damgård/UBI? Merkle tree? [narrow pipe]
|-
| Spectral Hash || submitted || yes || Merkle-Damgård/prism? [narrow pipe]
|-
| SWIFFTX || submitted || none || HAIFA/concatenate-permute-truncate [wide pipe]
|-
| Tangle || submitted || none || ? [?]
|-
| TIB3 || submitted || none || ? [?]
|-
| Twister || submitted || none || ? [?]
|-
| Vortex || submitted || yes || Merkle-Damgård/Vortex-block? [wide pipe]
|-
| WaMM || submitted || || sponge [wide pipe]
|-
| Waterfall || submitted || none || streaming
|}



I'm in favour of adding more info to this page. Seems like a good first shot. But surely we have to put a disclaimer on this category saying something like "this column can never be entirely correct as we would need almost 64 categories...".

Regarding your current categorization: why not distinguish designs that are based on a small number of permutations from designs based on a huge number of permutations (e.g. block cipher-based)? This seems a crucial difference to me. On the other hand, do we really want to distinguish HAIFA from Merkle-Damgård? The former is an extension of the latter. Also, how do you distinguish between sponge and streaming?

-Christian

Oh, I'm definitely thinking about adding a disclaimer. Regarding HAIFA vs. MD, I wrote HAIFA when the authors explicitly state so in the documentation. I tend to call "sponge" a construction that inserts a message in "blocks" (related to the abstract design) in a "simple" way (e.g. via some block-oriented group operation), and "stream" a construction oriented toward "words" (related to popular target platforms) mixed into the state through a "complicated" operation (I admit this is rather informal to say the least); also, I again adhere to the authors' statement when they claim a design is streaming. As for permutations vs. block ciphers, I've been thinking about this... but perhaps it's better to discuss the subject privately before, so I can check my own understanding of a few concepts. And of course I'm entirely open to revising a classification if there is evidence of a mistaken prior assessment.

Paulo.
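The informal sponge criterion described above -- message blocks inserted into part of the state by a simple group operation, followed by a fixed mixing step -- can be sketched as a toy construction. Everything here (constants, permutation, names) is made up for illustration; it is not any actual SHA-3 candidate.

```python
# Toy sketch of the informal "sponge" shape: message blocks are XORed into
# the outer ("rate") part of the state, then a fixed permutation mixes the
# whole state.  The permutation below is a placeholder, not a real design.

RATE = 2      # state words that absorb message input
CAPACITY = 2  # hidden state words
MASK = 0xFFFFFFFF

def toy_permutation(state):
    # Placeholder mixing step; a real design would iterate many rounds.
    a, b, c, d = state
    a = (a + c) & MASK
    b = (b ^ d) & MASK
    c = ((c << 7) | (c >> 25)) & MASK
    d = (d + a) & MASK
    return [b, c, d, a]

def toy_sponge_hash(blocks, out_words=2):
    state = [0] * (RATE + CAPACITY)
    for block in blocks:                 # absorbing phase
        for i in range(RATE):
            state[i] ^= block[i]         # "simple" block-oriented group operation
        state = toy_permutation(state)
    digest = []
    while len(digest) < out_words:       # squeezing phase
        digest.extend(state[:RATE])
        state = toy_permutation(state)
    return digest[:out_words]

print(toy_sponge_hash([[1, 2], [3, 4]]))   # -> [5, 128]
```

A "streaming" design, in the informal sense used above, would instead fold individual words into the state through a more complicated word-oriented update rather than a blockwise XOR-then-permute.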

We can follow Orr and say that "everything is HAIFA" ;)

More seriously: more info would of course be valuable, but accurate information seems difficult (and maybe impossible) to provide in this case. All the functions are based on a compression function (whatever the designers say to sound original); the variations are then: how is the iteration performed (linear or tree)? how large is the state? how many rounds are recommended and how many are broken (it would be interesting to give this ratio, but often there is more than just the "round" parameter, see e.g. CubeHash)? are there additional inputs (salt, key, counter, etc.)?

The iteration mode seems to be linear in most of the submissions, so providing this info may not be that useful. However it could be interesting and easy to add a column "state bitsize". If we want to say how many rounds are broken, we'll reduce to the same problem as we have with the "external cryptanalysis" column with "what is broken".

JP

I just wish to say that the terminology about sponge sometimes seems to spread across things that are not sponge functions according to the definition in our paper Sponge Functions. I have not checked all the entries marked "sponge" in the table above, but I have some doubts about whether these hash functions actually use the sponge construction. For instance, I checked JH and it does not seem they use the sponge construction. Instead, they use MD and a compression function (built on top of a permutation). Also, RadioGatún seems to be sometimes described as a sponge function, when it is not, see [1].

Gilles

Hi, I agree with those saying that a categorization can never be exact. A possibility is to collect a list of headlines such as "Merkle-Damgård", "Sponge", "Block cipher-based", "Permutation-based" etc., and state an indication as to which degree each hash function can be said to fall into each category. As an example, we say that Grøstl is permutation-based, but as Paulo showed, it can also be seen as being block cipher-based, so on a scale from, e.g., 0-4, Grøstl may be permutation-based to a degree of 3, and block cipher-based to a degree of 1 (just an example!). It is "almost" an MD construction, but not quite, so we may say it is MD to a degree of 2 or 3. The question is whether such a categorization will be more fair, more useful, etc., than a true/false categorization.

However, my personal opinion is that we should avoid categorizing hash functions altogether (except in 100% objective ways such as internal state size, message block size, status in the competition, etc. - some of which you may also argue are not 100% objective). I also think we should not deem hash functions "broken" or "damaged"; we should just link to all published results and let people make up their own minds. I am assuming we did not build the SHA-3 Zoo in an attempt to influence NIST's decisions.

/Søren
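The graded-category idea above (degrees 0-4 instead of true/false) can be sketched in a few lines. The Grøstl degrees below are only the illustrative numbers from the example in the comment, not an agreed assessment, and the function names are made up.

```python
# Sketch of the graded (0-4) categorization proposed above.  The Grøstl
# degrees are the illustrative values from the example, not a real verdict.

CATEGORIES = ["Block cipher-based", "Merkle-Damgård", "Permutation-based", "Sponge"]

classification = {
    "Grøstl": {"Permutation-based": 3, "Block cipher-based": 1, "Merkle-Damgård": 2},
}

def categories_of(name, threshold=2):
    # Categories a function belongs to at least to the given degree.
    degrees = classification.get(name, {})
    return [c for c in CATEGORIES if degrees.get(c, 0) >= threshold]

print(categories_of("Grøstl"))               # -> ['Merkle-Damgård', 'Permutation-based']
print(categories_of("Grøstl", threshold=1))  # also includes 'Block cipher-based'
```

Whether such graded membership is fairer than a true/false column is exactly the open question raised above; the sketch only shows that it is easy to represent.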

new tables

This is a draft for the new tables to show the analysis and complexities of each hash function. The first table is shown at the main page, the entries of the second table are only shown at the Wiki page of each hash function.

Martin


main table:

The main idea of this table is to give a quick overview of cryptanalytic results. No additional judgment is given.


{| border="1" cellpadding="4" cellspacing="0" align="center" class="wikitable" style="text-align:center"
|- style="background:#efefef;"
! Hash Name !! Attacks on Main NIST Requirements !! Attacks on other Hash Requirements !! Attacks on Claims by Designer !! Other External Cryptanalysis
|-
| Abacus || 2nd-preimage || || ||
|-
| ARIRANG || || || || none
|-
| AURORA || || || || none
|-
| BLAKE || || || || none
|-
| Blender || preimage || || ||
|-
| Blue Midnight Wish || || || || yes
|-
| Cheetah || || length-extension || ||
|-
| CHI || || || || none
|-
| CRUNCH || || || || none
|-
| CubeHash || preimage || || ||
|-
| DCH || collision || || ||
|-
| Dynamic SHA || || length-extension || ||
|-
| Dynamic SHA2 || || length-extension || ||
|-
| ECHO || || || || none
|-
| ECOH || || || || none
|-
| Edon-R || preimage || || ||
|-
| EnRUPT || collision || || ||
|-
| ESSENCE || || || || none
|-
| FSB || || || || none
|-
| Fugue || || || || none
|-
| Grøstl || || || || yes
|-
| Hamsi || || || || none
|-
| JH || preimage || || ||
|-
| Keccak || || || || none
|-
| Khichidi-1 || collision || || ||
|-
| LANE || || || || none
|-
| Lesamnta || || || || none
|-
| Luffa || || || || none
|-
| LUX || || || || yes
|-
| MCSSHA-3 || collision || || ||
|-
| MD6 || || || || yes
|-
| MeshHash || 2nd preimage || || ||
|-
| NaSHA || collision || || ||
|-
| SANDstorm || || || || none
|-
| Sarmal || || || || yes
|-
| Sgàil || collision || || ||
|-
| Shabal || || || || none
|-
| SHAMATA || || || || yes
|-
| SHAvite-3 || || || || none
|-
| SIMD || || || || none
|-
| Skein || || || || none
|-
| Spectral Hash || || || || yes
|-
| StreamHash || collision || || ||
|-
| SWIFFTX || || || || none
|-
| Tangle || collision || || ||
|-
| TIB3 || || || || none
|-
| Twister || collision || || ||
|-
| Vortex || preimage || || ||
|}


Not in round 1, withdrawn, or conceded to be broken by the designers:

{| border="1" cellpadding="4" cellspacing="0" align="center" class="wikitable" style="text-align:center"
|- style="background:#efefef;"
! Hash Name !! Status !! Cryptanalysis results
|-
| Boole || withdrawn || collision
|-
| HASH 2X || submitted || 2nd-preimage
|-
| Maraca || submitted || yes
|-
| NKS2D || submitted || collision
|-
| Ponic || submitted || 2nd-preimage
|-
| WaMM || withdrawn || collision
|-
| Waterfall || withdrawn || collision
|}


caption for main table:

{| border="1" cellpadding="4" cellspacing="0" align="center" class="wikitable" style="text-align:center"
|- style="background:#efefef;"
! Column !! Explanation
|-
| Attacks on Main NIST Requirements || This column shows the best attack on collision, 2nd-preimage and preimage resistance. To give a quick overview of the complexity of the best attack, the cells are labeled with different colors.
|-
| Attacks on other Hash Requirements || Additional requirements for a hash function that are not yet unambiguously specified by NIST.
|-
| Attacks on Claims by Designer || Some designers specify additional requirements for their hash functions. Attacks on these requirements are shown in this column since they do not contradict the NIST call.
|-
| Other External Cryptanalysis || This column gives an overview of which hash functions have no external cryptanalytic results yet.
|}


{| border="1" cellpadding="4" cellspacing="0" align="center" class="wikitable" style="text-align:center"
|- style="background:#efefef;"
! width="100"| Color !! Complexity of Result !! Explanation
|-
| style="background:greenyellow" | || compr. calls < generic || The number of compression function calls (or equivalents) is below generic attacks for collision, 2nd preimage or preimage. The complexity of the attack is very close to generic attacks and is therefore of lesser relevance.
|-
| style="background:yellow" | || compr. calls < generic - n || The number of compression function calls is below generic attacks reduced by a factor of n (hash size) for collision, 2nd preimage or preimage. Attacks in this simple model neglect memory considerations. However, attacks of this type do not exist for the SHA-2 hash functions.
|-
| style="background:orange" | || time*memory < generic || The time*memory product is below generic attacks for collision, 2nd preimage or preimage.
|-
| style="background:red" | || practical example || A practical example is given for the attack on this hash function. This is an extra category since practical examples improve the confidence in an attack.
|}
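The first two color categories compare the number of compression function calls against the generic bounds: the birthday bound of 2^(n/2) calls for collisions and 2^n for (2nd) preimages. A rough classifier for those two categories, under my reading of the caption (the function names are mine; the orange and red categories additionally need memory and practicality information, which this sketch ignores):

```python
import math

# Generic bounds, as log2 of the number of compression function calls:
# birthday bound 2^(n/2) for collisions, 2^n for (2nd) preimages.
GENERIC_EXPONENT = {
    "collision": lambda n: n / 2,
    "2nd preimage": lambda n: n,
    "preimage": lambda n: n,
}

def color(attack_type, n, log2_calls):
    # Return the table color for an attack using 2^log2_calls calls
    # against an n-bit hash, or None if it does not beat generic attacks.
    generic = GENERIC_EXPONENT[attack_type](n)
    if log2_calls >= generic:
        return None                           # no better than generic
    if log2_calls < generic - math.log2(n):   # beats generic by more than a factor n
        return "yellow"
    return "greenyellow"                      # below generic, but only barely

# Twister-512 collisions in 2^252 calls (generic: 2^256):
print(color("collision", 512, 252))   # -> greenyellow
# EnRUPT-512 preimages in 2^480 calls (generic: 2^512):
print(color("preimage", 512, 480))    # -> yellow
```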


{| border="1" cellpadding="4" cellspacing="0" align="center" class="wikitable" style="text-align:center"
|- style="background:#efefef;"
! Hash Function Name !! Type of Analysis !! Hash Function Part !! Hash Size (n) !! Parameters/Variants !! Compression Function Calls !! Memory Requirements !! Reference
|-
| Abacus || 2nd preimage || hash || || || 2<sup>344</sup> || - || Wilson
|-
| Abacus || collision || hash || || || 2<sup>172</sup> || - || Wilson
|-
| Abacus || 2nd preimage || hash || || || 2<sup>172</sup> || - || Nikolić, Khovratovich
|-
| Blender || preimage || hash || all || || N*2<sup>n/2</sup> || - || Mendel
|-
| Blender || near collision || hash || all || || example || - || Klima
|-
| Blender || semi-free-start collision || hash || all || || - || - || Xu
|-
| Blender || preimage || hash || all || || N*2<sup>(n+w)/2</sup> || - || Newbold
|-
| Blue Midnight Wish || near collision || hash || all || || example || - || Thomsen
|-
| Boole || preimage || hash || all || || 2<sup>9n/16</sup> || - || Nikolić
|-
| Boole || collision || hash || 224,256 || || example, 2<sup>34</sup> || - || Mendel, Nad, Schläffer
|-
| Boole || collision || hash || 384,512 || || 2<sup>66</sup> || - || Mendel, Nad, Schläffer
|-
| Cheetah || length extension || hash || all || || - || - || Gligoroski
|-
| CubeHash || observations || || all || 8/1 || || || Aumasson, Meier, Naya-Plasencia, Peyrin
|-
| CubeHash || preimage || hash || 512 || || 2<sup>511</sup> || 2<sup>508</sup> || Khovratovich, Nikolić, Weinmann
|-
| CubeHash || preimage || hash || 512 || r/4 || 2<sup>496</sup> || - || Khovratovich, Nikolić, Weinmann
|-
| CubeHash || preimage || hash || 512 || r/8 || 2<sup>480</sup> || - || Khovratovich, Nikolić, Weinmann
|-
| CubeHash || collision || hash || 512 || 2/120 || example || - || Aumasson
|-
| DCH || collision || hash || all || || 521 || - || Mendel, Lamberger
|-
| DCH || preimage || hash || all || || 521 || - || Mendel, Lamberger
|-
| DCH || collision || hash || all || || 2<sup>45</sup> || 2<sup>45</sup> || Khovratovich, Nikolić
|-
| DCH || preimage || hash || all || || 2<sup>45</sup> || 2<sup>45</sup> || Khovratovich, Nikolić
|-
| DCH || 2nd preimage || hash || 512 || || 2<sup>450</sup> || ? || Rechberger
|-
| Dynamic SHA || length extension || hash || all || || - || - || Klima
|-
| Dynamic SHA2 || length extension || hash || all || || - || - || Klima
|-
| Edon-R || collision || compression || || || - || - || Khovratovich, Nikolić, Weinmann
|-
| Edon-R || 2nd preimage || compression || || || - || - || Khovratovich, Nikolić, Weinmann
|-
| Edon-R || preimage || compression || || || - || - || Khovratovich, Nikolić, Weinmann
|-
| Edon-R || preimage || hash || || || 2<sup>2n/3</sup> || 2<sup>2n/3</sup> || Khovratovich, Nikolić, Weinmann
|-
| Edon-R || multicollision (2<sup>K</sup>) || hash || 256,512 || || K*2<sup>n/2</sup> || 2<sup>n/2</sup> || Klima
|-
| Edon-R || multi-preimage || hash || 256,512 || || ? || ? || Klima
|-
| EnRUPT || preimage || hash || 512 || || 2<sup>480</sup> || 2<sup>480</sup> || Khovratovich, Nikolić
|-
| EnRUPT || collision || hash || 256 || || example, 2<sup>47</sup> || - || Indesteege
|-
| Grøstl || observation || block cipher || all || || || || Barreto
|-
| Hash 2X || 2nd preimage || hash || || || example || - || Aumasson
|-
| JH || pseudo collision || compression || all || || - || - || Bagheri
|-
| JH || pseudo 2nd preimage || compression || all || || - || - || Bagheri
|-
| JH || preimage || hash || all || || 2<sup>510.3</sup> || 2<sup>510.3</sup> || Mendel, Thomsen
|-
| Khichidi-1 || collision || hash || 256 || || example || - || Mouha
|-
| LUX || collision || reduced hash || 224 || 3 blank rounds || - || - || Wu, Feng, Wu
|-
| LUX || near collision || reduced hash || 256 || 3 blank rounds || - || - || Wu, Feng, Wu
|-
| LUX || free-start collision || compression || ? || || - || - || Wu, Feng, Wu
|-
| LUX || free-start preimage || compression || ? || || 2<sup>80</sup> || - || Wu, Feng, Wu
|-
| LUX || slide attack || hash || all || salt size: 31 mod 32 || - || - || Peyrin
|-
| Maraca || internal collision || internal state || 512 || || 2<sup>237</sup> || 2<sup>230.5</sup> || Canteaut, Naya-Plasencia
|-
| MCSSHA-3 || collision || hash || all || || 2<sup>3n/8</sup> || ? || Aumasson, Naya-Plasencia
|-
| MCSSHA-3 || 2nd preimage || hash || all || || 2<sup>3n/4</sup> || ? || Aumasson, Naya-Plasencia
|-
| MD6 || non-randomness || reduced compression || || 18 rounds || ? || ? || Aumasson, Meier
|-
| MD6 || key recovery || reduced compression || || 15 rounds || ? || ? || Dinur, Shamir
|-
| MeshHash || 2nd preimage || hash || 256 || || 2<sup>192</sup> || - || Thomsen
|-
| MeshHash || 2nd preimage || hash || 512 || || 2<sup>320</sup> || - || Thomsen
|-
| NaSHA || free-start collision || compression || all || || 2<sup>32</sup> || ? || Nikolić, Khovratovich
|-
| NaSHA || free-start preimage || compression || 224,256 || || ~2<sup>128</sup> || ? || Nikolić, Khovratovich
|-
| NaSHA || free-start preimage || compression || 384,512 || || ~2<sup>256</sup> || ? || Nikolić, Khovratovich
|-
| NaSHA || free-start collision || compression || all || || - || - || Ji, Liangyu, Xu
|-
| NaSHA || collision || hash || 512 || || 2<sup>192</sup> || ? || Ji, Liangyu, Xu
|-
| NKS2D || collision || hash || 224 || || example || - || De Cannière
|-
| NKS2D || collision || hash || 512 || || example || - || Enright
|-
| Ponic || 2nd preimage || hash || 512 || || 2<sup>265</sup> || 2<sup>256</sup> || Naya-Plasencia
|-
| Sarmal || preimage (salt size s) || hash || 512 || || max(2<sup>512-s</sup>, 2<sup>256+s</sup>) || 2<sup>s</sup> || Nikolić
|-
| Sarmal || collision with salt || hash || 224,256,384 || || 2<sup>n/3</sup> || 2<sup>n/3</sup> || Mendel, Schläffer
|-
| Sgàil || collision || hash || || || example || - || Maxwell
|-
| SHAMATA || observation || block cipher || || || || || Fleischmann, Gorski
|-
| SHAMATA || observation || block cipher || || || || || Atalay, Kara, Karakoc
|-
| SpectralHash || near collision || hash || 224,512 || reference impl. || example || - || Enright
|-
| SpectralHash || truncated collision || hash || 512 || reference impl. || example || - || Enright
|-
| SpectralHash || collision || hash || || reference impl. || example || - || Bjørstad
|-
| StreamHash || collision || hash || all || || n/2*2<sup>n/4</sup> || ? || Khovratovich, Nikolić
|-
| StreamHash || preimage || hash || all || || n/2*2<sup>n/2</sup> || ? || Khovratovich, Nikolić
|-
| StreamHash || collision || hash || 256 || || example || - || Bjørstad
|-
| Tangle || observation || || || || || || Esmaeili
|-
| Tangle || collision || hash || all || || example, 2<sup>13</sup> - 2<sup>28</sup> || - || Thomsen
|-
| Twister || pseudo collision || compression || all || || 2<sup>26.5</sup> || 2<sup>28</sup> || Mendel, Rechberger, Schläffer
|-
| Twister || collision || hash || 512 || || 2<sup>252</sup> || - || Mendel, Rechberger, Schläffer
|-
| Twister || 2nd preimage || hash || 512 || || 2<sup>448</sup> || 2<sup>64</sup> || Mendel, Rechberger, Schläffer
|-
| Vortex || pseudo collision || compression || all || || 2<sup>n/4</sup> || - || Knudsen, Mendel, Rechberger, Thomsen
|-
| Vortex || preimage || hash || all || || 2<sup>3n/4</sup> || 2<sup>n/4</sup> || Knudsen, Mendel, Rechberger, Thomsen
|-
| Vortex || collision || hash || 256 || || 2<sup>122.5</sup> || 2<sup>122.5</sup> || Knudsen, Mendel, Rechberger, Thomsen
|-
| Vortex || observation || || all || || || || Aumasson, Dunkelman
|-
| Vortex || correlation analysis || hash || all || || - || - || Ferguson
|-
| WaMM || collision || hash || all || || example || - || Wilson
|-
| Waterfall || collision || hash || all || || 2<sup>70</sup> || - || Fluhrer
|}


caption for individual tables:

A dash (-) in the individual tables means that the complexities are negligible. A question mark (?) means the information is not given or is unclear.

The "Parameters/Variants" column gives the parameters for attacks on reduced variants. If the column is empty, the attack is on the recommended parameters of the designers.

The "Type of Analysis" column is left white if the attack is on reduced variants or on parts of the hash function.


This looks fine to me. The only editorial aspect I'm a bit unsure of is the inclusion of rejected submissions in the same table; they only reduce the S/N ratio, since they don't contribute anything to the ongoing SHA-3 process (and hence are not likely to receive any further attention at least until the competition is over). I suggest moving them to an appendix table.

Paulo.