========================================================================
Eurocrypt 2013 review:
> I suspect everyone knows this paper; as its history section indicates, it's
> been in circulation, in some form, since March 2012. I believe the paper is
> important and well written, and I feel strongly that it should be accepted.
>
> The time the paper had ripening seems to have done it a lot of good: when I
> read a version of this many months ago, it had a many fewer results and many
> more rough edges. It came across as a less constructive and scholarly than it
> does now.
>
> The underlying theme here is that people who have done concrete security
> analyses and expressed results using Adv(resource-list) sort of language have
> routinely failed, when choosing how to state their theorems and give numerical
> estimate, to account for the impressive power of non-uniformity and
> pre-computation. I believe the authors have made this point convincingly.
>
> The claim has been made at some level in several other works, and the paper
> does a good job of laying down the history. But I believe that the story has
> never been told as cogently as here, nor with a comparable breadth of
> worked-out examples, and in both symmetric and asymmetric settings.
>
> In terms of writing, the overall tone of this paper now seems OK. The phrase
> "circle the wagons" is an exception and should be eliminated. Other instances
> of "cuteness" in the writing -- like the title, the presence of a FAQ,
> questions Q.5 and Q16 on it, it's being called appendix Q (following
> appendices A, B, C) -- all seemed fine by me.
>
> If the authors seriously hope people will adopt the AT metric they will need
> to teach the reader something more about how to do an accounting in this way,
> and they would have to do much more to convince that this metric is actually
> "right" and better than whats done now. I find it highly unlikely this would
> catch on in our community, and, worse, I don't think that the authors have
> made the case that this is the right way to go. Furthermore, as the authors
> point out, when one gives explicit reductions, one doesn't need to deal with
> explicit resource measures at all. Nor do the examples the authors have
> worked through evidence that the power of free precomputation might not be
> large with respect to an AT accounting, too. Likely one can find examples
> more convincing than PRP attack for probabilities below 2^{-K/4}U^{-1/2} where
> precomputation buy you a lot regardless of how you measure the subsequent
> work.
>
> The paper acknowledges but does not emphasize that which limits its impact:
> that the vast majority of reductions in the literature are constructive. The
> authors imagine that hundreds or thousands of papers need to be revised. But
> this is an overstatement; all that really needs to happen is that people
> attend more closely to non-constructive arguments, identifying where those
> have occurred in the past and doing more in the future to be explicitly
> constructive.
>
> Some additional comments: (1) At the top of section 2, you are saying that the
> insecurity is the probability of some "success" event for the adversary. As
> you well know, it is, at least as often, the difference of two probabilities
> (or a rescaled version of the first). (2) p. 5, line last-4: I think you
> should speak in terms of the "advantage", not "insecurity", that an adversary
> succeeds. (3) p. 6: The funky constant "7" --- adding in "in little-endian
> binary form"?! (4) The graphs are unreadable in my black-and-white printout
> (although they look better in color). Additional labels on the axises might
> make these graphs look more meaningful. (5) p. 18: I believe the
> "constructive" is better known in our community than "effective" /
> "effectivity" for what you mean in C.4. And the term is used in mathematical
> circles, too. I'm not sure why you prefer to call this "effectivity", but you
> better mention right away, rather than leave this to a FAQ, that you are
> speaking of giving constructive proof and attending to the efficiency of the
> construction. (6) If there's a place where elaboration would help, it is in
> C.6, stating multiple theorems in the language that you advocate. Eg, you
> started off with the "CBC-MAC" theorem of BKR. It would be good to end by
> demonstrating how you would restate this theorem. (7) The "sustainability"
> argument in Q.7 seemed seriously off (as well as out of place): the fact that
> we only get 1361 w/m^2 of solar energy, or that oil drilling is not
> sustainable, didn't make for a remotely convincing argument against $VT$. (8)
> You didn't exactly "finish" the story of Q15, if a constructive version of
> Bolzano-Weierstrass was found.
Our comments:
> I suspect everyone knows this paper; as its history section indicates,
> it's been in circulation, in some form, since March 2012. I believe
> the paper is important and well written, and I feel strongly that it
> should be accepted.
>
> The time the paper had ripening seems to have done it a lot of good: when I
> read a version of this many months ago, it had a many fewer results and many
> more rough edges. It came across as a less constructive and scholarly than it
> does now.
>
> The underlying theme here is that people who have done concrete security
> analyses and expressed results using Adv(resource-list) sort of language have
> routinely failed, when choosing how to state their theorems and give numerical
> estimate, to account for the impressive power of non-uniformity and
> pre-computation. I believe the authors have made this point convincingly.
>
> The claim has been made at some level in several other works, and the paper
> does a good job of laying down the history. But I believe that the story has
> never been told as cogently as here, nor with a comparable breadth of
> worked-out examples, and in both symmetric and asymmetric settings.
Thanks.
> In terms of writing, the overall tone of this paper now seems OK. The phrase
> "circle the wagons" is an exception and should be eliminated. Other instances
> of "cuteness" in the writing -- like the title, the presence of a FAQ,
> questions Q.5 and Q16 on it, it's being called appendix Q (following
> appendices A, B, C) -- all seemed fine by me.
We're changing "Response 1: circle the wagons" to "Response 1: keep the
definitions and abandon the conjectures".
> If the authors seriously hope people will adopt the AT metric they
> will need to teach the reader something more about how to do an
> accounting in this way,
We recommend learning this, yes, but why should one ask _this_ paper
to include an introductory course on thirty-year-old algorithm-analysis
concepts used in thousands of papers? Even if repeating the material
here were desirable, we simply don't have the space. We're adding a FAQ
entry recommending that the reader study the classic Brent--Kung paper.
> and they would have to do much more to
> convince that this metric is actually "right" and better than whats
> done now. I find it highly unlikely this would catch on in our
> community, and, worse, I don't think that the authors have made the
> case that this is the right way to go.
There's a huge literature _already_ using this metric. The metric
eliminates all of our infeasible high-probability attacks, and also has
several other positive features reviewed in the paper.
> Furthermore, as the authors
> point out, when one gives explicit reductions, one doesn't need to deal with
> explicit resource measures at all.
No, that's _not_ what we say. We recommend _factoring_ theorems into two
parts, where one factor is independent of cost metric, but this does not
mean that we're throwing away the other factor. It's essential to
measure the cost of each reduction; otherwise one allows arbitrarily
expensive reductions, trivializing the entire subject of "provable
security" beyond the corner case of information-theoretic security. Of
course, for non-concrete "provable security" one simply evaluates
whether a reduction is polynomial-time, but this is still a cost
measurement; for concrete "provable security" more work is required.
> Nor do the examples the authors have
> worked through evidence that the power of free precomputation might not be
> large with respect to an AT accounting, too. Likely one can find examples
> more convincing than PRP attack for probabilities below 2^{-K/4}U^{-1/2} where
> precomputation buy you a lot regardless of how you measure the subsequent
> work.
We have one analysis after another showing that the AT metric is
correctly declaring all of our high-probability attacks to be
infeasible. We draw a careful conclusion saying that this provides
"reason to hope" that AT fixes all the RAM pathologies but that there is
"an exception in one corner case". We ultimately recommend switching to
the AT metric because it captures "real limitations on communication
cost that are ignored by the RAM metric and the NAND metric".
The reviewer questions whether the metric is "right" and "better than
whats done now", and speculates that "likely" there are more
"convincing" examples than the corner case that we identify. Maybe, but
how would this contradict anything we say?
There's overwhelming evidence of AT being more realistic than RAM (or
NAND), for exactly the reason we say. We _don't_ claim that AT fixes all
the problems; we specifically recommend adding a second reality check,
namely constructivity (quantified in a way we explain), and we
specifically say that this seems to fix some corner cases that AT
_doesn't_ fix. Expanding the corner cases would have zero effect on this
analysis.
> The paper acknowledges but does not emphasize that which limits its impact:
> that the vast majority of reductions in the literature are constructive. The
> authors imagine that hundreds or thousands of papers need to be revised. But
> this is an overstatement; all that really needs to happen is that people
> attend more closely to non-constructive arguments, identifying where those
> have occurred in the past and doing more in the future to be explicitly
> constructive.
The reviewer seems to agree with us that it's important to identify the
non-constructive arguments in existing papers. Our recommendation is to
do this by checking the proofs. Does the reviewer have a suggestion for
identifying non-constructive arguments _without_ doing all this work to
check the proofs?
When an existing proof is found to be constructive, our recommendation
is to record this fact as a theorem (or, better, two theorems as
explained in the "Recommendations" subsection), so that future work can
simply read and use the theorem rather than having to dig once again
through the proof. Does the reviewer have a suggestion for achieving the
same result with less effort?
The virtue of theorems is perhaps more obvious in pure mathematics,
where there is a long tradition of theorems being used as steps in
proofs of subsequent theorems, the same way that programmers can simply
call a function rather than having to read through all the code that
implements the function. Theorems, like functions, vary in how reusable
they are, and often a proof is revisited several times for adaptations
and generalizations of the original theorem, but there is also a huge
amount of successful reuse---for example, many people use Hasse's
theorem about the number of points on an elliptic curve, but far fewer
people know how to prove the theorem. This reuse---having ideas captured
as theorems, not just in proofs---is critical for the depth and power of
pure mathematics. Even in newer areas that have not yet reached such
depth, it should be obvious that being able to reuse a theorem is more
cost-effective than forcing everyone to go through the proof.
> Some additional comments: (1) At the top of section 2, you are saying that the
> insecurity is the probability of some "success" event for the adversary. As
> you well know, it is, at least as often, the difference of two probabilities
> (or a rescaled version of the first).
We're adding a parenthetical note "or advantage in probability".
> (2) p. 5, line last-4: I think you
> should speak in terms of the "advantage", not "insecurity", that an adversary
> succeeds.
We are putting this word in quotes, since it's exactly the original
terminology from Bellare et al. But their terminology actually makes
sense: an attack has an "advantage" and a cipher has "insecurity" (or,
one hopes, security!). Other papers similarly ask about the "security"
of a cipher, not about the "advantage" of a cipher. There are papers
that use the notation "Adv" for both concepts but this seems to cause
unnecessary confusion.
> (3) p. 6: The funky constant "7" --- adding in "in little-endian
> binary form"?!
We're now specifying the little-endian convention much earlier---it's
needed earlier anyway---and removing the comment at this point.
> (4) The graphs are unreadable in my black-and-white printout
> (although they look better in color). Additional labels on the axises might
> make these graphs look more meaningful.
There are now more labels. The caption already describes each curve
positionally, not just with color.
> (5) p. 18: I believe the
> "constructive" is better known in our community than "effective" /
> "effectivity" for what you mean in C.4. And the term is used in mathematical
> circles, too. I'm not sure why you prefer to call this "effectivity", but you
> better mention right away, rather than leave this to a FAQ, that you are
> speaking of giving constructive proof and attending to the efficiency of the
> construction.
Our impression is that "effective" has a longer history, but we're happy
with "constructive" and are swapping the terms.
> (6) If there's a place where elaboration would help, it is in
> C.6, stating multiple theorems in the language that you advocate. Eg, you
> started off with the "CBC-MAC" theorem of BKR. It would be good to end by
> demonstrating how you would restate this theorem.
Okay, we'll do that.
> (7) The "sustainability"
> argument in Q.7 seemed seriously off (as well as out of place): the fact that
> we only get 1361 w/m^2 of solar energy, or that oil drilling is not
> sustainable, didn't make for a remotely convincing argument against $VT$.
We're adding an explanation of why the power limit causes trouble for
VT, and also an explanation of thermodynamic problems.
> (8)
> You didn't exactly "finish" the story of Q15, if a constructive version of
> Bolzano-Weierstrass was found.
The Bolzano--Weierstrass conclusion is false in archetypical models of
constructive mathematics. But this story is about the proof, not the
theorem.
========================================================================
Eurocrypt 2013 review:
> This paper points out major flaws in the foundation of provable security. It
> needs to be published in some form and researchers in provable security need
> to address the problems to shore up the foundations. However, too much of
> this paper's content is pointlessly combative, particularly the FAQ at the
> end. Such a FAQ is unusual, which is not necessarily a bad thing, but what is
> the point of questions 5 and 16? Couldn't we take the logic of Q7 a step
> further and declare all cryptography secure because breaking it seems to take
> at least n steps, but the universe has a finite amount of energy and can only
> perform O(1) operations?
>
> I'll dispense with all constant and log factors as is done in the paper. In
> section 3.4 we have a memory holding 2^n pre-computed iterates for the keys 0,
> ..., 2^n-1 as described in section 3.3. Let's assume we wish to recover
> U=2^{n/2} keys in parallel using 2^{n/2} processors. In the AT metric it is
> possible to connect 2^{n/2} processors to a single large memory that permits
> full-speed parallel accesses using total hardware interconnect size of 2^n
> (this can be shown with a 2-dimensional variation of the arguments used in
> Wiener 2004 JoC). The total hardware area for all components including
> memory, processors, and interconnect is 2^n. The time to complete the attack
> is 2^n. This gives AT cost of 2^{2n} to recover 2^{n/2} keys. So, the AT
> cost per recovered key is 2^{(3/2)n}, not 2^{2n} as claimed in the paper. It
> could be that I actually calculated the same thing as the authors in a
> different way, because they seem to roll the per key savings from parallel
> attack into the higher probability of success rather than into a lower cost
> per key attacked. It's hard to tell.
>
> It may be that the authors simply don't understand the results in Wiener 2004
> JoC. This might partially explain their insistence that this paper offers
> little and isn't worth referencing.
Our comments:
> This paper points out major flaws in the foundation of provable
> security. It needs to be published in some form and researchers in
> provable security need to address the problems to shore up the
> foundations.
Sounds positive so far.
> However, too much of this paper's content is pointlessly combative,
> particularly the FAQ at the end.
This comment is devoid of specifics. What exactly is "combative" in the
FAQ, or elsewhere in the paper?
Yes, the paper points out errors in previous papers. Yes, the first
question of the FAQ says that amortization-eliminates-precomputation is
a fallacy addressed in Section B.1.1; this is a fallacy that we have
heard many times, and that shows up yet again in this set of reviews.
Other parts of the FAQ also aim at correcting common errors, and we are
adding further FAQ entries on the basis of this batch of reviews.
Some touchy people might think that pointing out errors is "combative".
But pointing out errors is an essential part of science; this can't
possibly be an argument for rejection. It seems particularly important
to point out errors if we keep hearing those errors as responses to our
paper.
> Such a FAQ is unusual, which is not necessarily a bad thing, but what is
> the point of questions 5 and 16?
Are these questions "combative"? What exactly is the problem? The point
of those two questions, obviously, is to aid reading---dull papers put
people to sleep, interfering with communication. Note that another
reviewer specifically said that these two FAQ entries were fine.
> Couldn't we take the logic of Q7 a step
> further and declare all cryptography secure because breaking it seems to take
> at least n steps, but the universe has a finite amount of energy and can only
> perform O(1) operations?
What does O(1) have to do with the issues addressed in this paper? Does
the reviewer not understand the difference between asymptotic analysis
and concrete analysis? We'll insert a new FAQ entry to dispose of this.
> I'll dispense with all constant and log factors as is done in the paper.
No, that's not what is done in the paper. Disregarding constant factors
is obviously incompatible with discussing the security of AES-128, NIST
P-256, DSA-3072, RSA-3072, and every other specific cryptosystem that's
actually deployed. The paper does say that it suppresses "lower-order
terms" such as "the cost of hashing per bit" since "our concern is with
much larger gaps"; but this is a quantitative comparison, not a mindless
suppression of all constants.
> In section 3.4 we have a memory holding 2^n pre-computed iterates for
> the keys 0, ..., 2^n-1 as described in section 3.3.
No, that's not what Section 3.4 says. Given subsequent errors (see
below) it seems that the reviewer doesn't understand the relationship
between the attack parameter n and the key size K, even in the case of a
single target. Here's what the paper actually says:
3.2. ... K is the number of bits in the cipher key ...
3.3. Iteration. ... Choose an attack parameter n. ... This heuristic
analysis begins to break down as 3n approaches the key size K. The
central problem ... Assume from now on that n is chosen to be close
to K/3. ...
3.4. Multiple targets. Iteration becomes more efficient when there
are multiple targets: U ciphertexts ... for U independent uniform
random keys ... Look up each iterate in a table of 2^n U precomputed
keys. ... To avoid excessive chain collisions one must limit 2^n to
2^(K/3) U^(-1/3) ...
To summarize, what's actually present for the optimized attack in
Section 3.4 is a table of size 2^n U; the chains that led to the table
cover approximately 2^(2n) U keys; all of these numbers are much smaller
than 2^K.
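The parameter relationships just summarized can be checked with toy
numbers; K = 96 and U = 2^12 below are our illustrative choices, not
values from the paper:

```python
K = 96                     # key size in bits: 2^K keys
u = 12                     # U = 2^u independent target keys
n = (K - u) // 3           # chain-collision limit: 2^n <= 2^(K/3) * U^(-1/3)

table_bits = n + u         # table of 2^n U precomputed keys
covered_bits = 2 * n + u   # the chains behind the table cover ~2^(2n) U keys

print(n, table_bits, covered_bits)    # here: 28, 40, 68, all far below K = 96
assert 3 * n <= K                     # n is close to (at most) K/3
assert covered_bits < K               # coverage is much smaller than 2^K
```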
One can of course also consider a table of size only 2^n, but it's still
important for 2^(2n) to be much smaller than 2^K. In the RAM metric this
restricted table size would hurt the overall performance of the attack;
that's why this is _not_ what Section 3.4 does. In the AT metric there's
no loss but there's also no benefit.
> Let's assume we wish to recover
> U=2^{n/2} keys in parallel using 2^{n/2} processors. In the AT metric it is
> possible to connect 2^{n/2} processors to a single large memory that permits
> full-speed parallel accesses using total hardware interconnect size of 2^n
> (this can be shown with a 2-dimensional variation of the arguments used in
> Wiener 2004 JoC).
It's not clear whether the reviewer understands that a chip of area 2^n
allows as many as 2^n parallel processing cores (disregarding low-order
terms). A computation with heavy communication, such as sorting 2^n
elements, takes time only 2^(n/2) on a chip of area 2^n. Even without
wire delays, this is optimal AT, as shown by the cited 1981 Brent--Kung
paper. Analogous 3-dimensional bounds appear in the cited 1983 Preparata
paper. Citing a 2004 paper for any of this is inappropriate.
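As a toy illustration of how sorting parallelizes on a two-dimensional
mesh: shearsort (a classical but non-optimal mesh sorter, our choice here
purely for brevity) sorts N values on a sqrt(N) x sqrt(N) grid using
about log2(sqrt(N)) + 1 phases of row and column sorts, each phase taking
roughly sqrt(N) parallel steps on the mesh; the optimal algorithms behind
the time-2^(n/2) claim above are more intricate.

```python
import random

random.seed(0)
r = c = 8                              # an 8x8 mesh holding N = 64 values
grid = [[random.randrange(1000) for _ in range(c)] for _ in range(r)]

def snake(g):
    # Read the mesh in boustrophedon ("snake") order.
    out = []
    for i, row in enumerate(g):
        out.extend(row if i % 2 == 0 else reversed(row))
    return out

target = sorted(snake(grid))
phases = 0
while snake(grid) != target:
    # One shearsort phase: sort each row (alternating directions so the
    # snake order is respected), then sort each column top-to-bottom.
    for i in range(r):
        grid[i].sort(reverse=(i % 2 == 1))
    for j in range(c):
        col = sorted(grid[i][j] for i in range(r))
        for i in range(r):
            grid[i][j] = col[i]
    phases += 1

# Shearsort needs about log2(r) + 1 phases; each phase is ~sqrt(N) parallel
# steps on the mesh, so total time ~sqrt(N) log N on area ~N.
print(phases)
```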
In Section 3.4, the chip area is actually 2^n U. Computing 2^n iterates
of f from a single starting point takes time 2^n on a single processor.
Doing the same from U starting points takes time 2^n on U processors.
It's trivial to spread the results among 2^n U processors while they're
being computed. Sorting those results, together with the 2^n stored
iterates, takes time sqrt(2^n U), which compared to 2^n is negligible
under reasonable limits on U. In this context one can of course describe
the communication as running at "full speed" since local computation is
the bottleneck.
This computation has total AT cost 2^(2n)U. In effect it compares each
target key to 2^(2n)U keys in the chains covered by the table (as the
paper says: "the precomputation covers 2^(2n)U keys"). Each target key
is thus found with probability 2^(2n)U/2^K, i.e., AT/2^K. The chance of
finding _at least one_ target key is 2^(2n)U^2/2^K, i.e., ATU/2^K.
Repeating this low-probability attack for m separate tables in parallel
increases the cost by a factor of m and also increases the success
probability by a factor of m, always with a ratio of U/2^K. This is also
the conclusion of Section 3.4 of the paper:
Finding at least one of the U keys with success probability p costs
essentially (2^K/U)p.
If the chip area and table size were limited to 2^n then computing the
2^n U iterates would still take time 2^n on U processors (assuming U <=
2^n so those processors fit). There isn't space to store all 2^n U
results, so the computation needs to be chopped into a series of U
sorting steps, taking time U sqrt(2^n)---an unnecessary complication but
not a bottleneck. The AT cost drops by a factor of U to 2^(2n), but the
chance of finding at least one target key also drops by a factor of U to
2^(2n)U/2^K, since a table of size 2^n represents only 2^(2n) keys. The
cost-to-probability ratio is still the same 2^K/U.
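The two variants' AT costs and success chances can be tabulated in a
short sketch (all in log2 terms where convenient; the concrete parameters
K = 96, U = 2^12, n = 28 are our illustrative choices, not the paper's):

```python
from fractions import Fraction

K, u = 96, 12                  # toy choices: 2^K keys, U = 2^u targets
n = (K - u) // 3               # n at the chain-collision limit, here 28
assert (n + u + 1) // 2 < n    # sorting time sqrt(2^n U) negligible vs 2^n

def variant(area_bits, covered_bits):
    # AT cost = area * time, with time 2^n; the chance of finding at least
    # one of the U targets is (keys covered) * U / 2^K.
    at_bits = area_bits + n
    p_one = Fraction(2) ** (covered_bits + u - K)
    return at_bits, p_one

# Full table of 2^n U precomputed keys vs. restricted table of 2^n keys.
full = variant(area_bits=n + u, covered_bits=2 * n + u)
restricted = variant(area_bits=n, covered_bits=2 * n)

for at_bits, p_one in (full, restricted):
    ratio = Fraction(2) ** at_bits / p_one   # cost per unit of success chance
    assert ratio == Fraction(2) ** (K - u)   # always 2^K / U
    print(at_bits, p_one)
```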
> The total hardware area for all components including
> memory, processors, and interconnect is 2^n.
With a table size of only 2^n, yes.
> The time to complete the attack is 2^n. This gives AT cost of 2^{2n}
> to recover 2^{n/2} keys.
No. The cost-2^(2n) computation imagined by the reviewer finds each
target key with probability only 2^(2n)/2^K; i.e., on average it finds
only U 2^(2n)/2^K keys. (The cost-2^(2n)U computation actually described
in the paper finds, on average, U^2 2^(2n)/2^K keys.)
How can the reviewer think that 2^(2n)/2^K is 1, i.e., that these 2^(2n)
precomputed keys cover the entire space of 2^K keys? Apparently the
reviewer thinks that 2^(2n) can be chosen as large as 2^K. But the paper
clearly says that n _cannot_ be so large, and explains why:
This heuristic analysis begins to break down as 3n approaches the key
size K. The central problem ... Assume from now on that n is chosen
to be close to K/3. ...
3.4. Multiple targets. ... To avoid excessive chain collisions one
must limit 2^n to 2^(K/3) U^(-1/3) ...
We don't understand how the reviewer could overlook these restrictions
on n.
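A quick sketch (ours) of why the choice 2n = K is ruled out under the
reviewer's own U = 2^{n/2}:

```python
# With U = 2^(n/2), the paper's limit 2^n <= 2^(K/3) * U^(-1/3) becomes
# n <= K/3 - n/6, i.e. n <= 2K/7, so 2n <= 4K/7, strictly below K.
# (The single-target requirement that 3n stay at most K also holds.)
K = 128                  # illustrative key size; any K gives the same picture
n_limit = 2 * K // 7     # largest allowed n when U = 2^(n/2)
assert 2 * n_limit < K   # the reviewer's 2n = K is unreachable
assert 3 * n_limit <= K
print(n_limit, 2 * n_limit, K)
```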
> So, the AT
> cost per recovered key is 2^{(3/2)n}, not 2^{2n} as claimed in the paper.
The paper is completely clear in stating (as the conclusion of Section
3.4, which has only four paragraphs) that there is
a benefit in success probability from handling U keys together:
finding at least one of the U keys with success probability p costs
essentially (2^K/U)p.
For example, if there's only one target, then finding it with success
probability 1 costs 2^K; if there are U targets, then finding at least
one of them with success probability ~1 costs only 2^K/U; one can of
course repeat U times, obtaining most of the keys at cost only 2^K---
i.e., cost 2^K/U per recovered key.
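The batching arithmetic in this exhaustive-search example is easy to
check by simulation at toy scale; the sketch below is ours, with
illustrative parameters K = 16 and U = 16:

```python
import random

random.seed(1)
K = 16           # toy key size: 2^K = 65536 keys
N = 2 ** K
U = 16           # number of independent uniform random target keys

def batched_search_cost():
    # Exhaustive search guesses keys 0, 1, 2, ... and stops at the first
    # guess that matches any of the U targets; the cost is the number of
    # guesses, i.e. one more than the smallest target.
    targets = [random.randrange(N) for _ in range(U)]
    return min(targets) + 1

trials = 2000
avg = sum(batched_search_cost() for _ in range(trials)) / trials
# Expected cost is (N + 1) / (U + 1), i.e. about 2^K / U: batching U
# targets cuts the cost of finding at least one key by a factor of ~U.
print(avg, (N + 1) / (U + 1))
```

Repeating U times then recovers most of the keys at total cost on the
scale of 2^K, i.e., 2^K/U per key, matching the conclusion above.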
The reviewer incorrectly believes that 2n=K (as discussed above), making
this cost 2^K/U per recovered key the same as what the reviewer thinks,
namely 2^(2n)/U per recovered key. What, then, does the reviewer claim
is wrong in the paper? Surely the answer is the following text, which
appears earlier in Section 3.4:
The 2^n U precomputed keys occupy area 2^n U, for a total cost on the
scale of 2^(2n) U, i.e., 2^(2n) per key.
With the reviewer's (bogus) choice 2n=K, this cost of 2^(2n) per key is
indeed not the same as the cost 2^K/U per key---but how can the reviewer
not notice that these "2^n U precomputed keys" are U times larger than
the reviewer's "2^n pre-computed iterates"? How does the reviewer manage
to miss the sentence
The precomputation covers 2^(2n) U keys instead of just 2^(2n) keys
in the previous paragraph, or the "table of 2^n U precomputed keys" in
the statement of the attack?
The text says nothing at all to support this reviewer's bogus 2n=K
notion. It makes one statement after another after another that's
clearly incompatible with 2n=K: hypotheses, warnings, conclusions.
Somehow the reviewer manages to miss this again and again and again.
_Finally_ the reviewer notices that 2n=K is inconsistent with one of
our conclusions---but even this doesn't prompt the reviewer to imagine
that 2n and K could possibly be different, and the reviewer _still_
doesn't bother to check our hypotheses. Instead the reviewer says that
our conclusions are wrong. Amazing.
> It
> could be that I actually calculated the same thing as the authors in a
> different way, because they seem to roll the per key savings from parallel
> attack into the higher probability of success rather than into a lower cost
> per key attacked. It's hard to tell.
The paper clearly says, as the conclusion of Section 3.4, that there is
a benefit in success probability from handling U keys together:
finding at least one of the U keys with success probability p costs
essentially (2^K/U) p.
Thanks to the linear relationship between cost and success probability,
one can view the U factor here as either a U-fold decrease in cost for
the same success probability or a U-fold increase in success probability
for the same cost. We're now switching this to the high-probability case
for all keys simultaneously---
finding all U keys costs essentially 2^K, i.e., 2^K/U per key
---which is slightly simpler than the general case; but all of these
conclusions follow trivially from our attack description. We're
including some extra arithmetic steps in the attack analysis to increase
the number of double-checking opportunities for readers.
> It may be that the authors simply don't understand the results in
> Wiener 2004 JoC. This might partially explain their insistence that
> this paper offers little and isn't worth referencing.
We are of course careful to discuss relevant history---for example, the
1980 Hellman paper and the 1981 Brent--Kung AT paper are quite relevant
here. This reviewer is under the delusion that our analysis is wrong,
and that we could fix this by applying algorithm-analysis tools from
some 2004 paper---but in fact our analysis is correct, and we don't need
any tools beyond the thirty-year-old papers.
It's still completely unclear where the reviewer acquired the silly
notion that 2n=K. What's amazing is that the reviewer manages to push
this mistake so far:
* managing to skip past two completely explicit hypotheses that each
clearly disallow 2n=K;
* managing to skip past the algorithm-analysis paragraph clearly
stating that n cannot be so large;
* managing to skip past further algorithm-analysis statements that
are clearly incompatible with 2n=K;
* finally noticing an incompatibility of 2n=K with one of our
conclusions---and claiming that _we_ made a mistake; and
* concluding that we ought to cite some 2004 paper.
The bigger picture is that our paper doesn't actually need this
generalization to multiple-target attacks---the main point of the paper
is already amply supported by the single-target attacks in this section
and in other sections. The generalization is useful mainly because it
illustrates the huge differences between batching and precomputation;
confusion between these two concepts is, in our experience, one of the
main reasons for careless readers to dispute our main point. This
particular reviewer doesn't seem to carry this confusion but does start
from an absurd misunderstanding of this generalization, and on this
basis manages to go on and on sounding quite negative about the paper.
How can a paper defend itself against such a sloppy reviewer? It's not
as if there's some text that's responsible for the bogus 2n=K notion
and that we could fix---it's completely unclear where the reviewer
obtained this notion in the first place. We also have one statement
after another that's clearly and indisputably incompatible with the 2n=K
notion, and the reviewer somehow didn't understand any of those
statements.
========================================================================
Eurocrypt 2013 review:
> In 1980, Hellman published his well-known Time-Memory trade-off which showed
> the existence of a time/memory 2^{2k/3} key-recovery attack on any
> blockcipher
> with k-bit keys. This is 2^{86} for AES, much better than exhaustive
> key-search
> or known attacks. However from a practical perspective this attack has been
> ignored because Hellman's proof is non-constructive. It shows the attack
> EXISTS but not how to find it.
>
> Fiat and Naor (1999) generalize this. De, Trevisan and Tulsiani (CRYPTO
> 2010) provide similar attacks on PRGs. The submission extends this to certain
> elliptic curves, DSA and RSA. (They do not cite the CRYPTO 2010 paper but
> they do refer to its 2009 ECCC version.)
>
> The fact that there exist attacks better than we can actually find and
> implement has been well known for a while, and, as the authors themselves
> say,
> of no particular practical relevance. So why renewed attention to this theme?
> The authors say that these attacks disprove certain conjectures made in
> theoretical papers about the security of, say, AES.
>
> That's true, and it is a good point. But what are its implications? Here the
> discussion of the paper is incomplete and unclear, and as a result the paper
> is
> misleading, leaving the impression that something is hugely wrong with the
> theory. This impression is false.
>
> The bottom line is the following. The assurance we get from the bulk of
> security proofs in the practice-oriented theory literature is not changed
> ONE BIT by this paper. It remains exactly what it was, bounds included.
>
> How can this be? Because I was careful above to talk of the assurance
> one gets from the proofs, not the theorem statements.
>
> Some (not all, or even most!) theorem statements in the literature use the
> notation, discussed in the paper, of advantage maximized over all adversaries
> of certain resources. But the maximization covers attacks one cannot
> effectively find. This means that if you attempt to get conclusions directly
> from the theorem statements, you will get quantitatively weaker ones than you
> would like. This (in my view) is the actual import of the paper.
>
> But almost all proofs in the practice-oriented literature provide
> uniform, blackbox reductions. Any attack on the scheme (eg. CBC-MAC)
> translates into an attack on the primitive (eg. AES) of the complexity
> stated in the theorem and proof. Attacks that one cannot find thus
> become irrelevant, and the conclusions one draws are in fact exactly
> the ones one expects and wants from the bounds.
>
> For example, the paper begins by talking of the belief that an
> attacker limited to 2^{100} operations and 2^{50} message blocks
> cannot break AES-CBC-MAC with probability more than 2^{-20}. Fact is
> that the assurance existing proofs provide about this claim is
> entirely unchanged by the contents and arguments of this paper.
>
> That is NOT the message you would get from the paper. The paper says
> ``but undermining these conjectures also means undermining all of the
> security arguments that have those conjectures as hypotheses.'' This
> is not true. I do not dispute that the conjectures made in some papers
> are false. But the conclusions that we want to draw, from a practical
> perspective, remain in most cases the same as prior to this paper.
>
> The exception is proofs underlain by non-uniform complexity. But in
> the practice-oriented literature this situation is rare. (The only
> example I know is Bellare's 2006 proof for HMAC.) Even in this case,
> the proof and approach are not invalidated. It is just that the
> numerical guarantees one can conclude are not at as good as one would
> like.
>
> Another fact that one does not learn from the paper is that use of
> the offending notation is not so widespread, and in fact later papers
> have tended to state theorems directly in terms of adversary advantage,
> saying that given A we can build B so that the advantage of A is bounded
> above in a certain way by the advantage of B. In this case (given that
> the derivation of B from A is constructive and blackbox), the issue
> raised by the paper would not appear to arise.
>
> I would state the point of the paper as follows. That if one wants to
> obtain practical bounds from the theorem statements, the maximum
> advantage notation should be avoided. That we should, as a community,
> consider if there is some replacement. (No convincing replacement
> emerges from the paper.) These issues of language are important, but
> they should not displace the bottom line, which is that the bulk of
> proofs are still giving us the assurance of their stated bounds. The
> paper does not make this clear and indeed leaves a reader with the
> opposite impression. The result is to spread mud on provable security
> in a broad and unjustified way.
>
> This paper does make a good point and one that has surprisingly escaped
> attention for years. (The point relies on no more than Hellman's 1980 results
> which were known at the time BKR appeared.) And I believe our field is
> open to, and receptive of, criticism. But criticism should be part of
> a fair, complete and balanced picture. The paper does not provide
> this.
>
> Some might suggest that it is the job of others to provide the
> complete and balanced perspective lacking in this paper. I disagree.
> It is the job of any scientist, in critiquing, to provide such a
> perspective themselves, at least to some, limited extent. Otherwise,
> the field is going to degenerate into a series of arguments.
Our comments:
> In 1980, Hellman published his well-known Time-Memory trade-off which
> showed the existence of a time/memory 2^{2k/3} key-recovery attack on
> any blockcipher with k-bit keys. This is 2^{86} for AES, much better
> than exhaustive key-search or known attacks. However from a practical
> perspective this attack has been ignored because Hellman's proof is
> non-constructive. It shows the attack EXISTS but not how to find it.
No, that's not even close to an accurate summary of the history. Let's
start with Hellman's abstract:
A probabilistic method is presented which cryptanalyzes any N key
cryptosystem in N^(2/3) operations with N^(2/3) words of memory
(average values) after a precomputation which requires N operations.
If the precomputation can be performed in a reasonable time period
(e.g., several years), the additional computation required to recover
each key compares very favorably with the N operations required by an
exhaustive search and the N words of memory required by table lookup.
When applied to the Data Encryption Standard (DES) used in block
mode, it indicates that solutions should cost between $1 and $100
each.
Hellman is completely clear about the precomputation cost (N), and the
fact that for DES this can be reasonably amortized over many targets.
Subsequent work improved the time and memory N^(2/3) to (N/U)^(2/3) for
U targets.
In a realistic cost model, Hellman's original attack is no better than a
simple brute-force key search, but the improved versions reduce cost by
a factor of U for U targets. The improved versions are widely cited and
taught, and are indisputably useful in practice (see, e.g., the public
A5/1 rainbow tables); saying that Hellman's attack "has been ignored" or
is "non-constructive" is absurd.
In the past twenty years one paper after another has conjectured that
there does not exist an attack better than 2^k. How could people be so
stupid as to make these conjectures, if those conjectures are
contradicted by an attack published in 1980? The answer is that the
contradiction isn't as obvious as some reviewers would like to pretend.
Yes, everyone understands this now that we've explained it (starting
almost a year ago), but this doesn't mean that it can be retroactively
inserted into the history.
> Fiat and Naor (1999) generalize this. De, Trevisan and Tulsiani
> (CRYPTO 2010) provide similar attacks on PRGs. The submission extends
> this to certain elliptic curves, DSA and RSA. (They do not cite the
> CRYPTO 2010 paper but they do refer to its 2009 ECCC version.)
We're adding a citation to the newer version. Note that "extends" is an
awfully brief way to describe a large part of the content of the paper;
the reviewer doesn't mention that we have several new algorithm
improvements and new cost analyses, and that our improvements are
critical for beating 2^128 for RSA-3072.
> The fact that there exist attacks better than we can actually find and
> implement has been well known for a while,
No, it hasn't, unless "a while" is defined to mean the time after our
announcement of these results last year.
There's one paper after another conjecturing that such attacks _don't_
exist. For example, we cite Bellare and Rogaway repeatedly conjecturing,
over the span of more than a decade, that such attacks don't exist. How
can the reviewer claim that the existence of such attacks "has been well
known for a while" when the literature is full of conjectures stating
exactly the opposite?
Clearly the reviewer understands _after_ reading our paper that these
conjectures are wrong (and that some of them were already contradicted
by Hellman's attacks), and clearly a large part of the community now
understands this, but how can the reviewer claim that the community
understood it _before_ our paper?
For collision attacks this has of course been known for much longer, but
the literature describes collision resistance again and again as a
special case. For example, Rogaway begins his "human ignorance" paper by
describing "The Foundations-of-Hashing Dilemma", with no indication that
the dilemma extends beyond hashing.
> and, as the authors themselves say,
> of no particular practical relevance. So why renewed attention to this theme?
> The authors say that these attacks disprove certain conjectures made in
> theoretical papers about the security of, say, AES.
> That's true, and it is a good point.
It's nice to see a reviewer acknowledging what the paper identifies as
its main contribution. Apparently the reviewer understands that people
were conjecturing that such attacks don't exist, and understands that
we're disproving these conjectures, and agrees that this is a "good
point". But wait a minute---wasn't the reviewer claiming a moment ago
that it "has been well known for a while" that such attacks exist?
Perhaps what the reviewer is trying to suggest is that the conjectures
of nonexistence of such attacks are simply isolated errors by a few
authors. ("Certain" conjectures, the reviewer says; "theoretical"
papers, the reviewer says; each adjective makes the target sound more
limited.) Do we have to add citations to a wider range of authors making
the same bogus conjectures?
> But what are its implications? Here the discussion of the paper is
> incomplete and unclear, and as a result the paper is misleading,
> leaving the impression that something is hugely wrong with the theory.
> This impression is false.
Something _is_ wrong with the theory, and it's a problem for hundreds of
papers. The conjectures are wrong. The definitions need to be fixed. The
theorems need to be fixed. What exactly does the reviewer claim is
"incomplete" or "unclear"?
> The bottom line is the following. The assurance we get from the bulk
> of security proofs in the practice-oriented theory literature is not
> changed ONE BIT by this paper. It remains exactly what it was, bounds
> included.
Sounds impressive ("bottom line"; "not changed ONE BIT"), but let's
look at how narrow this bold-sounding statement actually is.
Does the reviewer claim that the conjectures are okay? No. How does the
reviewer suggest fixing the conjectures?
Does the reviewer claim that the definitions are okay? No. How does the
reviewer suggest fixing the definitions?
Does the reviewer claim that the theorem statements are okay? No. How
does the reviewer suggest fixing the theorem statements?
Does the reviewer claim that _all_ the proofs are okay? No---only "the
bulk of security proofs". What fraction is "the bulk"? What makes the
reviewer so confident about this fraction, anyway? And how does the
reviewer suggest fixing the _other_ security proofs?
Why is the "bottom line" this narrow statement about some proofs? Why is
the "bottom line" not the fact that the standard concrete-security
definitions are flawed and need to be replaced?
What's particularly puzzling is that the reviewer thinks that this
positive attitude towards the existing proofs (or rather the "bulk" of
the proofs) contradicts anything that the paper says. The paper never
claims---and we don't believe---that there's any serious problem with
most of the ideas in the proofs. The problem is everything _around_ the
proofs: bad theorem statements and bad conjectures, built on a
foundation of bad definitions.
A huge amount of the paper is devoted to analyzing ways to fix the
definitions and state proper theorems---which would be a pointless
endeavor if the proofs were fundamentally bad. This secondary goal is
already described in Section 1.1, but apparently the description was too
brief for this reviewer, so we're expanding it into a new Section 1.2.
> How can this be? Because I was careful above to talk of the assurance
> one gets from the proofs, not the theorem statements.
Here the reviewer appears to _admit_ that the theorem statements---
hundreds, perhaps thousands, of theorems in the literature!---are bad;
they don't mean what they were generally believed and frequently claimed
to mean. How can the reviewer condone this error?
The reviewer is attacking a strawman, incorrectly claiming that we're
criticizing the _proofs_; meanwhile the reviewer is ignoring the real
problems that the paper points out with the definitions, conjectures,
and theorems.
> Some (not all, or even most!) theorem statements in the literature use
> the notation, discussed in the paper, of advantage maximized over all
> adversaries of certain resources.
The reviewer seems to be drawing a superficial notational distinction
between "X has t-insecurity at most eps" and "X is (t,eps)-secure". This
is the "minor change in notation" described in the second paragraph of
the introduction. (We're now expanding this into a full paragraph
slightly later in the introduction.)
What matters is that concrete-security theorems use the _concept_ (with
whichever notation) that no time-t algorithms break X with probability
more than eps. This _concept_ has been completely standard in concrete
security for twenty years; this paper is the first paper to analyze and
recommend alternative concepts.
We're adding a brief literature sample to demonstrate how widespread
this concept is. To summarize, all 5 of the provable-concrete-security
papers at Crypto 2012 used exactly this concept.
> But the maximization covers attacks one cannot effectively find. This
> means that if you attempt to get conclusions directly from the theorem
> statements, you will get quantitatively weaker ones than you would
> like. This (in my view) is the actual import of the paper.
The gaps considered in the paper are quantitative, yes---and getting
the quantitative details right has always been a major theme of the
concrete-security literature, including many well-known papers that
target much smaller gaps in much narrower areas. We're adding a FAQ
entry on this, including a quote from Bellare--Desai--Jokipii--Rogaway
on the damage caused by getting the quantitative details wrong.
> But almost all proofs in the practice-oriented literature provide
> uniform, blackbox reductions. Any attack on the scheme (eg. CBC-MAC)
> translates into an attack on the primitive (eg. AES) of the complexity
> stated in the theorem and proof. Attacks that one cannot find thus
> become irrelevant, and the conclusions one draws are in fact exactly
> the ones one expects and wants from the bounds.
Really? What exactly _are_ these conclusions? For example, what exactly
is the conclusion that the reviewer draws regarding the security of
AES-CBC-MAC?
Before our paper everyone was running around making conjectures along
the lines of "no algorithm breaks AES with probability above t/2^128",
pointing to theorems that had these conjectures as hypotheses, and
drawing conclusions along the lines of "no algorithm breaks AES-CBC-MAC
with probability above (t+q^2)/2^128".
That conjecture is, however, _wrong_, and the theorem is _useless_ for
drawing that conclusion, and, worst of all, the conclusion is _wrong_.
What, then, are the reviewer's replacement conclusions?
Our paper puts a great deal of effort into proposing a proper answer to
this question. Yes, we're proposing changes to definitions of "secure"
that have been around for two decades, but the reviewer is proposing
abandoning the entire idea of defining the word "secure". Why is _giving
up_ on the formalization supposed to be better than _fixing_ the
formalization?
Compared to our fix-it approach, the reviewer's give-up approach is less
useful (the reviewer isn't even able to give definitions of the
"conclusions [one] wants") and doesn't save any work (either approach
requires pretty much every theorem in the literature to be checked and
restated). What's the advantage of the reviewer's approach?
> For example, the paper begins by talking of the belief that an
> attacker limited to 2^{100} operations and 2^{50} message blocks
> cannot break AES-CBC-MAC with probability more than 2^{-20}. Fact is
> that the assurance existing proofs provide about this claim is
> entirely unchanged by the contents and arguments of this paper.
Is the reviewer able to formalize what this claim means?
Before our paper the standard answer was yes. This was always one of
the pillars of the literature on provable concrete security. But we've
now shown that the formalization in the literature is flawed.
We say nothing to criticize the CBC-MAC _proof_ (and are now adding an
explicit statement that it's fine). But proofs talk to us through a
critical infrastructure of definitions and theorems---and, in the case
of "provable security", critical conjectures regarding the hypotheses of
the theorems---and the current situation is that this infrastructure has
a rather basic flaw that needs to be fixed.
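The arithmetic in that chain of reasoning can be made concrete with a quick sketch (our own illustration; the simplified bound (t + q^2)/2^128 drops the small constant factors that appear in actual CBC-MAC theorem statements):

```python
# Hedged illustration of the standard chain of reasoning: IF the
# conjecture "no time-t algorithm breaks AES with probability above
# t/2^128" were true, the usual CBC-MAC theorem would give roughly the
# bound computed below.  Our point is that the IF fails, so this number
# is not actually supported by the conjecture-plus-theorem argument.
from fractions import Fraction

def claimed_cbc_mac_bound(t, q, k=128):
    """Simplified (t + q^2)/2^k success-probability bound, with the
    theorem's small constant factors omitted."""
    return Fraction(t + q * q, 2 ** k)

# The paper's opening example: 2^100 operations, 2^50 message blocks.
bound = claimed_cbc_mac_bound(2 ** 100, 2 ** 50)
# (2^100 + 2^100) / 2^128 = 2^-27, comfortably below the quoted 2^-20.
```

The arithmetic is trivial; the problem is that the 2^-27 conclusion rests on a hypothesis that is, as the paper shows, false.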
> That is NOT the message you would get from the paper.
The message of _this_ paper is that the definitions are flawed, leading
to theorems that don't mean what they're claimed to mean and conjectures
that simply aren't true. Fortunately, this flaw appears to be fixable---
precisely _because_ most of the proofs are reasonably robust. Why is
this reviewer getting so defensive about a side of the literature that
we're not criticizing?
> The paper says ``but undermining these conjectures also means
> undermining all of the security arguments that have those conjectures
> as hypotheses.'' This is not true.
Of course it's true. If your argument has a false hypothesis then it
fails and you need to look for a new argument. Saying "Okay, here's a
new argument with the same conclusion" (or, worse, "in most cases
there's an argument with the same conclusion") doesn't mean that the
original argument was correct; the original argument was _not_ correct.
The reviewer's position here is completely out of whack with basic
logic.
> I do not dispute that the conjectures made in some papers are false.
> But the conclusions that we want to draw, from a practical
> perspective, remain in most cases the same as prior to this paper.
What exactly _are_ the security conclusions that the reviewer has in
mind? See above regarding formalization.
> The exception is proofs underlain by non-uniform complexity. But in
> the practice-oriented literature this situation is rare. (The only
> example I know is Bellare's 2006 proof for HMAC.)
Koblitz and Menezes have pointed out other examples. They also say that
Pointcheval used the HMAC result without (according to Pointcheval)
_realizing_ that it had a coin-fixing step. None of this means that the
problem is common, but it does illustrate the importance of actually
checking the proofs, rather than blithely assuming that things are fine.
> Even in this case, the proof and approach are not invalidated.
The reviewer is again attacking a strawman. _This_ paper is about flaws
in the _definitions_ and _conjectures_ and _theorems_, not about flaws
in the proofs.
> It is just that the numerical guarantees one can conclude are not at
> as good as one would like.
It's even worse than that: the numerical guarantees are then based upon
conjectures about a problem that _cryptanalysts haven't studied_. As our
paper says, "any conjecture in this model would be built upon quicksand".
Furthermore, even if the attacks in our paper are optimal, the resulting
quantitative change is huge---much worse than the tightness problems
studied in many concrete-security papers. Does the reviewer object to
those tightness papers because they're merely quantitative?
> Another fact that one does not learn from the paper is that use of
> the offending notation is not so widespread, and in fact later papers
> have tended to state theorems directly in terms of adversary advantage,
> saying that given A we can build B so that the advantage of A is bounded
> above in a certain way by the advantage of B. In this case (given that
> the derivation of B from A is constructive and blackbox), the issue
> raised by the paper would not appear to arise.
This "fact" is pure fantasy from the reviewer. As mentioned above, we're
adding a literature sample to demonstrate that using _definitions_ of
security---rather than abandoning the idea of making a definition, as
the reviewer advocates---is standard practice. All of these definitions
refer to the set of algorithms taking time <=t, triggering exactly the
problems analyzed in our paper.
Note that we do advocate stating modular theorems, but we _do not_
advocate abandoning formalization of security definitions.
> I would state the point of the paper as follows. That if one wants to
> obtain practical bounds from the theorem statements, the maximum
> advantage notation should be avoided.
Again the reviewer talks about "notation"---but what the reviewer has
actually been advocating is something much more radical, namely a
full-fledged abandonment of the long tradition of giving formal
definitions of concepts such as "AES has 128-bit security".
When the 1994 Bellare--Kilian--Rogaway definition of security turned out
to be flawed, did this reviewer advocate _changing_ the definition to
avoid "pathologies" (as 2000 Bellare--Kilian--Rogaway claimed to do), or
did this reviewer advocate abandoning the idea of giving definitions?
> That we should, as a community,
> consider if there is some replacement. (No convincing replacement
> emerges from the paper.)
The paper spends many pages analyzing various definitional changes and
showing that two of the changes _do_ stop all of the unrealistic
high-probability attacks discussed in the paper.
Exactly what part of this does the reviewer claim to not find
"convincing"? There's a remarkable lack of specifics in this part of the
review. Did the reviewer actually read this part of the paper?
> These issues of language are important,
There are huge differences between the set of cost-C algorithms in the
RAM metric and the set of cost-C algorithms in the AT metric. Describing
this as an "issue of language" is ridiculous. Similarly, effectivity (as
we define it) is a _real_ difference, not just a matter of notation.
> but they should not displace the bottom line, which is that the bulk
> of proofs are still giving us the assurance of their stated bounds.
See above regarding "bottom line".
> The paper does not make this clear and indeed leaves a reader with the
> opposite impression. The result is to spread mud on provable security
> in a broad and unjustified way.
Again the reviewer is attacking a strawman---claiming that this paper is
criticizing the proofs, when in fact the paper does no such thing.
Meanwhile the reviewer says essentially nothing to contradict what the
paper actually _does_ say.
> This paper does make a good point and one that has surprisingly
> escaped attention for years. (The point relies on no more than
> Hellman's 1980 results which were known at the time BKR appeared.) And
> I believe our field is open to, and receptive of, criticism. But
> criticism should be part of a fair, complete and balanced picture. The
> paper does not provide this.
There's nothing in the review indicating anything in the paper that's
incomplete, unclear, etc. The reviewer is simply going on and on
attacking a strawman.
> Some might suggest that it is the job of others to provide the
> complete and balanced perspective lacking in this paper. I disagree.
> It is the job of any scientist, in critiquing, to provide such a
> perspective themselves, at least to some, limited extent. Otherwise,
> the field is going to degenerate into a series of arguments.
What does this paragraph have to do with this paper?
This paper (1) points out a common error in the literature and (2)
analyzes strategies for fixing the error. This reviewer shows zero signs
of having read the second part but certainly understands, and credits
the paper for, the first part. This is a clear step forward---the error
wasn't understood before; now it's understood---and if there are
followup papers with further steps forward then of course those should
also be published.
This review is full of negative comments and loaded language---the
reviewer wants people to believe that the paper is incomplete and unfair
and unbalanced and prompting degeneration into arguments and so on---but
how, precisely, is this connected to the actual _contents of the paper_?
A huge amount of the paper is devoted to fixing the broken
infrastructure that surrounds the existing proofs---which would be silly
if we thought that the proofs were a lost cause---but apparently the
reviewer didn't read any of this. Having missed this rather basic point,
the reviewer complains again and again that the paper isn't going out of
its way to praise the proofs; for some reason the reviewer is quite
touchy about the proofs, and the reviewer can't seem to tell the
difference between pointing out flaws in _definitions_ and pointing out
flaws in _proofs_.
Hopefully the new Section 1.2 will help this reviewer understand the
positive role that the existing proofs play in this paper.
========================================================================
Eurocrypt 2013 review:
> In a recent paper revisiting HMAC security proofs, Koblitz and Menezes
> questioned the use of non-uniformity in security proofs.
> This submission shows more generally that various security estimates
> are likely to be significantly affected if one takes into account
> non-uniformity.
>
> I feel it is valuable to point out issues raised by non-uniformity
> in security proofs, which were arguably downplayed before
> (and perhaps even hidden), but the way it is done here
> may add to the confusion, rather than clarify it.
> For instance, even though the title of the submission includes "Non-uniform"
> and "precomputation", these two words are not even mentioned
> in the introduction, and no clear definition of "uniform" vs. "non-uniform"
> is done either. I think there is a clearer
> way to present the problems and discuss ramifications:
> in fact, the paper [*] seems to do a better job at that.
> Then the main contributions of this submission seems to boil down
> to presenting explicit non-uniform attacks on "standard" security estimates
> used by security proofs. It is good that these appear somewhere,
> and that people are aware of their existence,
> but from a cryptanalyst point of view, they are not very interesting:
> in some sense, these non-uniform attacks are not that different
> from the existence of trivial collision attacks on any fixed hash function,
> where the collision is encoded.
>
> It is a bit surprising that the Koblitz-Menezes paper [*] on non-uniformity
> is not cited here, and it would be useful to have a comparison of your
> contribution with that paper.
>
> Some of your views are rather questionable.
> First, it is acknowledged in [*] that most concrete security
> reductions are actually uniform, though some (many?) have been stated
> in a non-uniform way (e.g. the published theorem says something like
> "there exists" but the proof actually explicitly gives an algorithm).
> So the impact seems rather limited, but it can be hoped that it will
> convince people to state results uniformly.
> Second, if a theorem says "If A then B" and you show that assumption A
> is wrong, then that does not mean the theorem is wrong. And here,
> it even suffices to change the parameters to make A plausible.
>
> Finally, it seems that some (most?) of your non-uniform attacks are
> heuristic, in one way or another: it would be better to clarify which
> ones are heuristic or not, and to give a clear unambiguous formulation
> of the result.
>
> [*] Another look at non-uniformity
> http://eprint.iacr.org/2012/359
Our comments:
> In a recent paper revisiting HMAC security proofs, Koblitz and Menezes
> questioned the use of non-uniformity in security proofs.
> This submission shows more generally that various security estimates
> are likely to be significantly affected if one takes into account
> non-uniformity.
The main point stated in the paper isn't just that non-uniformity is
"likely" to affect "various security estimates", but that various
conjectures in the literature are flat-out _wrong_:
1.1. Contributions of this paper. Our primary goal in this paper is
to convincingly undermine all of the standard security conjectures
discussed above. ... The conjectures by Bellare, Kilian, and Rogaway
... are false for every reasonable assignment of the unspecified
constants.
Why does the reviewer not acknowledge this? Did the reviewer actually
read our paper?
> I feel it is valuable to point out issues raised by non-uniformity
> in security proofs, which were arguably downplayed before
> (and perhaps even hidden),
"Issues raised by non-uniformity in security proofs"? Is the reviewer
actually talking about _this_ paper, or about Koblitz--Menezes?
This paper is targeting _conjectures_ and _theorem statements_ built on
a foundation of subtly but indisputably flawed _definitions_. This is a
much bigger issue than what's in the _proofs_.
> but the way it is done here
> may add to the confusion, rather than clarify it.
Really? What exactly is the possible confusion? Is this reviewer
actually talking about _this_ paper?
> For instance, even though the title of the submission includes
> "Non-uniform" and "precomputation", these two words are not even
> mentioned in the introduction, and no clear definition of "uniform"
> vs. "non-uniform" is done either.
Wow. The reviewer doesn't know what "uniform" and "precomputation" mean?
Clearly such an amazing level of ignorance is incompatible with instant
comprehension of our title, but is there actually something _in_ the
paper that the reviewer found confusing?
> I think there is a clearer
> way to present the problems and discuss ramifications:
> in fact, the paper [*] seems to do a better job at that.
[ footnote: ]
> [*] Another look at non-uniformity
> http://eprint.iacr.org/2012/359
The paper [*] is a followup to this paper, so how can it be an argument
against publication of this paper? More to the point, what exactly is
the part of our paper that the reviewer didn't find clear?
If there's actually some lack of clarity in our paper and we can fix it
by taking ideas from the followup work, then of course we'll do that,
giving appropriate credit. But this reviewer's commentary is devoid of
specifics. Did the reviewer actually read our paper?
> Then the main contributions of this submission seems to boil down
> to presenting explicit non-uniform attacks on "standard" security
> estimates used by security proofs. It is good that these appear somewhere,
> and that people are aware of their existence,
> but from a cryptanalyst point of view, they are not very interesting:
The paper doesn't _say_ that these attacks are interesting for
cryptanalysts; it says exactly the opposite! Section 1.1:
We do not claim that this reflects any actual security problem with
AES, NIST P-256, DSA-3072, and RSA-3072, or with the higher-level
protocols built on top of these primitives. On the contrary! It is
clear that these attacks _exist_, but we conjecture that any fast
_construction_ of these attacks has negligible probability of
success. Users have nothing to worry about.
Did the reviewer actually read the paper?
> in some sense, these non-uniform attacks are not that different
> from the existence of trivial collision attacks on any fixed hash function,
> where the collision is encoded.
This is _exactly_ the issue covered by one FAQ entry: "Isn't it already
well known that there's a gap between attack algorithms that exist and
attack algorithms that can be constructed?"
The answer is that this _is_ well known for collision-resistance, but
collision-resistance was---until our paper!---viewed as an exceptional
case. The literature isn't full of flawed definitions and false
conjectures regarding collision-resistance; the literature _is_ full of
flawed definitions and false conjectures regarding many other security
notions.
> It is a bit surprising that the Koblitz-Menezes paper [*] on
> non-uniformity is not cited here, and it would be useful to have a
> comparison of your contribution with that paper.
That's a _followup paper_. Section 1.2 of our paper clearly states
priority dates of March and April 2012; the first version of the
followup paper was 24 June 2012; the followup paper cites ours.
We're adding a citation to the followup paper for the benefit of readers
who might not have seen it, but an overlap between the contributions
_cannot_ be a reason for rejecting our paper.
> Some of your views are rather questionable.
> First, it is acknowledged in [*] that most concrete security reductions
> are actually uniform, though some (many?) have been stated in a non-uniform
> way
> (e.g. the published theorem says something like "there exists"
> but the proof actually explicitly gives an algorithm).
> So the impact seems rather limited,
> but it can be hoped that it will convince people to state results uniformly.
Precisely which "views" in the paper does the reviewer think are being
contradicted here? Did the reviewer actually read our paper?
As for impact, there are many hundreds of papers giving concrete
security definitions that were _believed_ to be proper formalizations
but that all turn out to have the same basic flaw. This erroneous belief
is exactly what has led to one false conjecture after another. Does the
reviewer deny any of this? If not, how does the reviewer propose fixing
it?
> Second, if a theorem says "If A then B" and you show that assumption A
> is wrong, then that does not mean the theorem is wrong.
It means that the theorem is vacuous. Is this reviewer seriously trying
to defend theorems whose hypotheses are known to be false---trying to
claim that those theorem statements don't need to be fixed?
> And here,
> it even suffices to change the parameters to make A plausible.
The second FAQ entry says "Aren't we simply making the user safer if we
underestimate attack cost by ignoring precomputation?" and points to a
section disposing of this fallacy.
> Finally, it seems that some (most?) of your non-uniform attacks are
> heuristic, in one way or another: it would be better to clarify which
> ones are heuristic or not, and to give a clear unambiguous formulation
> of the result.
The paper clearly states that the analyses are heuristic:
1.1. Contributions of this paper. Our primary goal in this paper is
to convincingly undermine all of the standard security conjectures
discussed above. Specifically, Sections 3, 4, 5, and 6 show---
assuming standard, amply tested heuristics---that there _exist_
high-probability attacks against AES, the NIST P-256 elliptic curve,
DSA-3072, and RSA-3072 taking considerably less than 2^128 time. ...
The discussion of previous AES work highlights one attack in the
literature as being "proven", but this is unusual; essentially every
state-of-the-art cryptanalytic attack is heuristic, and most of the
attacks discussed in this paper are also heuristic.
To summarize, we clearly and prominently say that we are "assuming
standard, amply tested heuristics"---and then this reviewer says "it
seems that" our attacks are "heuristic". Did this reviewer actually read
the paper?
Perhaps the reviewer is trying to suggest that some of the attacks don't
actually work. We're adding Appendix V with computer verification of the
heuristic used in the first two AES attacks; there has been ample
computer verification of variants of Hellman's attack; and we're
pointing to a followup paper with computer verification of our ECC
attack.
========================================================================
Eurocrypt 2013 review:
> This paper illustrates some of the difficulties in formalizing security
> metrics in a "concrete" rather than asymptotic setting. Specifically, it notes
> that in many cases precomputation can be used to significantly lower the
> estimated cost of an attack. Yet most existing formal definitions regarding
> attack complexity in the non-asymptotic setting do not, in general, take
> precomputation into account. The authors propose to use the AT metric instead
> of circuit or RAM complexity.
>
> I find the paper thought provoking. Nevertheless, I am against acceptance for
> several reasons.
> 1) The power of precomputation has been known since at least the time of
> Hellman's "time-memory" paper. None of the new examples given here are very
> surprising on their own, though perhaps they would be worthy of publication at
> a second tier conference.
>
> Moreover, other difficulties of formalizing concrete security have been known
> at least since the observation that there is a 2(n+1)-time algorithm for
> printing collisions in any n-bit unkeyed hash function. (Just hardwire a
> collision.) This aspect of the problem was already dealt with in Rogaway's
> VietCrypt paper, and is not solved by moving to the AT metric.
>
> 2) Perhaps more problematic for this paper, I am left unconvinced that the AT
> metric solves the problems considered here in general. (Actually, I think even
> the authors would agree that it does not -- for example, the AT metric changes
> nothing with regard to the trivial hash-collision algorithm mentioned above.)
> But beyond that point, I am left confused by the authors' comment at the end
> of Section 3 (where they discuss the effect of rainbow tables) that "switching
> to the AT metric removes essentially all the benefit of precomputation." This
> is a strange argument in favor of the AT metric! Don't the authors think that
> precomputation is useful in practice for things like rainbow tables (granted,
> K is much smaller there, but the point remains unchanged)? If so, then the AT
> metric seems to be missing something also.
>
> 3) The technical novelty seems low, and the arguments in favor of the AT
> metric are not fully satisfying. Ultimately, then, we are left with a very
> interesting position paper. As I said at the beginning, I do find the work
> thought-provoking, I just don't think it's the kind of work that should be
> given a slot at Eurocrypt.
Our comments:
> This paper illustrates some of the difficulties in formalizing
> security metrics in a "concrete" rather than asymptotic setting.
> Specifically, it notes that in many cases precomputation can be used to
> significantly lower the estimated cost of an attack. Yet most existing
> formal definitions regarding attack complexity in the non-asymptotic
> setting do not, in general, take precomputation into account. The
> authors propose to use the AT metric instead of circuit or RAM
> complexity.
Actually, what we propose is "switching to the AT metric, adding
effectivity, and refactoring theorems to simplify further changes."
This is stated explicitly in Section 1.1 of the paper.
> I find the paper thought provoking. Nevertheless, I am against
> acceptance for several reasons.
> 1) The power of precomputation has been known since at least the time
> of Hellman's "time-memory" paper. None of the new examples given here
> are very surprising on their own, though perhaps they would be worthy
> of publication at a second tier conference.
If the power of precomputation is already understood then why do people
keep making conjectures that are false precisely because of the power of
precomputation? In the secret-key case, why have people---Bellare and
Rogaway, for example---continued issuing these bogus conjectures decades
after Hellman's paper?
The answer, of course, is that the power of precomputation _isn't_ as
easy to understand as this reviewer would like to pretend. We are the
first to point out the errors in these conjectures, except that the
low-probability case of the AES error was pointed out independently and
slightly earlier by Koblitz and Menezes.
We are also the first to point out plausible strategies for fixing the
underlying definitions. If the reviewer thinks it's so easy to formalize
the statement "NIST P-256 is secure" in a way that takes account of
precomputation, then what exactly is this formalization, and why can't
we find such security definitions anywhere in the literature?
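As a toy illustration of our own (a 16-bit key space, nothing like the
paper's actual attacks): once a precomputed codebook is treated as
free, the online phase of exhaustive key search collapses to a single
lookup, which is why definitions that ignore precomputation misjudge
attack cost.

```python
import hashlib

KEY_BITS = 16            # toy key space; real ciphers have 128+ bits
P0 = b"fixed plaintext"

def toy_encrypt(key: int, msg: bytes) -> bytes:
    """Stand-in keyed function (not a real cipher): keyed truncated SHA-256."""
    return hashlib.sha256(key.to_bytes(4, "big") + msg).digest()[:8]

# Precomputation: one pass over the entire key space, done once.
# Definitions that ignore precomputation count none of this work.
TABLE = {toy_encrypt(k, P0): k for k in range(2 ** KEY_BITS)}

def online_attack(ciphertext: bytes) -> int:
    """Online phase: a single dictionary lookup instead of a 2^16 search."""
    return TABLE[ciphertext]

secret_key = 0xBEE
assert online_attack(toy_encrypt(secret_key, P0)) == secret_key
```

Hellman's time-memory tradeoff interpolates between this extreme and
plain search, which is why any serious formalization has to decide how
to charge for the precomputed table.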
> Moreover, other difficulties of formalizing concrete security have
> been known at least since the observation that there is a 2(n+1)-time
> algorithm for printing collisions in any n-bit unkeyed hash function.
> (Just hardwire a collision.) This aspect of the problem was already
> dealt with in Rogaway's VietCrypt paper, and is not solved by moving
> to the AT metric.
Section B.6 of the paper already discusses everything the reviewer is
saying here, but also points out a problem with Rogaway's approach:
In 2001, Stinson proposed studying explicit hash-function reductions
as a workaround for the separation between non-uniform collision
resistance (which is negligible) and uniform collision resistance
(which often appears to be very high); but Stinson's
collision-resistance theorems (e.g., [..., Theorem 3.1]) do not
actually make these reductions theorems and are trivialities as
stated. Rogaway in [...] analyzed the same proposal much more
carefully, and gave examples of nontrivial theorems about black-box
and non-black-box reductions; but these theorems were still
monolithic, handling probability together with a particular cost
metric and encapsulating the reductions inside the proofs, so
changing the cost metric is unnecessarily difficult.
Section B.6 recommends a different, more modular, style of stating
reduction theorems. Everyone agrees with Stinson's proposal to study
explicit reductions, but there are important differences between theorem
styles: the style recommended in Section B.6 is better than Rogaway's
style because it allows much easier changes of the cost metric, and
Rogaway's style is much better than Stinson's content-free style.
Of course, this short line of work does nothing to solve the problem of
finding a realistic formalization of the statement "SHA-512 is
collision-resistant"---a problem that remains unsolved today. Similarly,
this line of work does nothing to solve the problem of finding a
realistic formalization of statements such as "NIST P-256 is secure"---
a problem that was incorrectly believed to be solved many years ago and
that is shown in this paper to be much more subtle.
We're adding a FAQ entry on exactly this point. Obviously this reviewer
has read only parts of the paper, and a FAQ entry seems to be our best
hope of pointing the reviewer to the parts the reviewer is interested
in.
> 2) Perhaps more problematic for this paper, I am left unconvinced that
> the AT metric solves the problems considered here in general.
> (Actually, I think even the authors would agree that it does not --
> for example, the AT metric changes nothing with regard to the trivial
> hash-collision algorithm mentioned above.)
At no point does the paper claim that AT fixes _everything_. In fact,
Appendix B says exactly the opposite:
* Section B.3 says that our cost analyses "provide reason to hope"
that the AT metric fixes "essentially all of the pathologies"---
but it then says that there is "an exception in one corner case".
* Section B.4 says that adding effectivity appears to "eliminate some
corner pathologies that were not eliminated by merely switching to the
AT metric".
The title of Appendix B is "Trying to salvage the insecurity metric";
note the word "Trying". The paper also discusses hash collisions, and in
particular already says that non-uniform collision resistance is
negligible.
In short, all of the reviewer's technical commentary about the AT metric
is already thoroughly discussed in the paper. Obviously this is another
case of reading only part of the paper, and again we're adding a FAQ
entry in the hope of pointing the reviewer to the right part.
What's completely unclear is why the reviewer thinks any of this is
"problematic for this paper". The paper already clearly states in
Section 1.1 that we recommend switching to the AT metric _and_ adding
effectivity. Appendix B ends by saying that our analyses "suggest" that
these recommendations "provide two levels of defense against the
unrealistic attacks considered in this paper". Perhaps more changes will
turn out to be warranted; Appendix B.6 explains how to structure
theorems to simplify those changes.
When a paper points out a huge problem in the literature, and recommends
some solutions while repeatedly acknowledging the potential limitations
of the solutions, how can someone object to publication by saying that
there are potential limitations of the solutions? This makes no sense.
Our recommendations are obviously a huge improvement in realism over the
standard definitions; the way to set the stage for further improvements
(if there are any) is to _publish_ this, not to suppress it.
> But beyond that point, I am left confused by the authors' comment at
> the end of Section 3 (where they discuss the effect of rainbow tables)
Well, no; we discussed Hellman tables. We could have drawn the same
basic conclusions from rainbow tables, but Hellman tables are simpler.
> that "switching to the AT metric removes essentially all the benefit
> of precomputation." This is a strange argument in favor of the AT
> metric! Don't the authors think that precomputation is useful in
> practice for things like rainbow tables (granted, K is much smaller
> there, but the point remains unchanged)? If so, then the AT metric
> seems to be missing something also.
The reviewer here is confusing non-uniform attacks with multiple-target
attacks. This is _precisely_ the fallacy analyzed in Section B.1.1.
What we analyze in Section 3.4 is one of many sensible ways to save time
by attacking multiple targets---an important real-world speedup. But the
cost reduction does _not_ come from precomputation; it comes from
batching! The change from Section 3.3 to Section 3.4, adding multiple
targets, is quite different from the effect of precomputation within
each section.
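The distinction can be sketched with a toy example of our own (16-bit
keys, purely illustrative): one sweep of the key space recovers every
target in a batch, so the per-target cost drops by the batch size even
though nothing whatsoever is precomputed.

```python
import hashlib

KEY_BITS = 16            # toy key space; purely illustrative
P0 = b"fixed plaintext"

def toy_encrypt(key: int, msg: bytes) -> bytes:
    """Stand-in keyed function (not a real cipher): keyed truncated SHA-256."""
    return hashlib.sha256(key.to_bytes(4, "big") + msg).digest()[:8]

def batch_attack(ciphertexts):
    """One sweep of the key space recovers *all* targets at once.
    Total cost is ~2^16 encryptions whether there is 1 target or 100:
    the per-target cost drops by the batch size, with no precomputation."""
    targets = {c: i for i, c in enumerate(ciphertexts)}
    found = {}
    for k in range(2 ** KEY_BITS):
        c = toy_encrypt(k, P0)
        if c in targets:
            found[targets[c]] = k
            if len(found) == len(targets):
                break
    return found

secrets = [11, 2222, 33333]
cts = [toy_encrypt(s, P0) for s in secrets]
assert batch_attack(cts) == {0: 11, 1: 2222, 2: 33333}
```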
This particular fallacy is already highlighted as question 1 of our FAQ.
It's hard to figure out what else we can do to fix this reviewer's confusion;
if we move B.1.1 earlier somehow then we have to move something else
later, and we'll simply end up with sloppy readers missing _that_ part.
> 3) The technical novelty seems low,
Is "technical novelty" something different from novelty? Is the reviewer
trying to say that something identified as new here isn't actually new?
> and the arguments in favor of the
> AT metric are not fully satisfying.
"Arguments"? This is devoid of specifics. AT is one of several ideas
that the paper analyzes for fixing the security definitions, and it
turns out to correctly declare all of the paper's high-probability
attacks to be infeasible; this is the result of analyses spread
throughout the paper. It's totally unclear which, if any, of these
analyses the reviewer claims to be disputing.
> Ultimately, then, we are left with a very interesting position paper.
> As I said at the beginning, I do find the work thought-provoking, I
> just don't think it's the kind of work that should be given a slot at
> Eurocrypt.
Here we go again---a paper that points out indisputable errors in the
literature, and that analyzes several rescue strategies, is dismissed as
a "position paper". Aren't scientists supposed to prioritize accuracy,
and admit errors when they occur?
It's one thing for a reviewer to say that the paper isn't interesting
enough for Eurocrypt---but that's not at all what this reviewer is
saying. The reviewer is classifying this paper as a "position paper" and
saying that "position papers", no matter how interesting, don't belong
at Eurocrypt.
Wikipedia says that "Position papers in academia enable discussion on
emerging topics without the experimentation and original research
normally present in an academic paper." Is the reviewer trying to claim
that the contents of our paper aren't original research?
========================================================================
Eurocrypt 2013 review:
> Here is another discussion paper on the relevance of security definitions and
> cost metrics. It follows the well-known series "Another look" by Koblitz and
> Menezes with an attempt to shake some fundaments of modern cryptography.
> Unfortunately, the paper, though being written by other authors, inherits not
> only the advantages of K-M papers (new interpretations and fruitful
> discussions) but also the disadvantages: excessive jargons, provocative FAQ,
> and a superiority subtext.
>
> Nevertheless, I like the concept of the paper and would support the discussion
> on the topic. However, I do not like the means of this discussion, nor I think
> that the discussion papers are appropriate for conferences. Journals (for long
> conversations) or workshops (for short fights) would be more suitable.
>
> I also have some small comments. First, the AES "small complexity" attack is
> poorly described. The original Koblitz-Menezes argument in the HMAC paper
> picks up a message with a large deviation from mean, whereas this paper
> conducts an a bit different, but quite negligent analysis so that I am not
> sure that it is correct.
>
> Secondly, the AT metric is offered with few supportive arguments, so its
> practical relevance is arguable at least. Unfortunately, the text ignores most
> of criticism raised to this metric in the recent past, which includes, e.g., a
> discussion over static and dynamic memories and the fact that using up to
> gigabytes of memory comes with little overhead. A discussion paper should have
> considered more points of view.
Our comments:
> Here is another discussion paper on the relevance of security
> definitions and cost metrics.
This reviewer, like the previous reviewer, tries to avoid commenting on
the scientific merit of our paper by simply dismissing it as a
"discussion paper".
This paper shows that many previous conjectures are false. How can a
paper with that accomplishment be dismissed as merely a "discussion
paper"? Surely the reviewer noticed the paper's statement of this
contribution:
1.1. Contributions of this paper. Our primary goal in this paper is
to convincingly undermine all of the standard security conjectures
discussed above. Specifically, Sections 3, 4, 5, and 6 show---
assuming standard, amply tested heuristics---that there _exist_
high-probability attacks against AES, the NIST P-256 elliptic curve,
DSA-3072, and RSA-3072 taking considerably less than 2^128 time. ...
The conjectures by Bellare, Kilian, and Rogaway ... are false for
every reasonable assignment of the unspecified constants.
The reviewer can try arguing that we're wrong and that the conjectures
are actually correct; or that we're right but that some specific paper
already disputed the conjectures; or that we're right, and this is new,
but it isn't interesting enough to publish. Something is seriously
wrong, however, when the reviewer doesn't even _acknowledge_ the main
point of the paper.
> It follows the well-known series "Another look" by Koblitz and
> Menezes with an attempt to shake some fundaments of modern
> cryptography.
Why doesn't this reviewer acknowledge, and evaluate, what the paper
actually says?
> Unfortunately, the paper, though being written by other
> authors, inherits not only the advantages of K-M papers (new
> interpretations and fruitful discussions) but also the disadvantages:
> excessive jargons, provocative FAQ, and a superiority subtext.
First of all, why is this reviewer commenting on authorship?
Second, what exactly are the words that the reviewer claims to be
"jargon", and what exactly is the problem with the FAQ?
As for "superiority subtext", the reviewer is clearly straying rather
far from normal scientific discourse into vague emotional evaluations.
It's clear that this reviewer has some sort of problem with Koblitz and
Menezes, but it's not clear whether the reviewer actually has anything
to say about _this_ paper; it's not even clear that the reviewer spent
more than two minutes looking at this paper.
> Nevertheless, I like the concept of the paper and would support the
> discussion on the topic. However, I do not like the means of this
> discussion, nor I think that the discussion papers are appropriate for
> conferences. Journals (for long conversations) or workshops (for short
> fights) would be more suitable.
This paper disproves a range of conjectures in the literature---not to
mention all the other scientific contributions of the paper, such as the
first concrete analysis of the factorization factory. How can this not
be an appropriate topic for conferences? Again the reviewer is trying to
dismiss the paper as a "discussion paper" rather than evaluating the
paper's contributions to human knowledge.
> I also have some small comments. First, the AES "small complexity"
> attack is poorly described.
This comment lacks specifics, is completely non-constructive, and leaves
us entirely in the dark as to what the reviewer supposedly had trouble
with.
Did the reviewer have trouble understanding the (quite simple) attack
description? Or did the reviewer have trouble understanding the attack
analysis (which occupies the next several sentences and requires some
basic knowledge of statistics)? Or does the reviewer doubt something in
the analysis (we're now adding an appendix with computer verification)?
Or did the reviewer simply not bother reading the paper?
> The original Koblitz-Menezes argument in
> the HMAC paper picks up a message with a large deviation from mean,
> whereas this paper conducts an a bit different, but quite negligent
> analysis so that I am not sure that it is correct.
There are two different attacks here. Section 3.1, the simplest, is
limited to low probability and does _not_ need any precomputation. (The
lack of precomputation also allows computer verification, which is now
in Appendix V.) Section 3.2 uses much more precomputation to achieve
arbitrarily high probabilities. The independent Koblitz--Menezes attack
is essentially a limited special case of Section 3.2, using _some_
precomputation to achieve somewhat higher probabilities than Section
3.1.
The standard protocol for questioning an analysis is to point to the
first part of the analysis that the reader doesn't understand or doesn't
agree with---which would give us a basis for making progress, perhaps by
adding a more detailed explanation or identifying the reader's error.
A reviewer who simply waves vaguely at the whole thing and says "I'm
not sure about this" is violating this protocol---presumably because the
reviewer didn't spend any time trying to understand the analysis. It's
again quite unclear that this reviewer spent more than two minutes
looking at this paper.
> Secondly, the AT metric is offered with few supportive arguments, so
> its practical relevance is arguable at least.
Is the reviewer trying to dispute something in the paper? Did the
reviewer actually read what the paper says about AT?
> Unfortunately, the text ignores most of criticism raised to this
> metric in the recent past, which includes, e.g., a discussion over
> static and dynamic memories and the fact that using up to gigabytes of
> memory comes with little overhead.
First of all, the reviewer provides zero citations to the criticism that
we're allegedly ignoring.
Second, static RAM and dynamic RAM both consist of transistors and
wires, fitting without trouble into the AT model. They have somewhat
different performance characteristics; optimizing AT costs of various
computations produces quite reasonable choices between static RAM and
dynamic RAM. The reviewer provides no indication of why this is supposed
to indicate any problems with the AT model.
Third, the claim about "gigabytes of memory" shows a rather stunning
level of ignorance. State-of-the-art analyses of cryptanalytic ASICs
consistently conclude that optimal designs have a rather small amount of
memory per processing core---one widely cited example is the Crypto 2003
Shamir--Tromer TWIRL paper, but there are many more examples. Having
gigabytes of memory sitting around twiddling their thumbs, waiting for a
single processing core, is silly; a slightly larger circuit area stores
the same amount of memory and many more processing cores, producing a
chip with vastly better real-world performance, vastly lower AT cost,
and of course far less memory per core.
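A back-of-the-envelope sketch, with purely illustrative numbers of our
own choosing (the areas, work count, and core counts below are
assumptions, not figures from the paper), shows why AT accounting
pushes designs toward many cores with little memory each:

```python
# Toy AT accounting (illustrative units): a core and a memory cell each
# occupy fixed area; W serial operations finish in time W/N on N cores.

CORE_AREA = 1000.0   # assumed area units per processing core
CELL_AREA = 1.0      # assumed area units per memory cell
WORK = 1e9           # assumed total operations needed by the attack

def at_cost(cores: int, memory_cells: int) -> float:
    area = cores * CORE_AREA + memory_cells * CELL_AREA
    time = WORK / cores
    return area * time

# Design 1: one core attached to a gigabyte-scale memory.
lopsided = at_cost(cores=1, memory_cells=10**9)
# Design 2: the same total area, spent mostly on cores with small
# local memories; time shrinks by the core count.
balanced = at_cost(cores=500_000, memory_cells=500 * 10**6)

assert balanced < lopsided  # the many-core design wins by orders of magnitude
```

For equal area, AT cost scales inversely with the number of cores kept
busy, which matches the small-memory-per-core conclusions of the ASIC
design literature cited above.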
> A discussion paper should have considered more points of view.
A paper that clearly states its main claimed contribution is entitled to
reviews that acknowledge and evaluate the claim---which this review
completely fails to do. We don't see how we can defend the paper against
purely political reviews that ignore the scientific contributions of the
paper, or against committees that tolerate such reviews.
========================================================================
Asiacrypt 2012 review:
> This paper shows that standard definitions of security "there exists no fast
> adversary that can ..." are problematic, because in many cases, there exists
> much faster adversaries than usually considered. The basic idea is that any
> precomputation can be given for free by the existential quantifier.
>
> This is a nice contribution which deserve acceptance, because these issues
> really need to be seriously taken into account (which does not necessarily
> mean that they will be solved in a satisfactory manner. Indeed, the definition
> of collision freeness suffers from a very severe form of the problem which is
> well-known and has never really been solved).
>
> I am less convinced by the alternate metric proposal. First is there any real
> difference between AT and Full-cost (ok, it involves circuits instead of
> machines but it still seem very similar). Second, I am not sure that this
> really solve the problem and I have the impression that it confuses the issue.
Our comments:
> This paper shows that standard definitions of security "there exists no fast
> adversary that can ..." are problematic, because in many cases, there exists
> much faster adversaries than usually considered. The basic idea is that any
> precomputation can be given for free by the existential quantifier.
Right.
> This is a nice contribution which deserve acceptance, because these issues
> really need to be seriously taken into account (which does not necessarily
> mean that they will be solved in a satisfactory manner. Indeed, the definition
> of collision freeness suffers from a very severe form of the problem which is
> well-known and has never really been solved).
Thanks.
> I am less convinced by the alternate metric proposal. First is there any real
> difference between AT and Full-cost (ok, it involves circuits instead of
> machines but it still seem very similar).
"Full cost" appears to be a reinvention of VT, which is somewhat
different from AT. We're expanding the FAQ to address this.
> Second, I am not sure that this
> really solve the problem and I have the impression that it confuses the issue.
We spend huge parts of the paper analyzing the extent to which this
solves the problem. What exactly is the reviewer thinking? It's
impossible to make progress if the reviewer doesn't provide more
details.
========================================================================
Asiacrypt 2012 review:
> Sections 3 and up show how several popular cryptosystems can be attacked when
> combining standard cryptanalytic techniques published decades ago and free
> precomputation. This is, also from page 4, reasonably easy and in some cases
> has been done before. The sections after the introduction could be of
> interest
> for a 2nd tier conference.
>
> Explain how the AT metric compares to Wiener's full cost, which was as far as
> I know the first application of a similar metric in a similar context.
>
> The authors show convincingly that security proofs based on the existence of
> efficient attacks are problematic because such attacks exist even though they
> would require massive computation to find. This creates a mismatch between
> the theoretical security levels and security in practice. The fact that the
> authors show the existence of such impractical to find but efficient attacks
> for a wide range of cryptographic schemes makes the paper even more
> compelling. This paper is a clear accept.
>
> On the subject of attack cost metrics, none of the 3 considered in the paper
> is satisfactory, although this is not really a criticism of this paper; the
> authors are entitled to criticise previous work on security proofs using these
> same metrics used in the previous work. In Section B.6 the authors recommend
> switching to the AT metric, but this denies the 3D-ness of our world. For
> small circuits on a single chip the world looks 2D, but as we scale problems
> to larger sizes, circuits require many chips, boards, etc. Communication
> among chips takes advantage of the third dimension. So, a VT (volume * time)
> metric makes more sense asymptotically (with proper accounting for any added
> circuit volume that comes from high-speed parallel access to a large memory).
> My suspicion is that some of the pre-computation "problems" that go away by
> adopting the AT metric would resurface if we switch to VT. But, as the
> authors say "accurately modeling reality is more important than minimizing the
> number of changes required to the literature."
Our comments:
> Sections 3 and up show how several popular cryptosystems can be attacked when
> combining standard cryptanalytic techniques published decades ago and free
> precomputation. This is, also from page 4, reasonably easy and in some cases
> has been done before. The sections after the introduction could be of
> interest for a 2nd tier conference.
A followup to the ECC section, with extensive optimizations, computer
verification, and even some positive applications, subsequently appeared
at Indocrypt 2012. But of course the point of the paper is broader than
any single attack. Trying to split the paper into small pieces would
detract from the primary and secondary contributions of the paper.
> Explain how the AT metric compares to Wiener's full cost, which was as far as
> I know the first application of a similar metric in a similar context.
We're addressing this in the FAQ. The AT metric was formally defined in
the 1980s; it was used for cryptanalysis at least three years before
Wiener, if not earlier; and Wiener's metric actually appears to be VT
(also from the 1980s), which is much less realistic than AT.
> The authors show convincingly that security proofs based on the existence of
> efficient attacks are problematic because such attacks exist even though they
> would require massive computation to find. This creates a mismatch between
> the theoretical security levels and security in practice. The fact that the
> authors show the existence of such impractical to find but efficient attacks
> for a wide range of cryptographic schemes makes the paper even more
> compelling. This paper is a clear accept.
Sounds good.
> On the subject of attack cost metrics, none of the 3 considered in the paper
> is satisfactory, although this is not really a criticism of this paper; the
> authors are entitled to criticise previous work on security proofs using these
> same metrics used in the previous work.
Thanks.
> In Section B.6 the authors recommend
> switching to the AT metric, but this denies the 3D-ness of our world. For
> small circuits on a single chip the world looks 2D, but as we scale problems
> to larger sizes, circuits require many chips, boards, etc. Communication
> among chips takes advantage of the third dimension. So, a VT (volume * time)
> metric makes more sense asymptotically (with proper accounting for any added
> circuit volume that comes from high-speed parallel access to a large memory).
> My suspicion is that some of the pre-computation "problems" that go away by
> adopting the AT metric would resurface if we switch to VT. But, as the
> authors say "accurately modeling reality is more important than minimizing the
> number of changes required to the literature."
We're addressing this in the FAQ. Both VT and AT eliminate this paper's
high-probability attacks, but VT is not physically realistic.
========================================================================
Asiacrypt 2012 review:
> A recent paper by Koblitz and Menezes (KM) has sparked extensive
> discussion on the role of non-uniformity in concrete security. This
> paper attempts to add some technical elements to this ongoing
> discussion. In all fairness, the main issue is rather simple to
> explain: Some reductions in security proofs are non-constructive
> (something which, unfortunately, is not always made obvious), yet when
> concrete parameters are derived to assess the insecurity of a
> provable-secure scheme, authors tend to plug in numbers referring to
> the best *explicit* (i.e. constructive) cryptanalytic attack. The
> problem is that one can prove non-constructively the existence of
> cryptanalytic attacks which outperform the best explicit cryptanalytic
> attacks, and there is a common understanding among cryptanalysts that
> attacks which cannot be found can be safely ignored. Ideally, one
> should measure the complexity of attacks in terms of the complexity of
> running them *and* of finding the attack itself.
>
> In short, the contributions of this paper can be summarized as first,
> reiterating this story with some further detail, and second,
> presenting some concrete examples of non-constructive attacks which
> outperform the best explicit cryptanalytic attacks.
>
> Before I turn to discussing the different attacks presented in the
> paper, let me first address some more high level points about this
> work. First off, I must admit being unsure whether this paper deserves
> credit for identifying the problem: Following KM's paper, there has
> been widespread discussion in the community about non-uniform
> reductions and by now, the problem is well known in the public domain.
> Establishing the role of the current submission in the discussion is
> very difficult, something not made easier by the context of an
> anonymous submission and the fast pace of the ongoing discussion.
>
> However, even granted the paper indeed deserves credit for recognizing
> the problem, I strongly disagree with its overly dramatic tone. There
> is no "error" in the security metric, as repeatedly claimed in the
> paper. Or even worse, as stated at the very beginning of the abstract,
> there is no flaw in the "standard security definitions used in the
> literature on provable concrete security." A metric is a well-defined
> number, and the only mistake one can do is not in the metric itself,
> but just misunderstanding it and plugging in the wrong numbers, which
> is all that happened. Or the metric may not capture what we want, and
> that is up for debate. Similarly, the security definitions are
> formally completely correct. Why not just say that, and avoid claiming
> a fundamental problem with provable security which may in fact be
> nowhere near as serious as this paper wants us to believe?
>
> At this point, let me stress that the paper is a bit confusing as to
> what its main message is: Is the paper simply trying to tell us that
> the wrong numbers have been used or is it also suggesting that we
> should change the metric to encompass the time to find an attack also?
> The extensive use of words such as "flaw" and "error" suggests the
> latter.
>
> But then, it is fair to say that discrediting the usual approach of
> making non-constructive assumptions would also require some more care:
> Are we really sure that only considering attacks which can be *found*
> as relevant is the right way to go? Certainly, that is how we justify
> keyless hash functions as being appropriate for use in the real world,
> but there, we have no other choice. With other cryptographic
> primitives, such as a block cipher, it is really only a matter of
> worse parameters. Why not just take the best attack (whether we can
> run it or not) to assess parameters in a more conservative way? As an
> example, say I have an attack which depends on a string of length 128
> bits, which we do not know explicitly (such attacks against AES can be
> found, even in an easier way than what described in the paper; see
> below). Of course, it would seem it takes time O(2^{128}) to find such
> an attack, which is only slightly worse than the best constructive
> attack against AES. However, note the following: For the
> non-constructive attack, once the 128-bit string is found, then *any*
> instance of AES is broken -- a massive advantage over the constructive
> attack. I would expect the authors to further explain why such attacks
> are still not so relevant and should not be captured.
>
> Note that in the above discussion, by the way, I am explicitly
> avoiding the use of "non-uniform" vs "uniform", as this naming is
> somewhat incorrect. We are here talking about concrete security,
> without asymptotics, whereas uniform and non-uniform are inherently
> asymptotic terms. A security proof being "constructive" or "explicit"
> (as in Phil Rogaway's work on "Formalizing Human Ignorance") is the
> right term sought for. In principle, a *uniform* reduction or
> adversary may well be non-constructive! For example, say we need to
> instantiate a block cipher with key length 128 bits. Then, take any
> uniform attack against a family of block ciphers, and modify it so
> that for key length 128 it behaves as the best non-constructive attack
> against the block cipher for that key length by hard coding some
> advice in the algorithm. The attack remains *uniform*, but its
> concrete security remains the same for 128 bits. The same argument
> holds for any fixed key length, and of course, can be extended to
> reductions rather than adversaries.
>
> As for the aforementioned attacks (which are the main technical
> contribution of this paper), the paper presents attacks against PRP
> security of block ciphers, discrete logarithms, DSA, and RSA which are
> non-constructive and outperform the best constructive attacks. A
> complexity analysis is present in the different complexity metrics.
> While some attacks are definitely interesting, their presentation is
> way too compact and very hard to read. It is difficult to filter out
> which attacks have new ideas and which ones do not. Also, attacks are
> presented as against specific primitives / parameters, but they are
> actually almost always fully generic (as the authors themselves
> admit). Why not present them like that? In addition, I am fairly
> confused by the presented attacks against AES, as their analysis seems
> to rely on heuristic arguments, such as random oracles. Aren't they
> all outperformed by the very simple attacks presented by De,
> Trevisan, and Tulsiani (CRYPTO 2010)? The latter paper is only cited,
> but a more in-depth comparison would be appropriate.
>
> In short: The paper certainly brings up valid points to an ongoing
> fundamental discussion in our research community, and deserves credit
> for that. However, in my opinion, the paper is unfortunately too far
> from being a precise and fair explanation of the problem to be
> accepted at a major conference as the representative paper for this
> ongoing discussion.
Our comments:
> A recent paper by Koblitz and Menezes (KM) has sparked extensive
> discussion on the role of non-uniformity in concrete security. This
> paper attempts to add some technical elements to this ongoing
> discussion.
"Attempts"? This sounds like the attempt was a failure. How about
reporting what the paper actually accomplishes?
> In all fairness, the main issue is rather simple to explain:
"In all fairness"? This sounds like it's contradicting something. Is the
reviewer trying to say that the paper doesn't explain the main issue?
> Some reductions in security proofs are non-constructive
> (something which, unfortunately, is not always made obvious), yet when
> concrete parameters are derived to assess the insecurity of a
> provable-secure scheme, authors tend to plug in numbers referring to
> the best *explicit* (i.e. constructive) cryptanalytic attack.
It's not just the _proofs_ being non-constructive. Non-constructivity
is built into the _theorem statements_ and into the _conjectures_ and
into the underlying _definitions_. Nowhere does the reviewer mention
that we disprove the conjectures, and nowhere does the reviewer mention
that we explain how to fix the definitions.
Is the reviewer actually responding to a recent Koblitz--Menezes paper
instead of this paper? The emphasis on proofs seems consistent with
this.
> The
> problem is that one can prove non-constructively the existence of
> cryptanalytic attacks which outperform the best explicit cryptanalytic
> attacks, and there is a common understanding among cryptanalysts that
> attacks which cannot be found can be safely ignored. Ideally, one
> should measure the complexity of attacks in terms of the complexity of
> running them *and* of finding the attack itself.
The reviewer obviously understands one big part of what we're saying,
but completely fails to credit it to us.
We agree with the reviewer that it's good to measure the complexity of
finding an attack---but where are the previous papers suggesting this,
and explaining _how_ to do it? The reviewer talks about this "ideal" as
if it were something well known, rather than something suggested and
accomplished for the first time in this paper.
> In short, the contributions of this paper can be summarized as first,
> reiterating this story with some further detail,
"Reiterating"? Where was any of this published before? As Wikipedia
would say, "citation needed".
> and second,
> presenting some concrete examples of non-constructive attacks which
> outperform the best explicit cryptanalytic attacks.
The reviewer is again wildly understating our contributions---trying
to claim that the general gap was understood before and that we're
merely "presenting some concrete examples". If it was understood before,
why do people keep making conjectures that are false _exactly_ because
of this gap?
> Before I turn to discussing the different attacks presented in the
> paper, let me first address some more high level points about this
> work. First off, I must admit being unsure whether this paper deserves
> credit for identifying the problem: Following KM's paper, there has
> been widespread discussion in the community about non-uniform
> reductions and by now, the problem is well known in the public domain.
What, precisely, is "the problem" that the reviewer refers to, and where
is the evidence that "the problem" was understood before our paper?
Koblitz and Menezes questioned Bellare's use of non-uniformity in one
_proof_ but all available evidence is that their question was dismissed
by the community as pure ignorance. Lindell wrote "This time they really
outdid themselves since there is actually no error. Rather the proof of
security is in the non-uniform model, which they appear to not be
familiar with." Katz wrote "There was no mistake in the original proof
at all! Rather, it was a misunderstanding of the security model being
used."
What actually exposes the trouble is showing that non-uniform _attacks_
disprove standard _conjectures_ using standard _definitions_ and
undermine standard _theorem statements_. All of this appeared first in
our paper, except that (as we say in the paper) the low-probability case
of the PRF trouble was pointed out independently by Koblitz and Menezes.
We deserve credit for ideas that we were the first to publish. This
"public domain" argument is idiotic; if someone else was first, fine,
but the reviewer can't claim this without pointing to that person. We're
adding an explicit discussion of priority dates.
> Establishing the role of the current submission in the discussion is
> very difficult, something not made easier by the context of an
> anonymous submission and the fast pace of the ongoing discussion.
All available evidence is that the "ongoing discussion" was nothing more
than dismissal of what Koblitz and Menezes were saying---until our paper
made the underlying problem clear by disproving the standard security
conjectures. Example: Katz, referring to an early rump-session
presentation of our results, publicly wrote "I think [this] has a valid
point (that was not articulated, at least not explicitly, in the
original Koblitz-Menezes paper)."
Furthermore, the standard in science is to define human knowledge as the
totality of what is PUBLISHED, and to credit each extension of human
knowledge to the first PUBLICATION---even if it's possible to prove
after the fact that someone else privately figured out the same thing
earlier.
> However, even granted the paper indeed deserves credit for recognizing
> the problem, I strongly disagree with its overly dramatic tone. There
> is no "error" in the security metric, as repeatedly claimed in the
> paper. Or even worse, as stated at the very beginning of the abstract,
> there is no flaw in the "standard security definitions used in the
> literature on provable concrete security." A metric is a well-defined
> number, and the only mistake one can do is not in the metric itself,
> but just misunderstanding it and plugging in the wrong numbers, which
> is all that happened.
This is the nitwit formalist response: any clear definition is just
fine, never mind how well it matches the reality that it's supposed to
be modeling.
For comparison, the original 1994 Bellare--Kilian--Rogaway security
metric sounded reasonable at first but turned out to be ludicrously
inaccurate: it allowed any K-bit cipher to be broken in time essentially
K. Once this "pathology" was pointed out, the definition was changed.
But all this reviewer cares about is that the security metric is
"well-defined"; the reviewer says there can't be a "flaw" in anything
that's well-defined. Ridiculous.
> Or the metric may not capture what we want, and
> that is up for debate.
All of our high-probability attacks are obviously infeasible but are
incorrectly declared by the standard definitions to be feasible. That's
a clear error in modeling. Even worse, people have been fooled again and
again by exactly this error, making conjectures based directly on the
incorrect belief that the standard definition of feasibility matches
reality, and we've now shown that all of those conjectures are false.
When there's a clear alignment between what the attackers can actually
do, what the cryptanalysts are looking at, what the conjectures are
saying, and what the theorems are _supposed_ to be talking about, and
the only thing out of whack with the picture is the definition, is this
reviewer really going to try to argue that the definition is just fine
and that everybody else is looking at the wrong thing? Does the reviewer
also think that it's "up for debate" that there was a flaw in the 1994
Bellare--Kilian--Rogaway security definitions?
> Similarly, the security definitions are
> formally completely correct. Why not just say that, and avoid claiming
> a fundamental problem with provable security which may in fact be
> nowhere near as serious as this paper wants us to believe?
See above.
> At this point, let me stress that the paper is a bit confusing as to
> what its main message is: Is the paper simply trying to tell us that
> the wrong numbers have been used or is it also suggesting that we
> should change the metric to encompass the time to find an attack also?
> The extensive use of words such as "flaw" and "error" suggests the
> latter.
The paper very clearly answers this question.
> But then, it is fair to say that discrediting the usual approach of
> making non-constructive assumptions would also require some more care:
> Are we really sure that only considering attacks which can be *found*
> as relevant is the right way to go? Certainly, that is how we justify
> keyless hash functions as being appropriate for use in the real world,
> but there, we have no other choice. With other cryptographic
> primitives, such as a block cipher, it is really only a matter of
> worse parameters. Why not just take the best attack (whether we can
> run it or not) to assess parameters in a more conservative way?
We're adding some discussion of this "more conservative" fallacy: in
particular, an explanation of how underestimating security can cause
users to _lose_ security.
> As an
> example, say I have an attack which depends on a string of length 128
> bits, which we do not know explicitly (such attacks against AES can be
> found, even in an easier way than what described in the paper; see
> below). Of course, it would seem it takes time O(2^{128}) to find such
> an attack, which is only slightly worse than the best constructive
> attack against AES.
No such attacks are known, and even if they were known they wouldn't
contradict anything we're saying.
> However, note the following: For the
> non-constructive attack, once the 128-bit string is found, then *any*
> instance of AES is broken -- a massive advantage over the constructive
> attack. I would expect the authors to further explain why such attacks
> are still not so relevant and should not be captured.
This fallacy was already addressed in Section B.1. We're now splitting
it off into B.1.1, illustrating it with an example, and citing it in the
FAQ list.
> Note that in the above discussion, by the way, I am explicitly
> avoiding the use of "non-uniform" vs "uniform", as this naming is
> somewhat incorrect. We are here talking about concrete security,
> without asymptotics, whereas uniform and non-uniform are inherently
> asymptotic terms. A security proof being "constructive" or "explicit"
> (as in Phil Rogaway's work on "Formalizing Human Ignorance") is the
> right term sought for. In principle, a *uniform* reduction or
> adversary may well be non-constructive! For example, say we need to
> instantiate a block cipher with key length 128 bits. Then, take any
> uniform attack against a family of block ciphers, and modify it so
> that for key length 128 it behaves as the best non-constructive attack
> against the block cipher for that key length by hard coding some
> advice in the algorithm. The attack remains *uniform*, but its
> concrete security remains the same for 128 bits. The same argument
> holds for any fixed key length, and of course, can be extended to
> reductions rather than adversaries.
The terminological ignorance displayed here is easy to dispel, and we're
now addressing it in the FAQ.
> As for the aforementioned attacks (which are the main technical
> contribution of this paper), the paper presents attacks against PRP
> security of block ciphers, discrete logarithms, DSA, and RSA which are
> non-constructive and outperform the best constructive attacks. A
> complexity analysis is present in the different complexity metrics.
> While some attacks are definitely interesting, their presentation is
> way too compact and very hard to read. It is difficult to filter out
> which attacks have new ideas and which ones do not.
The paper presents several quite different attacks, covering all of the
most common cryptographic primitives used in practice. Given the space
restrictions, some brevity is unavoidable, but at the same time we are
not aware of anything unclear.
As for novelty, we explicitly give credit to others for various ideas
and claim credit for various ideas of our own. Historical discussions
occupy a considerable fraction of the text.
> Also, attacks are
> presented as against specific primitives / parameters, but they are
> actually almost always fully generic (as the authors themselves
> admit). Why not present them like that?
Here's how Section 2 now answers this: "All of the attacks readily
generalize to other block ciphers; none of the attacks exploit any
particular weakness of AES. We focus on AES because of its relevance in
practice and to have concrete numbers to illustrate the attacks."
Concreteness aids clarity.
> In addition, I am fairly
> confused by the presented attacks against AES, as their analysis seems
> to rely on heuristic arguments, such as random oracles. Aren't they
> all outperformed by the very simple attacks presented by De,
> Trevisan, and Tulsiani (CRYPTO 2010)? The latter paper is only cited,
> but a more in-depth comparison would be appropriate.
No, they aren't outperformed by those attacks. We're now covering this
in the FAQ.
> In short: The paper certainly brings up valid points to an ongoing
> fundamental discussion in our research community, and deserves credit
> for that. However, in my opinion, the paper is unfortunately too far
> from being a precise and fair explanation of the problem to be
> accepted at a major conference as the representative paper for this
> ongoing discussion.
We're aware of an American news station that advertises itself as "fair
and balanced", but this is not common language for scientific reviews.
Scientific papers are supposed to be evaluated on the basis of what they
contribute to human knowledge.
========================================================================
Asiacrypt 2012 review:
> In this paper the authors reconsider the question of uniform vs. non-uniform
> algorithms for attacking cryptographic schemes. (Actually, that is not quite
> right because the paper explicitly deals with a *concrete* rather than
> *asymptotic* setting. Nevertheless, the terminology is useful.) Basically the
> observation is that in many settings there is a big difference between the
> best algorithm that *exists* and the best algorithm that we *can find*. This
> is best illustrated by the case of keyless hash functions, where there clearly
> exists an O(1)-time attack but, for SHA-256 at least, we do not know how to
> find such an attack. In several other cases (including PRP distinguishing
> attacks, factoring, and computing discrete logarithms) the paper shows that
> "fast" attacks exist if unbounded precomputation is allowed.
>
> The broader point of the paper is that the community should reconsider
> definitions of security in the concrete setting. The paper makes a case that
> the "AT metric" (which roughly seems to correspond to the product of space and
> time complexity of standard algorithms) is the "right" metric to use.
>
> I find the paper interesting, but I am not sure whether it should be published
> at Asiacrypt. The technical results are, for the most part, not new. (The
> authors acknowledge as much.) What is left is a position paper which seems
> perfect for eprint or for a discussion at a workshop, but somehow not the best
> fit for Asiacrypt. I also don't fully agree with the authors' conclusions. For
> starters, the AT metric does not fully solve the problem (unkeyed hash
> functions are still problematic, as are the PRP attacks in Section 3.1). In
> addition, I'm not sure what would change if the community moved to the AT
> metric. Is the authors' suggestion to re-do all the proofs in the literature
> using the AT metric?
>
> Other:
> I was thrown by the discussion of tightness of security proofs on pages 1-3.
> What is the connection between that and the rest of the paper? On page 3 you
> write "The error in the standard insecurity metric has a troubling
> consequence..." but I don't see why: if P_1 has a tight reduction to some
> assumption A, and P_2 has a non-tight reduction to A, then P_1 is indeed
> "safer" than P_2 regardless of whether or not our initial assumption about the
> hardness of A is correct.
>
> Section 4.2: why does the time required for the precomputation even matter?
Our comments:
> In this paper the authors reconsider the question of uniform vs. non-uniform
> algorithms for attacking cryptographic schemes. (Actually, that is not quite
> right because the paper explicitly deals with a *concrete* rather than
> *asymptotic* setting. Nevertheless, the terminology is useful.)
The attacks are concrete, non-uniform, and (mostly) non-constructive.
These are not equivalent concepts.
> Basically the
> observation is that in many settings there is a big difference between the
> best algorithm that *exists* and the best algorithm that we *can find*. This
> is best illustrated by the case of keyless hash functions, where there clearly
> exists an O(1)-time attack but, for SHA-256 at least, we do not know how to
> find such an attack. In several other cases (including PRP distinguishing
> attacks, factoring, and computing discrete logarithms) the paper shows that
> "fast" attacks exist if unbounded precomputation is allowed.
This was previously thought to be specific to the hash-function case.
> The broader point of the paper is that the community should reconsider
> definitions of security in the concrete setting. The paper makes a case that
> the "AT metric" (which roughly seems to correspond to the product of space and
> time complexity of standard algorithms) is the "right" metric to use.
Quote falsification alert: the paper says some positive things about the
AT metric but never says that the metric is "right". The paper quite
clearly recommends a further change, namely adding constructivity.
As for the reviewer's "correspond", there's an important difference:
"standard" algorithms don't allow parallelism, while the AT metric does.
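To make the difference concrete, here is a toy sketch of AT accounting for
brute-force search over a k-bit keyspace (an illustrative assumption, not an
attack from the paper): spreading the work over more cores cuts the time T
but grows the area A proportionally, so the AT cost is invariant, whereas a
time-only metric would report the parallel machine as exponentially cheaper.

```python
# Toy AT-metric accounting for brute-force search over 2^k keys.
# Assumption for illustration: area ~ number of cores, and the keyspace
# is split evenly, so each of 2^p cores searches 2^(k-p) keys serially.
k = 128
for p in [0, 20, 40]:             # log2 of the number of parallel cores
    area = 2**p                   # chip area ~ number of cores
    time = 2**(k - p)             # serial steps per core
    assert area * time == 2**k    # AT cost is invariant under parallelism
    # a time-only metric would report cost 2^(k-p), rewarding parallelism
```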
> I find the paper interesting, but I am not sure whether it should be published
> at Asiacrypt. The technical results are, for the most part, not new. (The
> authors acknowledge as much.)
On the contrary! There are many new technical results here. Anyone who
reads the attack descriptions can see many points clearly identified as
new. We're expanding the discussion of this in the introduction.
> What is left is a position paper which seems
> perfect for eprint or for a discussion at a workshop, but somehow not the best
> fit for Asiacrypt.
Disproving conjectures from the literature, and analyzing ways to fix
the conjectures (by fixing the definitions), is a "position paper"?
> I also don't fully agree with the authors' conclusions. For
> starters, the AT metric does not fully solve the problem (unkeyed hash
> functions are still problematic, as are the PRP attacks in Section 3.1).
We never claim that the AT metric fully fixes the problem; in fact, we
clearly state the opposite. We're adding a FAQ entry that might help
sloppy readers find the relevant part of the paper.
> In
> addition, I'm not sure what would change if the community moved to the AT
> metric. Is the authors' suggestion to re-do all the proofs in the literature
> using the AT metric?
Yes. Did the reviewer read the paper? We're adding a FAQ entry.
> Other:
> I was thrown by the discussion of tightness of security proofs on pages 1-3.
> What is the connection between that and the rest of the paper? On page 3 you
> write "The error in the standard insecurity metric has a troubling
> consequence..." but I don't see why: if P_1 has a tight reduction to some
> assumption A, and P_2 has a non-tight reduction to A, then P_1 is indeed
> "safer" than P_2 regardless of whether or not our initial assumption about the
> hardness of A is correct.
If the tight reduction is something non-uniform and non-constructive
such as Bellare's HMAC reduction---which is allowed by the definitions,
and by the theorem statements---then it becomes vacuous at the attack
costs that we describe, such as 2^85 for NIST P-256. If the non-tight
reduction is constructive then it's starting from a much higher security
level, such as 2^128 for NIST P-256; if the tightness loss is (say) 2^20
then the guarantee for P_2 is much stronger than the guarantee for P_1,
contrary to this reviewer's conclusion.
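The arithmetic above can be sketched in a few lines (the 2^85, 2^128, and
2^20 figures are the ones quoted in this comment; the 2^20 tightness loss
is the hypothetical value used for illustration):

```python
# P_1: tight but non-constructive reduction for NIST P-256; the
# non-constructive attacks described in the paper cap its meaningful
# guarantee at roughly 2^85 (log2 security levels throughout).
p1_guarantee = 85

# P_2: constructive reduction starting from a 2^128 security level,
# losing a hypothetical tightness factor of 2^20.
p2_tightness_loss = 20
p2_guarantee = 128 - p2_tightness_loss   # 2^108

# Despite the non-tight reduction, P_2's guarantee beats P_1's.
assert p2_guarantee > p1_guarantee
```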
We're redoing the introduction to focus on absolute security of a single
scheme; hopefully this is easier to understand than relative security of
two schemes.
> Section 4.2: why does the time required for the precomputation even matter?
Because it shows how infeasible the computation is---something not
captured by the standard definitions but captured by one of our
recommended fixes. We're adding a FAQ entry to cover this.
========================================================================