
1 Introduction

1.1 Background

Motivated by the fact that an adversary may maliciously modify the secret information of a cryptographic scheme by executing tampering attacks (e.g., heating up devices or injecting faults [12, 13]) and then observe the effect of the changes, Bellare and Kohno [10] and Gennaro et al. [37] independently initiated the study of tamper-resilient primitives. Bellare and Kohno proposed block ciphers secure against restricted tampering attacks (i.e., attacks in which the class of tampering functions available to the adversary is restricted; see Footnote 1), and gave a negative result showing that there exists no tamper-resilient block cipher against arbitrary tampering functions. In their model (called the plain model in our paper), secret keys are potentially tampered with and there is no trusted setup. Gennaro et al. treated primitives secure against arbitrary tampering functions, but in their model (called the on-line model in [35]) secret keys implicitly contain trusted public keys. Although most subsequent works [7, 8, 11, 17, 23, 24, 28, 34, 35, 49, 53, 62, 67] did not adopt the on-line model, they assumed the existence of trusted common reference strings (CRSs), due to the difficulty of achieving tamper-resilience in the plain model. Among them, several works [17, 23, 24, 28, 35] remain secure even when the tampering functions are arbitrary. Such strong security notions are worth considering, since it is hard to restrict the range of attacks in practice. However, putting strong trust in the entity that sets up public parameters is undesirable in practice, and the assumption that tamper-proof public parameters are available is very strong. In particular, an adversary who can execute fault attacks on secret keys should also be able to alter CRSs stored together with the keys in the same device. Therefore, it is desirable to understand what kinds of primitives can be made tamper-resilient in the plain model. The research by Dziembowski et al. [27] showed a promising way to achieve tamper-resilience in the plain model, namely via non-malleable codes. Indeed, combining non-malleable codes with standard primitives straightforwardly yields primitives (even ones with uniqueness properties) secure against (at least one-time) tampering attacks in the plain model. However, this approach requires restrictions on the tampering functions (as do the follow-up works, e.g., [6, 18, 22, 29, 30, 46]). Another work, by Ateniese et al. [4], proved that unique signatures are secure against subversion attacks, which allow an adversary to maliciously modify signing algorithms and hence capture tampering attacks. However, their results assume that the attacks satisfy an undetectability property or that cryptographic reverse firewalls are available.

Up until now, all the (positive or negative) results on tamper-resilient primitives against arbitrary tampering functions either assumed tamper-proof public parameters (e.g., [23, 24, 28, 35, 49]), or focused on symmetric cryptography [10]. Hence, it remains unclear whether public key primitives can achieve (full) tamper-resilience in the plain model. In this paper, we study public key cryptography in this model and show broad negative results.

1.2 Our Results

We focus on the impossibility of proving the tamper-resilience of a wide class of cryptographic primitives via black-box reductions, and show several negative results. The type of black-box reduction we consider is the so-called fully black-box reduction, which does not use the code of the adversary. Moreover, a reduction algorithm should break the underlying assumption as long as the adversary it uses successfully breaks the tamper-resilience, no matter how much computing power that adversary has. We remark that most cryptographic primitives are proven secure via this type of reduction. We detail our results below.

Impossibility on Provable Deterministic Primitives (PDPs) and Signatures. At first, we consider a negative result on a class of primitives called PDPs, which were first defined by Abe et al. [2]. A PDP evaluates a function by using a secret key and generates a proof for the input/output pair, and it is required that there exist exactly one valid output for each input. The definition of PDPs captures various primitives such as verifiable random functions (VRFs) [44, 54, 55], verifiable unpredictable functions (VUFs) [54, 55], and unique signatures [39, 54, 55]. The (perfect) uniqueness property of these primitives prevents malicious signers from easily outputting many signatures on the same input, and thus prevents a simple denial-of-service attack on a verifier forced to verify many outputs on the same input, even when the key pairs are selected by the signer in a subtle way. Also, due to this property, PDPs can be viewed as perfectly binding commitments to an exponential number of (perhaps random-looking) outputs [54], and thus can play important roles in micropayments [57], resettable zero-knowledge proofs [56], updatable zero-knowledge databases [52], verifiable transaction escrow schemes [47], etc. On account of the wide usage of PDPs, their security under tampering attacks is important and worth studying.

For these primitives, we show that it is impossible to achieve their tamper-resilience via black-box reductions. More specifically, we show that if a PDP is weakly unpredictable,Footnote 2 then there exists no black-box reduction deriving its tamper-resilience from any assumption captured by the notion of a cryptographic game [38, 40, 68], where a (possibly inefficient) challenger interacts with a monolithic adversary.Footnote 3 Here, weak unpredictability only requires that any probabilistic polynomial-time (PPT) adversary, neither allowed to make a query nor given a public key, cannot come up with a valid input/output pair. It is clear that any non-trivial PDP should satisfy such weak security. Furthermore, differently from the negative results in [35, 37], we treat very weak tamper-resilience, where an adversary only makes one tampering query and one computing query.Footnote 4 Hence, our result also captures tamper-resilient PDPs with self-destructive or key-updating mechanisms.Footnote 5 We prove our result by using the simulatable attack paradigm proposed by Wichs [68], which is discussed in more detail in the next subsection. By slightly modifying this proof, we can extend our negative result to the more general notion of re-randomizable signatures [43, 66].

As by-product results, we prove the same negative result on weakly unforgeable unique-key signatures, in which there exists only one valid secret key for each public key, and on injective one-way functions (OWFs). Here, weak unforgeability is defined in the same way as weak unpredictability. A broad class of existing signature schemes, e.g., ones where secret keys are discrete logarithms or factorizations of public keys, or where key pairs are in the Diffie-Hellman form, is captured by the notion of unique-key signatures (e.g., [21, 36, 45, 51, 66]). These results not only show why many existing schemes cannot be proven tamper-resilient, but also indicate that when constructing signatures and OWFs in the presence of tampering attacks, one should circumvent the unique-key property and injectiveness. Note that there is no contradiction between our work and the tamper-resilient unique signatures (implicitly) implied by previous results on non-malleable codes or subversion-resilient signatures [4, 6, 18, 27, 29, 30, 46], since those results require either restrictions on tampering functions or reverse firewalls, as mentioned before.

Impossibility on Encryption Schemes. Next, we give a negative result on a class of public key encryption (PKE) schemes that we call unique-message PKE schemes, where for a ciphertext (possibly outside the support of the encryption algorithm), all the valid secret keys with respect to a public key (possibly outside the support of the key generation algorithm) lead to the same decryption result. More specifically, for a unique-message PKE scheme, in addition to ordinary algorithms as a PKE scheme, we require that there be an algorithm that we call the “plaintext-recovering” algorithm. Its syntax is exactly the same as the ordinary decryption algorithm. We require that it satisfy the usual correctness as the decryption algorithm. What makes it different from the ordinary decryption algorithm is that for each public key and each ciphertext, it holds that the decryption results are the same for all valid secret keys with respect to the public key (see Definition 15 for the formal definition). We note that the plaintext-recovering algorithm may be the original decryption algorithm, but in general it need not be so.

Our negative result shows that if a unique-message PKE scheme is weakly one-way, then there exists no black-box reduction deriving its tamper-resilience from any assumption captured by the notion of a restricted cryptographic game (i.e., any common falsifiable assumption [38, 59]), where the challenger is restricted to be PPT. Here, weak one-wayness is defined in the same way as standard one-wayness, except that we only treat adversaries that are not allowed to see the public key. In other words, it only requires that any PPT adversary, neither allowed to make decryption queries nor given a public key, cannot recover the message from a randomly generated ciphertext. This is clearly a very weak security notion, and should be satisfied by any non-trivial PKE scheme. Furthermore, similarly to the cases of PDPs and signatures, we consider very weak tamper-resilience, where an adversary only makes one tampering query and one decryption query. Unlike our result for PDPs, this result does not capture black-box reductions to non-falsifiable assumptions where challengers are computationally unbounded, unless we assume that the PKE scheme is statistically weakly one-way. However, statistical weak one-wayness is not necessarily implied by (computational) one-wayness. We will give more details in Sect. 4.2. Here, notice that [23] also shows a negative result with respect to adversaries making a single tampering and decryption query. However, it only treats a strong type of tampering queries called “post-challenge” tampering queries, which can depend on the challenge ciphertext. Their negative result can be circumvented by considering challenge-independent tampering queries, which capture a priori tampering attacks in practice. Moreover, their negative result holds only for indistinguishability against chosen ciphertext attacks.
It was unclear whether tamper-resilient one-wayness against (even weak) chosen ciphertext attacks is achievable; we give a partial but strong negative answer.

Although the definition of a unique-message PKE scheme has never been formalized before, it captures many naturally constructed PKE schemes (e.g., [15, 20, 36, 41, 64]). More specifically, since a PKE scheme is required to satisfy correctness, all the valid ciphertexts (i.e., ones in the support of the encryption algorithm) should be decrypted to a unique message, while invalid ciphertexts usually cause the decryption algorithm to abort, in order to prevent adversaries from learning useful information from the answers to decryption queries. Hence, a ciphertext is typically decrypted to a unique message if the secret key used is correct. However, due to the possible difficulty of directly proving that some implementations are unique-message PKE schemes, instead of insisting that the original decryption algorithm decrypt any ciphertext to a unique message, we give a relaxation by introducing the notion of the plaintext-recovering algorithm mentioned in the above paragraph. To ease understanding of the rationale behind our definition, we show, as an instance, how it captures the Cramer-Shoup scheme [20] in the full paper. Furthermore, one can also see that unique-key PKE schemes, where there exists only one valid secret key for each public key, can be cast as unique-message PKE schemes (see the full paper for details). Similarly to our negative results for signatures, our results for PKE schemes clearly show the exact barrier to proving tamper-resilience for existing schemes, and also give a guideline for future works on tamper-resilient PKE schemes.

Remark on the Plain Model. Our research focuses on tamper-resilient primitives in the plain model. Dispensing with trusted setup might seem a strong requirement, and assuming the existence of tamper-proof public parameters may be reasonable to some extent. However, such an assumption is not always realistic, since fault attacks allow adversaries to maliciously modify CRSs stored in devices. Besides, another line of works on subversion-resilient non-interactive proof systems (e.g., [1, 9, 33]) also warns of the danger of trusting CRSs. Therefore, in addition to the previous positive results mentioned before [4, 6, 18, 27, 29, 30, 46], it is desirable to deepen the understanding of tamper-resilient primitives in the plain model. Moreover, we believe that trying to circumvent our negative results is a good starting point for future positive works, as we will mention in the open problems in Sect. 5.

1.3 High-Level Idea and Technique

Tamper-resilient primitives are secure against adversaries that try to tamper with the secret key, make (computing, signing, or decryption) queries, and output a forgery or recover a message. To prove the tamper-resilience of a primitive under some cryptographic assumption via a (fully) black-box reduction, one needs to construct a reduction algorithm that has access to any successful adversary against the tamper-resilience and breaks the assumption. Therefore, to show the impossibility of black-box reductions from tamper-resilience to common assumptions, what we need to do is rule out the existence of reduction algorithms that can obtain useful information from any successful adversary, in any cryptographic game. Our basic idea for proving this is to show that no reduction algorithm can answer the queries made by some successful adversary, or benefit from that adversary's outputs.

Tamper-Resilience Model. Before describing how we achieve our goal, we describe in more detail how we model a valid adversary against tamper-resilience. Such an adversary consists of three independent components \((\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\), which are allowed to share states before the security game but do not have communicating tapes once the game begins.Footnote 6 \(\mathbf {Tamper}\) models a tampering function that, on input the original secret key, tries to output some tampered key helpful to \((\mathbf {Break}1,\mathbf {Break}2)\). \(\mathbf {Break}1\) makes a (computing, signing, or decryption) query. On input the answer generated via the tampered key, \(\mathbf {Break}2\) tries to output some forgery (or recover a message from the challenge ciphertext). The model we consider captures a very weak type of tampering attack, where the adversary can only make one selectively determined tampering query, which in turn makes our negative results very strong. Notice that \((\mathbf {Break}1,\mathbf {Break}2)\) learns no information on the secret key other than the information leaked from the answer to the, say, computing query, since \((\mathbf {Break}1,\mathbf {Break}2)\) and \(\mathbf {Tamper}\) (which sees the secret key) do not share any state generated during the security game. This is equivalent to the setting where an adversary \((\mathbf {Break}1,\mathbf {Break}2)\) determines how to tamper with the secret key at the beginning of the game and tries to benefit from the answer to the computing query. We justify our model as follows.
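To make the shape of this three-component adversary concrete, here is a minimal Python sketch (our own illustration, not from the paper; the class name, the fixed query, and the bit-flipping tampering function are all hypothetical examples):

```python
class TamperingAdversary:
    """Sketch of a (Tamper, Break1, Break2) adversary: the components may
    share state fixed *before* the game, but exchange nothing afterwards,
    so Break2 learns about sk only through the oracle answer computed
    under the tampered key."""

    def __init__(self):
        # state agreed upon before the security game begins
        self.shared_query = b"some-fixed-input"

    def Tamper(self, sk: bytes) -> bytes:
        # sees sk and outputs a tampered key; cannot message Break1/Break2
        return bytes(b ^ 0x01 for b in sk)   # example: flip each low bit

    def Break1(self) -> bytes:
        # the single (computing/signing/decryption) query, chosen
        # independently of sk
        return self.shared_query

    def Break2(self, answer: bytes):
        # tries to produce a forgery (or recover a message) from the answer
        return answer
```

The key constraint the sketch captures is that `Tamper` has no output channel back to `Break1`/`Break2` other than its effect on the oracle's answer.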

Remark on Our Model. In our model, we define the tampering function as part of the adversary, and the reduction can only have black-box access to it, while in previous works [23, 24, 28, 35] considering arbitrary tampering attacks, the tampering functions are defined as tampering queries made by the adversary. Therefore, one may wonder whether the reductions we consider are more restricted. We stress that in the security proofs of all these works, the reductions also access the functions in a black-box manner. Indeed, those reductions sometimes change the functions by hard-wiring other functions or values into the tampering functions, which makes the functions seem “non-black-box”. However, such procedures can be treated as changing the input of the functions rather than exploiting the structure of the functions themselves. Therefore, such modifications of tampering functions are also allowed in our model. For instance, the reduction can query the hard-wired function to its challenger and give the answer back to \(\mathbf {Tamper}\) as its input. As a result, our model does not impose any additional restrictions on the reduction.

We also stress that we could model an adversary so that \((\mathbf {Break}1,\mathbf {Break}2)\) sends a (selectively determined) tampering function \(\mathbf {Tamper}\) to modify a secret key sk to a tampered key \(\mathbf {Tamper}(sk)\) as in previous works (e.g., [23, 24, 28, 35]); for a fully black-box reduction, which only has black-box access to \(\mathbf {Tamper}\), this makes no difference. We define \(\mathbf {Tamper}\) as part of the adversary only for simplicity. Our model does not rule out reductions that exploit the structure of arbitrary tampering functions (as opposed to restricted ones), though it would be surprising if any existed.

Our Intuition. We now describe the intuition behind our proofs. For ease of understanding, we discuss the case of unique signatures. Let \(\mathcal {R}\) be a reduction algorithm trying to attack some underlying assumption captured by a typical cryptographic game, with access to a successful adversary \(\mathcal {A}=(\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\). To benefit from \(\mathcal {A}\), \(\mathcal {R}\) has to answer the signing query made by \(\mathbf {Break}1\). However, we observe that if \(\mathcal {R}\) has not given \(\mathbf {Tamper}\) a valid secret key previously,Footnote 7 it may have no idea what the tampered key is, in which case answering the signing query is as difficult for \(\mathcal {R}\) as breaking the underlying security of the unique signature scheme. On the other hand, if \(\mathcal {R}\) has given a valid key to \(\mathbf {Tamper}\), then it is able to forge a signature by itself. In this case, since a signature forged by \(\mathbf {Break}2\) must be the same as the one forged by \(\mathcal {R}\) due to uniqueness, it is of little help to \(\mathcal {R}\). As a result, the access to \(\mathcal {A}\) may not benefit \(\mathcal {R}\) in the cryptographic game, which yields the desired contradiction. Note that when formally showing the existence of such a successful but “useless” adversary \(\mathcal {A}\), we need to take care of more details (e.g., how \(\mathcal {A}\) checks the validity of a secret key without having a key-checking algorithm), which we omit here for simplicity.

The above intuition also applies to our negative results on VRFs, VUFs, unique-key signatures, injective OWFs, and unique-message PKE schemes; in the case of unique-message PKE schemes, our proof exploits the difficulty of answering decryption queries (instead of signing queries). As in the case of unique signatures, the uniqueness properties of these primitives ensure that the outputs of \(\mathcal {A}\) and \(\mathcal {R}\) are identical.

We formalize our intuition by using meta-reductions, which have appeared in a great deal of previous works (e.g., [3, 5, 14, 19, 22, 31, 32, 38, 42, 50, 58, 60, 61, 68, 69]). Roughly speaking, we first give an inefficient (but valid) adversary \(\mathcal {A}\) breaking the tamper-resilience of a class of schemes, and then a PPT algorithm \(\mathbf {Sim}\) (which does not necessarily have the structure of a valid adversary) simulating the behavior of \(\mathcal {A}\). For any black-box reduction \(\mathcal {R}\) deriving the tamper-resilience of these schemes from some cryptographic game \(\mathcal G\), we would obtain a PPT adversary \(\mathcal {R}^\mathbf {Sim}\) breaking \(\mathcal G\) with the same advantage as \(\mathcal {R}^\mathcal {A}\), which is infeasible if \(\mathcal G\) is hard to break. Hence, we can derive the non-existence of \(\mathcal {R}\). More specifically, we give our proof under the simulatable attack paradigm, a meta-reduction method proposed by Wichs [68]. Most negative results proved using meta-reductions implicitly fall under this paradigm, and recent works [16, 26] explicitly used it to show negative results on entropic search learning with errors and on extractors for entropy extractor-dependent sources. What we have to do is show that no (possibly inefficient) oracle-access machine can distinguish \(\mathcal {A}\) from \(\mathbf {Sim}\). By doing this we can separate the tamper-resilience of our target schemes from any cryptographic game \(\mathcal G\), rather than some particular one. We remark that when proving the impossibility result on unique-message PKE schemes, the oracle-access machine is required to be efficient, in which case we separate the tamper-resilience from any restricted cryptographic game. We refer the reader to Sects. 3.3 and 4.3 for the details of our adversaries and simulators and the ideas behind their constructions.

Comparison with Previous Works. Finally, let us highlight the differences between the work by Wichs and ours. In [68], Wichs showed that there is no black-box reduction for proving the security of leakage-resilient unique-witness one-way relations, leaky pseudo-entropy generators, entropy condensers, and correlation-resilient OWFs from the assumptions captured by cryptographic games.Footnote 8 His negative results for the latter two primitives are of a quite different nature, since the adversaries are modeled in different ways, while those for the former two are more closely related to our results, in the sense that both leakage-resilient and tamper-resilient primitives defend against attacks on memory. However, it should be noticed that leakage-resilience treats adversaries directly obtaining leaked information, while tamper-resilience treats adversaries observing the effect of malicious modifications. Since the restrictions on adversaries are completely different, our results and those by Wichs do not imply each other, and designing inefficient adversaries and PPT simulators in our case is by no means easy. Furthermore, our proofs utilize a new methodology that proves the indistinguishability between adversaries and simulators based on computational security assumptions. This is quite different from Wichs' results, since the proofs in his work do not rely on such assumptions.

Other Related Negative Results on Unique Primitives. Coron [19] showed the impossibility of simple reductions deriving the tight security of unique signatures from non-interactive assumptions. Later, Kakvi and Kiltz [48] fixed Coron's result by giving a stricter definition of unique signatures. Hofheinz et al. [43] extended the negative results of [19, 48] to the notion of re-randomizable signatures, which is more general and hence captures more instantiations, such as the Waters signature scheme [66] and its variant [43]. Recently, Morgan and Pass [58] proposed an impossibility result ruling out any linear-preserving black-box reduction deriving the security of unique signatures from bounded-round assumptions. In another line of work, Wang et al. [65] ruled out memory-tight black-box reductions deriving the multi-challenge security of signatures from any computational assumption.

1.4 Outline of This Paper

In Sect. 2, we recall the definitions of cryptographic games (and properties) and simulatable attacks. In Sect. 3, we give our negative results on PDPs, unique-key signatures, and injective OWFs. In Sect. 4, we give our negative results on unique-message PKE schemes. In Sect. 5, we discuss open problems.

2 Preliminaries

In this section, we review several definitions and terminologies that are necessary to describe our results.

Notation. \({ negl}\) denotes an unspecified negligible function. If \(\mathcal {X}\) is a finite set, then \(x\leftarrow \mathcal{X}\) denotes the process of uniformly sampling x at random from the set \(\mathcal X\). If \(\mathcal {A}\) is a deterministic (respectively, probabilistic) algorithm, then \(y=\mathcal {A}(x)\) (respectively, \(y\leftarrow \mathcal {A}(x)\)) means that \(\mathcal {A}\) on input x outputs y. Letting the internal randomness space of a probabilistic algorithm \(\mathcal {A}\) be \(\mathcal {R}_a\), computing \(y\leftarrow \mathcal {A}(x)\) is equivalent to sampling \(r\leftarrow \mathcal {R}_a\) and then computing \(y=\mathcal {A}(x;r)\).
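As a small illustration of the last equivalence (a toy of our own; the function names and the 128-bit randomness space are arbitrary choices), a probabilistic algorithm can be viewed as a deterministic one fed explicit randomness:

```python
import hashlib
import secrets

# y <- A(x) versus y = A(x; r): making the internal randomness explicit.
def A_det(x: bytes, r: bytes) -> bytes:
    # y = A(x; r): deterministic once the coins r are fixed
    return hashlib.sha256(r + x).digest()

def A_prob(x: bytes) -> bytes:
    # y <- A(x): first sample r <- R_a (here: 128-bit strings), then run A(x; r)
    r = secrets.token_bytes(16)
    return A_det(x, r)
```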

2.1 Cryptographic Game (Property)

In this subsection, we recall the definitions of a cryptographic game and a cryptographic property.

Definition 1

(Cryptographic game [40]). A cryptographic game \(\mathcal G\) consists of a (possibly inefficient) random system (called the challenger) \(\mathcal{CH}\) and a constant \(c\in [0,1)\). For some security parameter \(1^\lambda \), \(\mathcal{CH}(1^\lambda )\) interacts with some adversary \(\mathcal {A}(1^\lambda )\), and outputs a bit b. This interaction is denoted by \(b \leftarrow (\mathcal {A}(1^\lambda )\leftrightharpoons \mathcal{CH}(1^\lambda ))\), and the advantage of \(\mathcal {A}\) in \(\mathcal G\) is \(\mathbf {Adv}_\mathcal{G}^\mathcal {A}(\lambda )=\Pr [1\leftarrow (\mathcal {A}(1^\lambda )\leftrightharpoons \mathcal{CH}(1^\lambda ))]-c\).

A cryptographic game \(\mathcal G\) is secure if for any PPT adversary \(\mathcal {A}\), \(\mathbf {Adv}_\mathcal{G}^\mathcal {A}(\lambda )\le { negl}(\lambda )\).

As noted in [38, 68], all commonly used assumptions in cryptography fall under the framework of cryptographic games.
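As a toy instance of Definition 1 (our own example, not from [40]), consider a bit-guessing game with constant \(c = 1/2\): the challenger samples a secret bit and outputs 1 iff the adversary's guess matches.

```python
import secrets

class Challenger:
    """Toy challenger in the sense of Definition 1: holds a secret bit b
    and outputs 1 iff the adversary guesses it; the game's constant is c = 1/2."""
    def __init__(self):
        self.b = secrets.randbits(1)
    def finish(self, guess: int) -> int:
        return 1 if guess == self.b else 0

def empirical_advantage(adversary, trials: int = 20_000) -> float:
    # Adv = Pr[challenger outputs 1] - c, estimated over many runs
    wins = sum(Challenger().finish(adversary()) for _ in range(trials))
    return wins / trials - 0.5

# An adversary with no information about b has advantage ~ 0.
blind_guess = lambda: secrets.randbits(1)
```

The toy game is secure in the sense of Definition 1: any adversary, even an unbounded one, wins with probability exactly 1/2.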

A restricted version of the above definition, which only considers PPT challengers and captures most falsifiable assumptions [38, 59], is given as follows.

Definition 2

(Restricted cryptographic game [38]). A restricted cryptographic game is defined in exactly the same way as a cryptographic game, except that we replace “(possibly inefficient) random system” with “PPT random system”.

Cryptographic Property. As noted by Wichs [68], although the definition of a cryptographic game captures all common assumptions, it does not capture some cryptographic properties against stateless adversaries consisting of multiple independent components (e.g., leakage-resilience of one-way relations defined in [68] and tamper-resilience of PDPs, signatures, and PKE schemes defined later in our paper). Following [68], we give a very general definition of an arbitrary cryptographic property \(\mathcal P\) and use \(\mathbf {Adv}_\mathcal{P}^{\mathcal {A}_\lambda }(\lambda )\) to denote the advantage of \(\mathcal {A}_\lambda \) in breaking a cryptographic property \(\mathcal P\), where \(\lambda \) is the security parameter. \(\mathcal P\) is said to be secure if for any PPT adversary \(\mathcal {A}\), \(\mathbf {Adv}_\mathcal{P}^{\mathcal {A}_\lambda }(\lambda )\) is negligible in \(\lambda \).

2.2 Simulatable Attack

The simulatable attack paradigm is a meta-reduction method formalized by Wichs [68]. Showing the existence of simulatable attacks on cryptographic properties is a general way to show the impossibility of deriving these properties from common assumptions via black-box reductions. We now recall the definitions of a black-box reduction and simulatable attack as follows. Our definition here is based on [68].

Definition 3

(Black-box reduction). Let \(\mathcal P\) be some cryptographic property and \(\mathcal G\) be some cryptographic game. An oracle-access PPT machine \(\mathcal {R}^{(\cdot )}\) is said to be a black-box reduction deriving the security of \(\mathcal P\) from \(\mathcal G\), if for any (possibly inefficient, non-uniform) adversary \(\mathcal {A}_\lambda \) such that \(\mathbf {Adv}_\mathcal{P}^{\mathcal {A}_\lambda }(\lambda )=1\), there exists a non-negligible function \(\epsilon \) such that we have \(\mathbf {Adv}_\mathcal{G}^{\mathcal {R}^{\mathcal {A}_\lambda }}(\lambda )\ge \epsilon (\lambda )\).

In the above definition, we require \(\mathbf {Adv}_\mathcal{P}^{\mathcal {A}_\lambda }(\lambda )\) to be 1 (rather than non-negligible), which makes the defined reduction very restrictive. Since our focus is on a black-box separation, the more restricted the type of black-box reduction is, the stronger our negative results become. Furthermore, the definition in [68] is stronger in that it requires a black-box reduction \(\mathcal {R}\) to have noticeable advantage in breaking \(\mathcal G\), while we only require \(\mathcal {R}\) to have non-negligible advantage, which is more common.Footnote 9

A simulatable attack on a cryptographic property \(\mathcal P\) consists of a valid but possibly inefficient adversary \(\mathcal {A}\) and a possibly invalid but efficient simulator \(\mathbf {Sim}\).Footnote 10 If \(\mathcal {A}\) breaks \(\mathcal P\) and is indistinguishable from \(\mathbf {Sim}\) for any oracle-access machine, then the existence of a black-box reduction \(\mathcal {R}\) deriving the security of \(\mathcal P\) from some cryptographic game \(\mathcal G\) implies that breaking the assumption captured by \(\mathcal G\) is not hard. This follows from the fact that the success of \(\mathcal {R}^\mathcal {A}\) implies the success of \(\mathcal {R}^\mathbf {Sim}\) in \(\mathcal G\).

Definition 4

(Simulatable attack [68]). A simulatable attack on a cryptographic property \(\mathcal P\) consists of: (a) an ensemble of (possibly inefficient) stateless non-uniform adversaries \(\{\mathcal {A}_{\lambda ,f}\}_{\lambda \in \mathbb {N},f\in \mathbb {F}_\lambda }\) where \(\{\mathbb {F}_\lambda \}_\lambda \) are some finite sets, and (b) a PPT stateful simulator \(\mathbf {Sim}\). Furthermore, the following two properties are required to hold.

  • For all \(\lambda \in \mathbb {N}\) and \(f\in \mathbb {F}_\lambda \), \(\mathbf {Adv}_\mathcal{P}^{\mathcal {A}_{\lambda ,f}}(\lambda )=1\).

  • For all (possibly inefficient) oracle-access probabilistic machines \({\mathcal {B}}^{(\cdot )}\) making at most polynomially many queries to its oracle, we have

    $$ |\Pr _{f\leftarrow \mathbb {F}_\lambda }[1\leftarrow \mathcal {B}^{\mathcal {A}_{\lambda ,f}}(1^\lambda )]-\Pr [1\leftarrow \mathcal {B}^{\mathbf {Sim}(1^\lambda )}(1^\lambda )]|\le { negl}(\lambda ). $$

Definition 5

(Weak simulatable attack). A weak simulatable attack is defined in exactly the same way as a simulatable attack, except that we replace “(possibly inefficient) oracle-access probabilistic machines \(\mathcal{B}^{(\cdot )}\)” with “oracle-access PPT machines \(\mathcal{B}^{(\cdot )}\)”.

Note that in the above definitions, \(\mathbb {F}_\lambda \) is a set, and an adversary is modeled as an ensemble of algorithms \(\mathcal {A}_{\lambda ,f}\), where each instance hardwires an element \(f\in \mathbb {F}_\lambda \). Note also that the simulator \(\mathbf {Sim}\) is only required to simulate the behavior of the adversaries \(\mathcal {A}_{\lambda ,f}\) in the situation where f is chosen uniformly at random from \(\mathbb {F}_\lambda \). As in [68], we set \(\mathbb {F}_\lambda \) to be the set of all functions with some specific domain and range when showing our negative results. This ensures that the outputs of f look like true randomness from \(\mathcal {B}\)'s point of view, so that they can be simulated by \(\mathbf {Sim}(1^\lambda )\) via lazy sampling. We refer the reader to Sects. 3.3 and 4.3 for the details.
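The lazy-sampling simulation of a uniformly random f can be sketched as follows (a generic illustration; the class and parameter names are ours):

```python
import secrets

class LazyRandomFunction:
    """Lazy sampling of a uniformly random function with m-bit outputs.
    Instead of fixing all of f in advance, the simulator samples f(x) on
    first use and caches it; to any machine making polynomially many
    queries, this is perfectly indistinguishable from a uniformly chosen f."""

    def __init__(self, out_bits: int):
        self.out_bits = out_bits
        self.table = {}                     # points answered so far

    def __call__(self, x) -> int:
        if x not in self.table:             # first query on x: sample fresh
            self.table[x] = secrets.randbits(self.out_bits)
        return self.table[x]                # repeated queries stay consistent
```

Consistency across repeated queries is exactly what a distinguisher could check, and the cache guarantees it while keeping the simulator PPT.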

The following theorem by Wichs [68] shows that the existence of a (weak) simulatable attack on some cryptographic property \(\mathcal P\) implies the impossibility of deriving the security of \(\mathcal P\) from any (restricted) cryptographic game \(\mathcal G\) via black-box reductions.

Theorem 1

([68]). If there exists a simulatable attack (respectively, weak simulatable attack) on some cryptographic property \(\mathcal P\) and a black-box reduction deriving the security of \(\mathcal P\) from the security of some cryptographic game (respectively, restricted cryptographic game) \(\mathcal G\), then there exists some PPT adversary \(\mathcal {A}\) that has non-negligible advantage in \(\mathcal G\).

We refer the reader to [68] for the proof of Theorem 1. Notice that the original proof in [68] did not show that a weak simulatable attack implies a black-box separation with respect to restricted cryptographic games, and it requires the reduction algorithm to have noticeable advantage. However, extending it to the above theorem is straightforward.

3 Impossibility on Provable Deterministic Primitives and Unique-Key Signatures

In this section, we give negative results on tamper-resilient PDPs (including tamper-resilient VUFs, VRFs, and unique signatures as special cases) and tamper-resilient unique-key signatures. Our results show that if a PDP or unique-key signature scheme satisfies some “extremely” weak unpredictability or unforgeability, then there exists no black-box reduction deriving its tamper-resilience in the plain model from any commonly used assumption.

The rest of this section is organized as follows. In Sect. 3.1, we recall the definitions of PDPs and signatures. In Sect. 3.2, we define several security notions. In Sect. 3.3, we show the existence of simulatable attacks on the tamper-resilience of PDPs and signatures. In Sect. 3.4, we summarize our negative results.

3.1 Definitions of PDPs and Signatures

First, we recall the definition of a PDP, which is formalized in [2] and captures VUFs, VRFs, and unique signatures as special cases.

Definition 6

(Provable deterministic primitive (PDP) [2]). A PDP consists of the polynomial-time (PT) algorithms \((\mathsf {Gen},\mathsf {Comp},\mathsf {Prove},\mathsf {Verify})\). (a) \({\mathsf {Gen}}\) is a probabilistic algorithm that takes as input \(1^\lambda \), and returns a public/secret key pair \((pk,sk)\in \{0,1\}^p\times \{0,1\}^s\) for some polynomials \(p=p(\lambda )\) and \(s=s(\lambda )\). The set of all secret keys (output by \(\mathsf {Gen}(1^\lambda )\)) and the (internal) randomness space of \(\mathsf {Gen}\) are respectively denoted by \(\mathcal {SK}\) and \(\mathcal {R}_g\). (b) \(\mathsf {Comp}\) is a deterministic algorithm that takes as input a secret key sk and \(x\in \mathcal {X}\), where \(\mathcal {X}\) denotes the domain, and returns some value y. (c) \(\mathsf {Prove}\) is a probabilistic algorithm that takes as input a secret key sk and x, and returns a proof \(\pi \). The (internal) randomness space of \(\mathsf {Prove}\) is denoted by \(\mathcal {R}_p\). (d) \(\mathsf {Verify}\) is a deterministic algorithm that takes as input a public key pk, x, y, and a proof \(\pi \), and returns 1 (accept) or 0 (reject).

A PDP is required to satisfy uniqueness and correctness. Uniqueness is said to be satisfied if for all \(\lambda \in \mathbb {N}\), all \(pk\in \{0,1\}^p\) (possibly outside the support of \(\mathsf {Gen}\)) and all \(x\in \mathcal {X}\), there exists no tuple \((y,\pi ,y',\pi ')\) that simultaneously satisfies \(y\ne y'\) and \(\mathsf {Verify}_{pk}(x,y,\pi )=\mathsf {Verify}_{pk}(x,y',\pi ')=1\). Correctness is said to be satisfied if \(\mathsf {Verify}_{pk}(x,\mathsf {Comp}_{sk}(x),\mathsf {Prove}_{sk}(x))=1\) holds for all \(\lambda \in \mathbb {N}\), all \((pk,sk)\leftarrow \mathsf {Gen}(1^\lambda )\), and all \(x\in \mathcal {X}\).

The syntax of a PDP is exactly the same as that of a VRF and a VUF, and it also captures unique signature schemes, which we discuss later in this subsection. Notice that our definition of uniqueness differs from that in [2], which additionally requires the public parameter (including the bilinear group) to be correctly generated, since it treats structure-preserving constructions. Aside from the facts that we neither consider public parameters separately nor adopt relaxed uniqueness notions (e.g., uniqueness holding only for most public keys), one can see that our definition is equivalent to the original definitions of uniqueness in [54, 55]. Such original definitions prevent denial-of-service attacks and provide perfect binding properties when PDPs are used as a special type of commitment in their applications (e.g., micropayments [57], resettable zero-knowledge proofs [56], updatable zero-knowledge databases [52], verifiable transaction escrow schemes [47]), even in the presence of maliciously chosen CRSs and public keys.

We now recall the definition of a (digital) signature scheme.

Definition 7

(Digital signature). A signature scheme consists of the PT algorithms \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\). (a) \({\mathsf {Gen}}\) is a probabilistic algorithm that takes as input \(1^\lambda \), and returns a public/secret key pair \((pk,sk)\in \{0,1\}^p\times \{0,1\}^s\) for some polynomials \(p=p(\lambda )\) and \(s=s(\lambda )\). The set of all secret keys (output by \(\mathsf {Gen}(1^\lambda )\)) and the (internal) randomness space of \(\mathsf {Gen}\) are respectively denoted by \(\mathcal {SK}\) and \(\mathcal {R}_g\). (b) \({\mathsf {Sign}}\) is a probabilistic algorithm that takes as input a secret key sk and a message \(m\in \mathcal {M}\), where \(\mathcal {M}\) is the message space, and returns a signature \(\sigma \). The (internal) randomness space of \(\mathsf {Sign}\) is denoted by \(\mathcal {R}_s\). (c) \({\mathsf {Verify}}\) is a deterministic algorithm that takes as input a public key pk, a message m, and a signature \(\sigma \), and returns 1 (accept) or 0 (reject).

A signature scheme is required to satisfy correctness, which means that \(\mathsf {Verify}_{pk}(m,\sigma )=1\) holds for all \(\lambda \in \mathbb {N}\), all \((pk,sk)\leftarrow \mathsf {Gen}(1^\lambda )\), all \(m\in \mathcal {M}\), and all \(\sigma \leftarrow \mathsf {Sign}_{sk}(m)\).

We now recall the definition of a unique signature scheme, in which the signing algorithm is deterministic and there exists only one valid signature for each pair of public key (not necessarily output by \(\mathsf {Gen}(1^\lambda )\)) and message.

Definition 8

(Unique signature [54]). A signature scheme \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\) is said to be a unique signature scheme if (a) \(\mathsf {Sign}\) is deterministic, and (b) for all \(\lambda \in \mathbb {N}\), all \(pk\in \{0,1\}^p\) (possibly outside the support of \(\mathsf {Gen}\)), and all \(m\in \mathcal {M}\), there exists no pair \((\sigma ,\sigma ')\) that simultaneously satisfies \(\sigma \ne \sigma '\) and \(\mathsf {Verify}_{pk}(m,\sigma )=\mathsf {Verify}_{pk}(m,\sigma ')=1\).

One can easily see that a unique signature scheme can be cast as a PDP where \(\mathsf {Comp}\) is the signing algorithm and \(\mathsf {Prove}\) always returns the empty string.
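To illustrate this casting, the following toy Python sketch wraps a unique signature scheme as a PDP. The deliberately insecure placeholder scheme (sk = pk, so that verification can recompute the deterministic signature) and all names are ours, used only to exercise the syntax.

```python
import hashlib, os

# Deliberately insecure placeholder "unique signature": sk = pk, so Verify
# can recompute the deterministic signature. Syntax illustration only.
def gen():
    k = os.urandom(16)
    return k, k                                    # (pk, sk)
def sign(sk, m):
    return hashlib.sha256(sk + m).digest()         # deterministic signing
def verify(pk, m, sigma):
    return sigma == hashlib.sha256(pk + m).digest()

def signature_to_pdp(gen, sign, verify):
    """Cast a unique signature scheme as a PDP: Comp is the signing
    algorithm, Prove always returns the empty string, and the PDP
    verifier ignores the (empty) proof."""
    comp = sign
    prove = lambda sk, x: b""
    pdp_verify = lambda pk, x, y, pi: verify(pk, x, y)
    return gen, comp, prove, pdp_verify

G, Comp, Prove, V = signature_to_pdp(gen, sign, verify)
pk, sk = G()
assert V(pk, b"input", Comp(sk, b"input"), Prove(sk, b"input"))  # correctness
```

Uniqueness of the PDP follows directly from uniqueness of the signature scheme, since the proof carries no information.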

We now define unique-key signatures, where there exists only one valid secret key for each public key (not necessarily output by \(\mathsf {Gen}(1^\lambda )\)).

Definition 9

(Unique-key signature). A signature scheme \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\) is said to be a unique-key signature scheme if there exists a deterministic PT algorithm \(\mathsf {UKCheck}\) such that (a) \(\mathsf {UKCheck}(pk,sk)=1\) holds for all \(\lambda \in \mathbb {N}\) and all \((pk,sk)\leftarrow \mathsf {Gen}(1^\lambda )\), and (b) for all \(\lambda \in \mathbb {N}\) and all \(pk\in \{0,1\}^p\) (possibly outside the support of \(\mathsf {Gen}\)), there exists no pair \((sk,sk')\in \{0,1\}^s\times \{0,1\}^s\) that simultaneously satisfies \(sk\ne sk'\) and \(\mathsf {UKCheck}(pk,sk)=\mathsf {UKCheck}(pk,sk')=1\).

3.2 Security Notions for PDPs and Signatures

In this subsection, we define several security notions, called weak unpredictability, weak unforgeability, and weak tamper-resilience, for PDPs and signatures.

In [2], two security notions were defined for PDPs: unpredictability and pseudorandomness. A PDP satisfying the former (respectively, latter) notion is a VUF (respectively, VRF). Moreover, unpredictability is weaker than pseudorandomness (see [2, Lemma 5]). In this paper, we define weak unpredictability, which is weaker than standard unpredictability. This security notion only guarantees that a PPT adversary, which is allowed neither to learn pk nor to make any query, cannot output a valid input/output pair.Footnote 11 We also define a similar security notion, called weak unforgeability, for signatures. Furthermore, we define two additional security notions, called statistical weak unpredictability and statistical weak unforgeability, which treat possibly inefficient adversaries. We refer the reader to the full paper for the standard notions of unpredictability and unforgeability.

Definition 10

((Statistical) weak unpredictability). A PDP \((\mathsf {Gen},\mathsf {Comp},\mathsf {Prove},\mathsf {Verify})\) is said to be weakly unpredictable (respectively, statistically weakly unpredictable), if for any PPT adversary (respectively, possibly inefficient adversary) \(\mathcal {A}\), we have

$$ \begin{aligned} \Pr [(pk,sk)\leftarrow {\mathsf {Gen}}(1^\lambda ), (x^*,y^*)\leftarrow \mathcal {A}(1^\lambda ): \mathsf {Comp}_{sk}(x^*)=y^*] \le {{ negl}}(\lambda ). \end{aligned} $$
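To make the experiment concrete, here is a minimal Python sketch of one run of the weak unpredictability game. The toy scheme (sk = pk, with \(\mathsf {Comp}\) instantiated by SHA-256) is our own deliberately insecure placeholder, used only to make the game executable.

```python
import hashlib, os

# Toy (insecure) PDP used only to make the experiment runnable: sk = pk,
# Comp_sk(x) = SHA-256(sk || x). All concrete choices here are ours.
def gen():
    k = os.urandom(16)
    return k, k
def comp(sk, x):
    return hashlib.sha256(sk + x).digest()

def weak_unpredictability_experiment(adversary):
    """One run of the weak unpredictability game: the adversary sees
    neither pk nor any oracle, and wins iff it outputs (x*, y*) with
    y* = Comp_sk(x*)."""
    pk, sk = gen()
    x_star, y_star = adversary()       # the adversary gets only 1^lambda
    return comp(sk, x_star) == y_star

# A blind adversary essentially never predicts a 256-bit output:
assert weak_unpredictability_experiment(lambda: (b"x", b"\x00" * 32)) is False
```

Note how weak the notion is: the adversary receives no input at all, which is exactly what makes the negative results built on it strong.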

Definition 11

((Statistical) weak unforgeability). A signature scheme \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\) is said to be weakly unforgeable (respectively, statistically weakly unforgeable), if for any PPT adversary (respectively, possibly inefficient adversary) \(\mathcal {A}\), we have

$$ \begin{aligned} \Pr [(pk,sk)\leftarrow {\mathsf {Gen}}(1^\lambda ), (m^*,\sigma ^*)\leftarrow \mathcal {A}(1^\lambda ): \mathsf {Verify}_{pk}(m^*,\sigma ^*)=1] \le {{ negl}}(\lambda ). \end{aligned} $$

One can easily see that weak unpredictability and weak unforgeability are much weaker than standard unpredictability and unforgeability notions, respectively. However, this does not straightforwardly mean that statistical weak unpredictability and statistical weak unforgeability are also weak since they treat possibly inefficient adversaries. Now we give two lemmas showing the equivalence between weak unpredictability and statistical weak unpredictability, and that between weak unforgeability and statistical weak unforgeability.

Lemma 1

A PDP satisfies weak unpredictability (against non-uniform adversaries) if and only if it satisfies statistical weak unpredictability.

Proof

(of Lemma 1). Since it is straightforward that statistical weak unpredictability implies weak unpredictability, we focus on the opposite direction.

If a PDP \(\varPhi =(\mathsf {Gen},\mathsf {Comp},\mathsf {Prove},\mathsf {Verify})\) is not statistically weakly unpredictable, then there exists a (possibly inefficient) adversary \(\mathcal {A}\) that, on input a security parameter \(1^\lambda \), can output \(x^*\) and \(y^*\) such that \(\Pr [(pk,sk)\leftarrow {\mathsf {Gen}}(1^\lambda ):y^*=\mathsf {Comp}_{sk}(x^*)]\) is non-negligible. Therefore, we can hard-wire such \((x^*,y^*)\) in a (non-uniform) PPT adversary \(\mathcal {B}\), which can easily break the weak unpredictability of \(\varPhi \) by outputting \((x^*,y^*)\), on input \(1^\lambda \). Hence, weak unpredictability implies statistical weak unpredictability.    \(\square \)

Lemma 2

A signature scheme satisfies weak unforgeability (against non-uniform adversaries) if and only if it satisfies statistical weak unforgeability.

We omit the proof of Lemma 2, since it is exactly the same as that of Lemma 1, except that the winning condition \(y^*=\mathsf {Comp}_{sk}(x^*)\) is replaced with \(\mathsf {Verify}_{pk}(m^*,\sigma ^*)=1\) (and \((x^*,y^*)\) with \((m^*,\sigma ^*)\)).

We now define weak tamper-resilience (WTR) for PDPs and signatures. An adversary against such security notions consists of three independent components \((\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\). \(\mathbf {Tamper}\) models a tampering function which, on input a public/secret key pair, determines a tampered secret key. \(\mathbf {Break}1\) takes as input a public key and makes a computing or signing query. On receiving the answer generated by using the tampered secret key, \(\mathbf {Break}2\) tries to output a valid forgery. For simplicity, we allow the adversary to make only one tampering query and one signing query, and forbid communication between \(\mathbf {Break}1\) and \(\mathbf {Break}2\). These security notions are extremely weak versions of the unforgeability under chosen-message and tampering attacks defined in [28, 35], for which several positive results already exist [23, 24, 28, 35]; the difference is that we treat the plain model. As explained before, the more restricted the adversary is, the stronger our negative results are. Here, notice that although we treat \(\mathbf {Tamper}\) as a component of the adversary, it is essentially a tampering query made by the adversary, since \((\mathbf {Break}1,\mathbf {Break}2)\) can neither learn its inputs nor communicate with it. The formal definitions are as follows.

Definition 12

(Weak tamper-resilient (WTR) PDP). A PDP \((\mathsf {Gen},\mathsf {Comp},\mathsf {Prove},\mathsf {Verify})\) is said to satisfy WTR security, if for any PPT adversary \(\mathcal {A}=(\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\), we have

$$ \begin{aligned} \Pr [(pk,sk)\leftarrow \mathsf {Gen}(1^\lambda ),sk'\leftarrow \mathbf {Tamper}(1^\lambda ,pk,sk), x\leftarrow \mathbf {Break}1(1^\lambda ,pk), \\ y=\mathsf {Comp}_{sk'}(x), (x^*,y^*)\leftarrow \mathbf {Break}2(1^\lambda ,pk,y):\\ x^*\ne x\wedge sk'\in \mathcal {SK}\wedge y^*=\mathsf {Comp}_{sk}(x^*)]\le { negl}(\lambda ). \end{aligned} $$

The winning condition that \(sk'\in \mathcal {SK}\) follows [28]. It makes our results stronger since the adversary is more restricted.
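A minimal Python sketch of one run of this experiment may help. The toy PDP below (sk = pk, SHA-256 as \(\mathsf {Comp}\)) is our own insecure placeholder; the membership check \(sk'\in \mathcal {SK}\) is trivially satisfied for it and therefore omitted.

```python
import hashlib, os

# Toy (insecure) PDP: sk = pk, Comp_sk(x) = SHA-256(sk || x). The check
# sk' in SK from Definition 12 is trivially satisfied here and omitted.
def gen():
    k = os.urandom(16)
    return k, k
def comp(sk, x):
    return hashlib.sha256(sk + x).digest()

def wtr_experiment(tamper, break1, break2):
    """One run of the WTR game: a single tampering query, a single
    computing query answered under the tampered key, and no channel
    between Break1 and Break2."""
    pk, sk = gen()
    sk_t = tamper(pk, sk)              # Tamper outputs the tampered key sk'
    x = break1(pk)                     # Break1 chooses the computing query
    y = comp(sk_t, x)                  # answered under the *tampered* key
    x_star, y_star = break2(pk, y)     # Break2 must forge under the real key
    return x_star != x and comp(sk, x_star) == y_star

# Replaying the queried pair violates x* != x, so this adversary loses:
assert wtr_experiment(lambda pk, sk: sk,
                      lambda pk: b"q",
                      lambda pk, y: (b"q", y)) is False
```

The structure makes the adversary's handicap visible: the only information flowing from the tampering phase to the forging phase is the single answer y.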

Definition 13

(Weak tamper-resilient (WTR) signature). A signature scheme \((\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\) is said to satisfy WTR security, if for any PPT adversary \(\mathcal {A}=(\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\), we have

$$ \begin{aligned} \Pr [(pk,sk)\leftarrow \mathsf {Gen}(1^\lambda ),sk'\leftarrow \mathbf {Tamper}(1^\lambda ,pk,sk), m\leftarrow \mathbf {Break}1(1^\lambda ,pk),\\ \sigma \leftarrow \mathsf {Sign}_{sk'}(m), (m^*,\sigma ^*)\leftarrow \mathbf {Break}2(1^\lambda ,pk,\sigma ):\\ m^*\ne m\wedge sk'\in \mathcal {SK}\wedge \mathsf {Verify}_{pk}(m^*,\sigma ^*)=1]\le { negl}(\lambda ). \end{aligned} $$

Notice that when showing impossibility of black-box reductions for proving WTR security, we need to consider reduction algorithms that can exploit an adversary \(\mathcal {A}=(\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\) in any way they want, i.e., adaptively make queries to \(\mathbf {Tamper}\), \(\mathbf {Break}1\), and \(\mathbf {Break}2\) in any order and any number of times (provided that the total number of queries is polynomial in \(\lambda \)). This is an obstacle we need to overcome in our formal proofs. Also, note that unlike previous works treating arbitrary tampering attacks [23, 24, 28, 35], we define \(\mathbf {Tamper}\) as part of the adversary. However, we do this only for simplicity, and it does not place any additional restriction on the reductions we consider. We refer the reader to the remark on our model in Sect. 1.3 for further discussion.

3.3 Simulatable Attacks for WTR Secure PDPs and Signatures

In this subsection, we focus on showing the existence of simulatable attacks on the WTR security of PDPs and unique signatures, which implies the impossibility of deriving their WTR security from any commonly used assumption via black-box reductions, due to Theorem 1.

Simulatable Attack for WTR Secure PDPs. We start with giving a theorem showing that if a PDP satisfies weak unpredictability, then there exists a simulatable attack on its WTR security.

Theorem 2

For any weakly unpredictable PDP \(\varPhi =(\mathsf {Gen},\mathsf {Comp},\mathsf {Prove},\mathsf {Verify})\), there exists a simulatable attack on the WTR security of \(\varPhi \).

Overview and Idea of the Proof. Let \(\mathbb {F}_\lambda \) be the set of all functions \(f:\{0,1\}^p\rightarrow \mathcal {R}_g\times \mathcal {R}_p\), and let x and \(x^*\) be distinct elements of \(\mathcal {X}\). We first construct an inefficient adversary \(\mathcal {A}\), which is an ensemble of stateless algorithms \(\{\mathcal {A}_{\lambda ,f}=(\mathbf {Tamper}_{\lambda ,f},\mathbf {Break}1_{\lambda ,f},\mathbf {Break}2_{\lambda ,f})\}_{\lambda \in \mathbb {N},f\in \mathbb {F}_\lambda }\). Each \(\mathcal {A}_{\lambda ,f}\) hard-wires \((x,x^*)\) and an element \(f\in \mathbb {F}_\lambda \). We show that each \(\mathcal {A}_{\lambda ,f}\) breaks the WTR security of \(\varPhi \). Then we show the existence of a PPT stateful simulator \(\mathbf {Sim}\) that can simulate the behavior of \(\mathcal {A}_{\lambda ,f}\) in the case that f is randomly chosen from \(\mathbb {F}_\lambda \) (and hence is a random function with domain \(\{0,1\}^p\) and range \(\mathcal {R}_g\times \mathcal {R}_p\)). We design \(\mathcal {A}_{\lambda ,f}\) as follows.

\(\mathbf {Tamper}_{\lambda ,f}(1^\lambda ,pk,sk)\) runs \((r_g,r_p)=f(pk)\) and \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\) (where \(r_p\) is an internal random coin of \(\mathsf {Prove}\) that is used when checking the validity of \((pk,sk)\), as we will explain later). If \((pk,sk)\) is valid, it outputs \(sk'\) as the tampered key. \(\mathbf {Break}1_{\lambda ,f}(pk)\) outputs x as the computing query. \(\mathbf {Break}2_{\lambda ,f}(pk,y)\) runs \((r_g,r_p)=f(pk)\) and \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\), and checks whether y is a correct answer for x (i.e., whether \(y=\mathsf {Comp}_{sk'}(x)\)). If the check passes, it does an exhaustive search to find \((y^*,\pi ^*)\) such that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\), and outputs \((x^*,y^*)\) as a forgery. Otherwise, it aborts. One can see that \(\mathcal {A}_{\lambda ,f}\) breaks the WTR security of \(\varPhi \) due to correctness and uniqueness.

Next we explain why we can have a PPT stateful simulator \(\mathbf {Sim}\) which is indistinguishable from \(\mathcal {A}_{\lambda ,f}\) (where \(f\leftarrow \mathbb {F}_\lambda \)) in the view of any (possibly inefficient) oracle-access machine \(\mathcal {B}\). To make \(\mathbf {Break}2_{\lambda ,f}\) output a forgery (rather than abort), \(\mathcal {B}\) has to answer the query x made by \(\mathbf {Break}1_{\lambda ,f}\). However, without having previously given \(\mathbf {Tamper}_{\lambda ,f}\) a valid key pair \((pk,sk)\), \(\mathcal {B}\) would learn no information on \((pk',sk')\). The reason is that \(\mathcal {B}\) only has black-box access to \(\mathbf {Tamper}_{\lambda ,f}\) and cannot see the structure of f. When \(\mathbf {Tamper}_{\lambda ,f}(pk,sk)\) aborts, no information on f(pk) is revealed. In this case, \(\mathcal {B}\) cannot answer x due to the weak unpredictability of \(\varPhi \). Therefore, we can construct \(\mathbf {Sim}\), which simulates the outputs of f via lazy sampling (i.e., by randomly choosing \((r_g,r_p)\) and keeping them in its internal list), and outputs a forgery by using a valid key sk that appeared in a previous \(\mathbf {Tamper}\) query made by \(\mathcal {B}\).Footnote 12 If such sk does not exist, \(\mathbf {Sim}\) aborts. Due to uniqueness, the final outputs of \(\mathbf {Break}2_{\lambda ,f}\) and \(\mathbf {Sim}\) must be identical, which guarantees that the interactions with \(\mathcal {A}_{\lambda ,f}\) and with \(\mathbf {Sim}\) are indistinguishable in the view of \(\mathcal {B}\).

More specifically, we show the indistinguishability by giving hybrid machines \(\mathcal {A}_0\) and \(\mathcal {A}_1\). \(\mathcal {A}_0\) interacts with \(\mathcal {B}\) in the same way as \(\mathcal {A}_{\lambda ,f}\) (where \(f\leftarrow \mathbb {F}_\lambda \)) does, except that it simulates the outputs of f via lazy sampling. \(\mathcal {A}_1\) runs in the same way as \(\mathcal {A}_0\) does, except that it aborts if \(\mathcal {B}\) makes a \(\mathbf {Break}2\) query \((pk,y)\) where a valid secret key sk for pk has not appeared in a previous \(\mathbf {Tamper}\) query. The indistinguishability between \(\mathcal {A}_{\lambda ,f}\) and \(\mathcal {A}_0\) follows from the fact that the outputs of f look perfectly random in the view of \(\mathcal {B}\). Furthermore, \(\mathcal {A}_0\) is indistinguishable from \(\mathcal {A}_1\), since without having previously given \(\mathcal {A}_0\) a valid key pair \((pk,sk)\), all the \(\mathbf {Break}2\) queries including pk that \(\mathcal {B}\) makes will lead \(\mathcal {A}_0\) to abort. Otherwise, we can construct an adversary breaking the statistical weak unpredictability of \(\varPhi \). However, since statistical weak unpredictability is equivalent to (computational) weak unpredictability (see Lemma 1), such an adversary cannot exist due to the weak unpredictability of \(\varPhi \). Finally, we show that \(\mathcal {A}_1\) is indistinguishable from \(\mathbf {Sim}\). The only difference between them is that to output a forgery, \(\mathcal {A}_1\) does an exhaustive search, while \(\mathbf {Sim}\) exploits a valid secret key that appeared in a previous \(\mathbf {Tamper}\) query. Due to the uniqueness of \(\varPhi \), their forgeries (with respect to the same pk) must be identical, and hence \(\mathcal {B}\) cannot distinguish the interactions with \(\mathcal {A}_{\lambda ,f}\) and with \(\mathbf {Sim}\).

Notice that we need to ensure that \(\mathbf {Tamper}_{\lambda ,f}\) and \(\mathbf {Sim}\) output a tampered key only if they receive a valid key pair \((pk,sk)\). Otherwise, \(\mathbf {Sim}\) cannot simulate the behavior of \(\mathcal {A}_{\lambda ,f}\). Nevertheless, one may wonder how they check the validity, since there seems to be no checking algorithm for key pairs. To deal with this issue, we make them verify whether \(\mathsf {Verify}_{pk}(x^*,\mathsf {Comp}_{sk}(x^*),\mathsf {Prove}_{sk}(x^*;r_p))=1\) instead. Although such a procedure does not check the validity of \((pk,sk)\) directly, it guarantees that if \((pk,sk)\) passes the verification, then \(\mathbf {Break}2_{\lambda ,f}\) can always find a forgery for \(x^*\) with respect to pk by brute force, and \(\mathbf {Sim}\) can output the same forgery by using sk.
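The indirect validity check can be sketched in Python as follows; the toy PDP (sk = pk, empty proofs) is our own insecure placeholder, used only to illustrate the mechanics of the check.

```python
import hashlib, os

# Toy PDP with sk = pk and empty proofs (insecure; illustration only).
def comp(sk, x):
    return hashlib.sha256(sk + x).digest()
def prove(sk, x, r_p):
    return b""
def verify(pk, x, y, pi):
    return y == hashlib.sha256(pk + x).digest()

X_STAR = b"x*"   # the hard-wired challenge input

def passes_indirect_check(pk, sk, r_p=b""):
    """The validity test used by Tamper and Sim: rather than checking
    (pk, sk) directly, check that sk yields a verifying output on x*."""
    return verify(pk, X_STAR, comp(sk, X_STAR), prove(sk, X_STAR, r_p))

pk = sk = os.urandom(16)
assert passes_indirect_check(pk, sk)              # honest pair passes
assert not passes_indirect_check(pk, bytes(16))   # unrelated sk fails
```

The point is that passing this test is exactly the property both parties need: it certifies that a verifying pair for \(x^*\) under pk exists, so the exhaustive search and the sk-based computation land on the same (unique) forgery.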

Proof

(of Theorem 2). For each \(\lambda \in \mathbb {N}\), let \(\mathbb {F}_\lambda \) be the set of all functions \(f:\{0,1\}^p\rightarrow \mathcal {R}_g\times \mathcal {R}_p\), and let x and \(x^*\) be arbitrary distinct elements of \(\mathcal {X}\). We define an ensemble of inefficient stateless adversaries \(\{\mathcal {A}_{\lambda ,f}=(\mathbf {Tamper}_{\lambda ,f},\mathbf {Break}1_{\lambda ,f},\mathbf {Break}2_{\lambda ,f})\}_{\lambda \in \mathbb {N},f\in \mathbb {F}_\lambda }\), where x and \(x^*\) are hard-wired, as follows.

\(\mathbf {Tamper}_{\lambda ,f}(pk,sk)\):Footnote 13

  1. Compute \((r_g,r_p)=f(pk)\).

  2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

  3. If \(\mathsf {Verify}_{pk}(x^*,\mathsf {Comp}_{sk}(x^*),\mathsf {Prove}_{sk}(x^*;r_p))=1\), return \(sk'\). Otherwise, return \(\bot \).

\(\mathbf {Break}1_{\lambda ,f}(pk)\):

  1. Return x.

\(\mathbf {Break}2_{\lambda ,f}(pk,y)\):

  1. Compute \((r_g,r_p)=f(pk)\).

  2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

  3. If \(y=\mathsf {Comp}_{sk'}(x)\), then do an exhaustive search to find the lexicographically first pair \((y^*,\pi ^*)\) such that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\), and return \((x^*,y^*)\). If \(y\ne \mathsf {Comp}_{sk'}(x)\) or such \((y^*,\pi ^*)\) does not exist, return \(\bot \).

It is clear that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\) implies \(y^*=\mathsf {Comp}_{sk}(x^*)\) if \((pk,sk)\) is an honestly sampled key pair, due to the correctness and uniqueness of \(\varPhi \). Furthermore, the checks done by \(\mathbf {Tamper}_{\lambda ,f}\) and \(\mathbf {Break}2_{\lambda ,f}\) always pass in a WTR security game due to correctness, and we have \(x\ne x^*\). Hence, for each \(f\in \mathbb {F}_\lambda \), \(\mathcal {A}_{\lambda ,f}\) breaks the WTR security of \(\varPhi \) with advantage 1.

We now define the PPT stateful simulator \(\mathbf {Sim}(1^\lambda )\). It internally keeps a list \(\L \) (which is initially empty) as a state, and answers queries as follows.

  • On receiving a \(\mathbf {Tamper}\) query \((pk,sk)\):

    1. Search \((pk,\overline{sk},r_g,r_p)\in \L \).Footnote 14 If the searching process failed (i.e., pk is not currently in \(\L \)), randomly choose \((r_g,r_p)\leftarrow \mathcal {R}_g\times \mathcal {R}_p\), and add \((pk,\overline{sk},r_g,r_p)\) (where \(\overline{sk}=\bot \)) to \(\L \).

    2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

    3. If \(\mathsf {Verify}_{pk}(x^*,\mathsf {Comp}_{sk}(x^*),\mathsf {Prove}_{sk}(x^*;r_p))=1\), replace \(\overline{sk}\) with sk in \(\L \) (if \(\overline{sk}=\bot \)), and return \(sk'\). Otherwise, return \(\bot \).

  • On receiving a \(\mathbf {Break}1\) query pk:

    1. Return x.

  • On receiving a \(\mathbf {Break}2\) query \((pk,y)\):

    1. Search \((pk,\overline{sk},r_g,r_p)\in \L \). If the searching process failed or \(\overline{sk}=\bot \), return \(\bot \).

    2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

    3. If \(y=\mathsf {Comp}_{sk'}(x)\), compute \(y^*=\mathsf {Comp}_{\overline{sk}}(x^*)\) and return \((x^*,y^*)\). Otherwise, return \(\bot \).

Let \(\mathcal {B}^{(\cdot )}\) be any (possibly inefficient) oracle-access probabilistic machine that makes at most polynomially many queries. We show that \(|\Pr _{f\leftarrow \mathbb {F}_\lambda }[1\leftarrow \mathcal {B}^{\mathcal {A}_{\lambda ,f}}(1^\lambda )]-\Pr [1\leftarrow \mathcal {B}^{\mathbf {Sim}(1^\lambda )}(1^\lambda )]|\le { negl}(\lambda )\) by giving hybrid machines.Footnote 15

Hybrid Machine \(\mathcal {A}_0\): We define an inefficient stateful machine \(\mathcal {A}_0\) as follows.

  • On receiving a \(\mathbf {Tamper}\) query \((pk,sk)\):

    1. Search \((pk,\overline{sk},r_g,r_p)\in \L \) (where \(\L \) is initialized with \(\emptyset \)). If the searching process failed, randomly choose \((r_g,r_p)\leftarrow \mathcal {R}_g\times \mathcal {R}_p\), and add \((pk,\overline{sk},r_g, r_p)\) (where \(\overline{sk}=\bot \)) to \(\L \).

    2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

    3. If \(\mathsf {Verify}_{pk}(x^*,\mathsf {Comp}_{sk}(x^*),\mathsf {Prove}_{sk}(x^*;r_p))=1\), replace \(\overline{sk}\) with sk in \(\L \) (if \(\overline{sk}=\bot \)), and return \(sk'\). Otherwise, return \(\bot \).

  • On receiving a \(\mathbf {Break}1\) query pk:

    1. Return x.

  • On receiving a \(\mathbf {Break}2\) query \((pk,y)\):

    1. Search \((pk,\overline{sk},r_g,r_p)\in \L \). If the searching process failed, then randomly choose \((r_g,r_p)\leftarrow \mathcal {R}_g\times \mathcal {R}_p\), and add \((pk,\bot ,r_g,r_p)\) to \(\L \).

    2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

    3. If \(y=\mathsf {Comp}_{sk'}(x)\), then do an exhaustive search to find the lexicographically first pair \((y^*,\pi ^*)\) such that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\), and return \((x^*,y^*)\). If \(y\ne \mathsf {Comp}_{sk'}(x)\) or such \((y^*,\pi ^*)\) does not exist, return \(\bot \).

Lemma 3

\(\Pr _{f\leftarrow \mathbb {F}_\lambda }[1\leftarrow \mathcal {B}^{\mathcal {A}_{\lambda ,f}}(1^\lambda )]=\Pr [1\leftarrow \mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )]\).

Proof

(of Lemma 3). Since f (used by \(\mathcal {A}_{\lambda ,f}\)) is randomly sampled from \(\mathbb {F}_\lambda \), the outputs of f are perfectly random in the view of \(\mathcal {B}\). Moreover, \(\mathcal {A}_0\) simply simulates f via lazy sampling. The lemma then follows from the fact that the views of \(\mathcal {B}\) in the interactions with \(\mathcal {A}_{\lambda ,f}\) (where \(f\leftarrow \mathbb {F}_\lambda \)) and with \(\mathcal {A}_0\) are identical.    \(\square \)

Hybrid Machine \(\mathcal {A}_1\): We now define an inefficient stateful machine \(\mathcal {A}_1\) which is the same as \(\mathcal {A}_0\), except that on receiving a \(\mathbf {Break}2\) query \((pk,y)\), \(\mathcal {A}_1\) aborts if there is no valid secret key \(\overline{sk}\) for pk stored in \(\L \). In other words, \(\mathcal {A}_1\) aborts if pk has not previously appeared in a \(\mathbf {Tamper}\) query whose answer is not \(\bot \). Formally, on receiving a \(\mathbf {Break}2\) query \((pk,y)\), \(\mathcal {A}_1\) runs as follows. (Below, the difference from \(\mathcal {A}_0\) lies only in Step 1.)

  1. Search \((pk,\overline{sk},r_g,r_p)\in \L \). If the searching process failed or \(\overline{sk}=\bot \), return \(\bot \).

  2. Compute \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\).

  3. If \(y=\mathsf {Comp}_{sk'}(x)\), do an exhaustive search to find the lexicographically first pair \((y^*,\pi ^*)\) such that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\), and return \((x^*,y^*)\). If \(y\ne \mathsf {Comp}_{sk'}(x)\) or such \((y^*,\pi ^*)\) does not exist, return \(\bot \).

Lemma 4

\(|\Pr [1\leftarrow \mathcal {B}^{\mathcal {A}_1(1^\lambda )}(1^\lambda )]-\Pr [1\leftarrow \mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )]|\le { negl}(\lambda )\).

Proof

(of Lemma 4). Let E be the event that during the execution of \(\mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )\), \(\mathcal {B}\) makes a \(\mathbf {Break}2\) query \((pk,y)\) such that

  • pk has not appeared in a valid \(\mathbf {Tamper}\) query (the answer of which is not \(\bot \)) previously, and

  • the answer to this \(\mathbf {Break}2\) query is not \(\bot \).

If E does not occur during the execution of \(\mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )\), then the view of \(\mathcal {B}\) is identical to its view in the interaction with \(\mathcal {A}_1\). Therefore, we have \(|\Pr [1\leftarrow \mathcal {B}^{\mathcal {A}_1(1^\lambda )}(1^\lambda )]-\Pr [1\leftarrow \mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )]|\le \Pr [E]\). It remains to show that \(\Pr [E]\) is negligible.

Claim 1

Let the total number of (all kinds of) queries made by \(\mathcal {B}\) be q. Then there exists an inefficient adversary \(\mathcal{E}\) that breaks the statistical weak unpredictability of \(\varPhi \) with probability at least \(\Pr [E]/q^2\).

Proof Idea of Claim 1. A tampered key is generated as \((pk',sk')\leftarrow \mathsf {Gen}(1^\lambda ;r_g)\), where \(r_g\) is a randomness linked with a public key pk. If \(\mathcal {B}\) does not make a valid \(\mathbf {Tamper}\) query including pk (i.e., one that does not lead \(\mathcal {A}_0\) to output \(\bot \)), it learns no information on \((pk',sk')\).Footnote 16 In this case, making a \(\mathbf {Break}2\) query that includes pk but to which \(\mathcal {A}_0\) does not return \(\bot \) means forging a valid input/output pair under \(pk'\), due to the check done at the third step in the response to a \(\mathbf {Break}2\) query. Therefore, \(\mathcal {B}\) can be used to break the statistical weak unpredictability of \(\varPhi \). The formal proof is as follows.

Proof

(of Claim 1). The description of \(\mathcal{E}\) is as follows.

The challenger samples \((pk',sk')\leftarrow \mathsf {Gen}(1^\lambda )\) and gives \(1^\lambda \) to \(\mathcal{E}\). Then \(\mathcal{E}\) randomly chooses \(\hat{i},\hat{j}\leftarrow \{1,\cdots ,q\}\), and interacts with \(\mathcal {B}\) in the same way as \(\mathcal {A}_0\) does (during the execution of \(\mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )\)), except that

  1. From the \(\hat{i}\)th query, every time when receiving a \(\mathbf {Tamper}\) or \(\mathbf {Break}2\) query including the pk used in the \(\hat{i}\)th query, \(\mathcal{E}\) returns \(\bot \) to \(\mathcal {B}\) (except for the \(\hat{j}\)th query).Footnote 17

  2. On receiving the \(\hat{j}\)th query, if it is a \(\mathbf {Break}2\) query, denoted by \((pk,y)\), \(\mathcal{E}\) returns \((x,y)\) to the challenger and terminates. Otherwise \(\mathcal E\) aborts.

Now we argue that \(\mathcal{E}\) returns \((x,y)\) such that \(y=\mathsf {Comp}_{sk'}(x)\) with probability at least \(\Pr [E]/q^2\).

During the execution of \(\mathcal {B}^{\mathcal {A}_0(1^\lambda )}(1^\lambda )\), when E occurs, \(\mathcal {B}\) must have made a \(\mathbf {Break}2\) query \((pk,y)\), such that (a) there exists no entry \((pk, \overline{sk}, r_g, r_p) \in \mathsf {L}\) such that \(\overline{sk} \ne \bot \), but (b) \(\mathcal {A}_0\) does not return \(\bot \) as its answer. Therefore, for randomly sampled \(\hat{i},\hat{j}\leftarrow \{1,\cdots ,q\}\) (not learnt by \(\mathcal {B}\)), when E occurs, the probability that

  • the \(\hat{j}\)th query is the first \(\mathbf {Break}2\) query satisfying the above two conditions (a) and (b), and

  • the \(\hat{j}\)th query includes pk, which is firstly used in the \(\hat{i}\)th query,

is at least \(1/q^2\). Since \(\mathcal{E}\) perfectly simulates \(\mathcal {A}_0\) in the interaction with \(\mathcal {B}\) in this case (till the termination), it returns \((x,y)\) such that \(y=\mathsf {Comp}_{sk'}(x)\) with probability at least \(\Pr [E]/q^2\). Notice that in this case, \(r_g\) linked with pk (used in the \(\hat{j}\)th query) is not used by \(\mathcal{E}\), and we view \((pk',sk')\) (generated by the challenger and not learnt by \(\mathcal{E}\)) as the key pair generated by using \(r_g\). This completes the proof of Claim 1.    \(\square \)
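The counting step above can be checked numerically: for any fixed pair of target positions among q queries, uniformly random guesses \(\hat{i},\hat{j}\) hit them with probability exactly \(1/q^2\). A Monte Carlo sketch (ours, not part of the proof; all names are illustrative):

```python
import random

def guess_hit_rate(q, i_star, j_star, trials=20000):
    """Fraction of trials in which uniformly random guesses
    (hat_i, hat_j) from {1,...,q}^2 hit the fixed target pair
    (i_star, j_star); the expectation is exactly 1/q^2."""
    hits = sum(
        1 for _ in range(trials)
        if random.randint(1, q) == i_star and random.randint(1, q) == j_star
    )
    return hits / trials
```

For q = 4 the empirical rate concentrates around \(1/16\), matching the \(\Pr [E]/q^2\) loss in Claim 1.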

Due to the equivalence between weak unpredictability and statistical weak unpredictability (see Lemma 1) and the assumption that \(\varPhi \) is weakly unpredictable, \(\Pr [E]/q^2\) is negligible. Therefore, \(\Pr [E]\) is negligible, completing the proof of Lemma 4.    \(\square \)

Lemma 5

\(\Pr [1\leftarrow \mathcal {B}^{\mathbf {Sim}(1^\lambda )}(1^\lambda )]=\Pr [1\leftarrow \mathcal {B}^{\mathcal {A}_1(1^\lambda )}(1^\lambda )]\).

Proof

(of Lemma 5). The difference between \(\mathbf {Sim}\) and \(\mathcal {A}_1\) is that to return a correct input/output pair, \(\mathbf {Sim}\) computes \(y^*=\mathsf {Comp}_{\overline{sk}}(x^*)\) while \(\mathcal {A}_1\) does an exhaustive search to find the lexicographically first pair \((y^*,\pi ^*)\) such that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\). One can see that during the execution of \(\mathcal {B}^{\mathcal {A}_1(1^\lambda )}(1^\lambda )\) and \(\mathcal {B}^{\mathbf {Sim}(1^\lambda )}(1^\lambda )\), \(\mathsf {Verify}_{pk}(x^*,\mathsf {Comp}_{\overline{sk}}(x^*),\mathsf {Prove}_{\overline{sk}}(x^*;r_p))=1\) holds for all \((pk,\overline{sk},r_g,r_p)\in \mathsf {L}\) where \(\overline{sk}\ne \bot \), due to the check done in the third step in the response to a \(\mathbf {Tamper}\) query. Furthermore, there exists only one \(y^*\) such that \(\mathsf {Verify}_{pk}(x^*,y^*,\pi ^*)=1\) for some \(\pi ^*\), due to the uniqueness of \(\varPhi \). Therefore, the forgeries output by \(\mathbf {Sim}\) and \(\mathcal {A}_1\) with respect to the same pk are the same, i.e., the views of \(\mathcal {B}\) in the interactions with \(\mathbf {Sim}\) and with \(\mathcal {A}_1\) are identical.    \(\square \)

Due to Lemmas 3 to 5, we have \(|\Pr _{f\leftarrow \mathbb {F}_\lambda }[1\leftarrow \mathcal {B}^{\mathcal {A}_{\lambda ,f}}(1^\lambda )]-\Pr [1\leftarrow \mathcal {B}^{\mathbf {Sim}(1^\lambda )}(1^\lambda )]|\le { negl}(\lambda )\), completing the proof of Theorem 2.    \(\square \)

Simulatable Attack for WTR Secure Unique-Key Signatures. Next we give our negative result on tamper-resilient unique-key signatures.

Theorem 3

For any weakly unforgeable unique-key signature scheme \(\varSigma =(\mathsf {Gen},\mathsf {Sign},\mathsf {Verify})\), there exists a simulatable attack on the WTR security of \(\varSigma \).

We refer the reader to the full paper for the adversaries and simulator in this simulatable attack. Unlike those in the proof of Theorem 2, the adversaries and simulator in this simulatable attack check the validity of a key pair directly, by using the key checking algorithm \(\mathsf {UKCheck}\), and the adversaries exhaustively search for a secret key and use it to forge a signature. We omit the proof of the indistinguishability between the adversaries and the simulator, since it can be obtained by slightly modifying the proof of Theorem 2.

3.4 Summary of Negative Results on PDPs and Signatures

Since we have shown the existence of simulatable attacks on the WTR security of any weakly unpredictable PDP and any weakly unforgeable unique-key signature scheme, we obtain the following corollary, which is the first main result in our paper. It directly follows from Theorems 1, 2, and 3.

Corollary 1

Let \(\varPhi \) (respectively, \(\varSigma \)) be a PDP (respectively, a unique-key signature scheme). If \(\varPhi \) (respectively, \(\varSigma \)) is weakly unpredictable (respectively, weakly unforgeable), then there exists no black-box reduction deriving the WTR security of \(\varPhi \) (respectively, \(\varSigma \)) from the security of any cryptographic game.

This corollary implies the impossibility of deriving the (even extremely weak) tamper-resilience of PDPs and unique-key signatures from any common assumption, via black-box reductions.

As explained before, VRFs, VUFs, and unique signatures can be cast as PDPs. Furthermore, the security notions for VRFs and VUFs, called pseudorandomness and unpredictability respectively (see [2] for the formal definitions), are stronger than weak unpredictability, and a weakly unforgeable unique signature scheme can be viewed as a weakly unpredictable PDP. Hence, we obtain the following corollary derived from Corollary 1.

Corollary 2

Let \(\varPhi \) be a VRF, a VUF, or a weakly unforgeable unique signature scheme. There exists no black-box reduction deriving the WTR security of \(\varPhi \) from the security of any cryptographic game.

Negative Result on Injective OWFs. In [68], Wichs gave a negative result on leakage-resilient unique-key one-way relations. One may also wonder whether tamper-resilient injective OWFs are achievable. Indeed, Faonio and Venturi [28] implicitly used a tamper-resilient OWF, and following their approach, one can combine a tamper-resilient injective OWF with the framework in [25] to obtain tamper-resilient unique-key signatures in the CRS model, where tamper-proof public parameters are assumed to be available. However, a simplified version of the proof of Theorem 2 can be adopted to show the non-existence of black-box reductions deriving the tamper-resilience of OWFs from the security of any cryptographic game. We refer the reader to the full paper for the definition, security notion, and simulatable attack for tamper-resilient injective OWFs. Notice that this negative result does not imply those on unique-key signatures and encryption schemes, since a public/secret key pair is not necessarily an output/input pair of an OWF, and a signature or a ciphertext does not necessarily include public keys.

Negative Result on Re-randomizable Signatures. In [43], Hofheinz et al. extended the negative results on tight security of unique signatures, which was firstly proved by Coron [19] and then fixed by Kakvi and Kiltz [48], for the notion of re-randomizable signatures. This notion is more general and hence captures more instantiations, such as the Waters signature scheme [66] and its variant [43]. We argue that such an extension can also be adopted in our case. Namely, by slightly changing the proof of Theorem 2, we can prove the impossibility on re-randomizable signatures with tamper-resilience. Intuitively, like the uniqueness of a PDP, which guarantees that the outputs of the adversary and the simulator are the same, the re-randomizability guarantees that their (re-randomized) outputs are indistinguishable in the view of any (possibly inefficient) distinguisher. We refer the reader to the full paper for the definition of re-randomizable signatures and the simulatable attack on their WTR security. We omit the formal proof since it can be done by slightly modifying the proof of Theorem 2.

We can also extend the negative result on unique-key signatures for signatures with key re-randomization [5], and the same extension can be adopted for the above mentioned result on injective OWFs and our result on PKE schemes introduced in the next section. These extensions are similar to the above one and hence we omit the details.

Remark on Self-destructive and Key-Updating Mechanisms. The tamper-resilience notions of many constructions are guaranteed by self-destructive or key-updating mechanisms (e.g., [23, 27, 35, 37, 49]). The former allows a device to erase all internal data when tampering is detected, so that an adversary cannot obtain any information from further queries. The latter allows a device to update its secret information. Since we treat adversaries that only make one tampering query and one computing query, our results (in both this section and the next section) capture tamper-resilient primitives with these two mechanisms as well.

4 Impossibility on Unique-Message PKE Schemes

In this section, we give a negative result on unique-message PKE schemes. Our result shows that if a unique-message PKE scheme satisfies some “extremely” weak security, then there exists no black-box reduction deriving its tamper-resilience in the plain model from any commonly used falsifiable assumption.

The rest of this section is organized as follows. In Sect. 4.1, we give the definition of a unique-message PKE scheme. In Sect. 4.2, we define several security notions. In Sect. 4.3, we show the existence of a weak simulatable attack on the tamper-resilience of unique-message PKE schemes. In Sect. 4.4, we summarize our negative result.

4.1 Definition of PKE Schemes

We now give the definition of a PKE scheme.

Definition 14

(Public key encryption (PKE)). A PKE scheme consists of the PT algorithms \((\mathsf {Gen},\mathsf {Enc},\mathsf {Dec})\). (a) \({\mathsf {Gen}}\) is a probabilistic algorithm that takes as input \(1^\lambda \), and returns a public/secret key pair \((pk,sk)\in \{0,1\}^p\times \{0,1\}^s\) for some polynomials \(p=p(\lambda )\) and \(s=s(\lambda )\). The set of all secret keys (output by \(\mathsf {Gen}(1^\lambda )\)) and the (internal) randomness space of \(\mathsf {Gen}\) are denoted by \(\mathcal {SK}\) and \(\mathcal {R}_g\), respectively. (b) \({\mathsf {Enc}}\) is a probabilistic algorithm that takes as input a public key pk and a message \(m\in \mathcal {M}\), where \(\mathcal M\) is the message space, and returns a ciphertext \(ct\in \{0,1\}^{c}\) for some polynomial \(c=c(\lambda )\). The (internal) randomness space of \(\mathsf {Enc}\) is denoted by \(\mathcal {R}_e\). (c) \({\mathsf {Dec}}\) is a deterministic algorithm that takes as input a secret key sk and a ciphertext ct, and returns a message \(m\in \mathcal {M}\) or \(\bot \).

A PKE scheme is required to satisfy correctness, which means that \(\mathsf {Dec}_{sk}(ct)=m\) holds for all \(\lambda \in \mathbb {N}\), all \((pk,sk)\leftarrow \mathsf {Gen}(1^\lambda )\), all \(m\in \mathcal {M}\), and all \(ct\leftarrow \mathsf {Enc}_{pk}(m)\).

We now define unique-message PKE schemes. For a unique-message PKE scheme, there exists a plaintext-recovering algorithm which can correctly decrypt a ciphertext, and it is required that when using it to decrypt a ciphertext (possibly outside the support of \(\mathsf {Enc}\)), all valid secret keys with respect to a public key (possibly outside the support of \(\mathsf {Gen}\)) should lead to the same decryption result.

Definition 15

(Unique-message PKE). A PKE scheme \((\mathsf {Gen},\mathsf {Enc},\mathsf {Dec})\) is said to be a unique-message PKE scheme if there exist two deterministic PT algorithms \((\mathsf {UMCheck},\mathsf {Rec})\) such that (a) \(\mathsf {UMCheck}(pk,sk)=1\) holds for all \(\lambda \in \mathbb {N}\) and all \((pk,sk)\leftarrow \mathsf {Gen}(1^\lambda )\), (b) \((\mathsf {Gen},\mathsf {Enc},\mathsf {Rec})\) satisfies correctness, and (c) for all \(\lambda \in \mathbb {N}\), all \(pk\in \{0,1\}^p\), and all \(ct\in \{0,1\}^c\) (possibly outside the support of \(\mathsf {Gen}\) and \(\mathsf {Enc}\) respectively), there exists no pair \((sk,sk')\in \{0,1\}^s\times \{0,1\}^s\) that simultaneously satisfies \(\mathsf {UMCheck}(pk,sk)=\mathsf {UMCheck}(pk,sk')=1\) and \(\mathsf {Rec}_{sk}(ct)\ne \mathsf {Rec}_{sk'}(ct)\).

For ease of understanding the rationale behind the above definition, as an instance, we show how it captures the Cramer-Shoup scheme [20] in the full paper. Furthermore, we argue that this definition captures all the unique-key PKE schemes, which are defined in the same way as unique-key signatures, namely, there exists only one valid secret key for each public key (not necessarily output by \(\mathsf {Gen}(1^\lambda )\)). We refer the reader to the full paper for the formal definition and the proof that all unique-key PKE schemes fall under the definition of unique-message PKE schemes.
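As a concrete toy instance (ours, not from the paper), textbook ElGamal over a small prime-order subgroup can be read as a unique-key, and hence unique-message, PKE scheme: since \(pk=g^{sk}\) fixes sk uniquely modulo the group order, \(\mathsf {UMCheck}\) just recomputes the public key, and \(\mathsf {Rec}\) coincides with \(\mathsf {Dec}\). All function names and the tiny parameters are illustrative:

```python
import random

P, Q, G = 23, 11, 2          # toy group: G generates the order-11 subgroup mod 23
SUBGROUP = sorted(pow(G, i, P) for i in range(Q))   # message space

def gen():
    """Gen: sk uniform in {0,...,Q-1}, pk = G^sk mod P."""
    sk = random.randrange(Q)
    return pow(G, sk, P), sk

def enc(pk, m):
    """Enc: (G^r, m * pk^r) for fresh randomness r."""
    r = random.randrange(1, Q)
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def dec(sk, ct):
    """Dec: strip the mask c1^sk from c2 (deterministic)."""
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, P), -1, P)) % P

def umcheck(pk, sk):
    """UMCheck: sk in {0,...,Q-1} is the unique valid key for pk."""
    return 0 <= sk < Q and pow(G, sk, P) == pk

rec = dec                    # Rec coincides with Dec here
```

With real parameters one would use a cryptographically large group; the toy sizes only make the interface, correctness, and the uniqueness check visible.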

4.2 Security Notions for PKE Schemes

In this subsection, we define weak one-wayness and WTR security for PKE schemes.

At first, we give the definition of weak one-wayness. This security notion only guarantees that an adversary, which is allowed neither to learn pk nor to make any decryption query, cannot recover a message from a randomly generated ciphertext.

Definition 16

(Weak one-wayness). A PKE scheme \((\mathsf {Gen},\mathsf {Enc},\mathsf {Dec})\) is said to be weakly one-way, if for any PPT adversary \(\mathcal {A}\), we have

$$ \begin{aligned} \Pr [(pk,sk)\leftarrow {\mathsf {Gen}}(1^\lambda ), m\leftarrow \mathcal{M}, ct\leftarrow \mathsf {Enc}_{pk}(m), \\m'\leftarrow \mathcal {A}(1^\lambda ,ct): m=m'] \le {{ negl}}(\lambda ). \end{aligned} $$

Remark. We can also define statistical weak one-wayness by replacing “PPT adversary” with “(possibly inefficient) adversary”. However, unlike the cases of unpredictability and unforgeability notions defined for PDPs and signatures, statistical weak one-wayness is not necessarily implied by weak one-wayness. Specifically, for a one-way PKE scheme, one can see that its security does not change if we put the public key into the ciphertext. The one-way security of the original PKE scheme implies the weak one-wayness of this modified scheme. However, if the public key is part of the ciphertext, the scheme cannot be statistically weakly one-way because an inefficient adversary can find a secret key corresponding to the public key and decrypt the challenge ciphertext.
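One run of the experiment in Definition 16 can be sketched as follows, with the scheme's algorithms passed in as plain Python functions (the function names are ours); note that the adversary receives only ct:

```python
import random

def weak_ow_experiment(gen, enc, message_space, adversary):
    """One run of the weak one-wayness game: the adversary sees only the
    challenge ciphertext ct -- neither pk nor a decryption oracle."""
    pk, sk = gen()
    m = random.choice(message_space)
    ct = enc(pk, m)
    return adversary(ct) == m   # True iff the adversary wins this run
```

For instance, a (trivially insecure) toy scheme with \(\mathsf {Enc}\) the identity map is broken with probability 1 by the adversary that outputs ct itself.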

We now define WTR security for PKE schemes. Like WTR security for signatures, an adversary in the security game consists of three independent components \((\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\). \(\mathbf {Tamper}\) determines a tampered secret key, on input a public/secret key pair. \(\mathbf {Break}1\) takes as input a public key and makes a decryption query. On receiving the answer generated by using the tampered secret key and a challenge ciphertext, \(\mathbf {Break}2\) tries to decrypt the challenge ciphertext. Similarly to WTR security for signatures, we allow the adversary to make only one tampering query and one decryption query, and forbid the communication between \(\mathbf {Break}1\) and \(\mathbf {Break}2\). If the message space is super-polynomially large, one may consider this security notion as an extremely weak version of the indistinguishability under chosen ciphertext attacks and tampering attacks defined in [28, 35], except that we treat the plain model. As noted before, the more restrictive the adversary is, the stronger our negative result is. The formal definition is as follows.

Definition 17

(Weak tamper-resilient (WTR) PKE). A PKE scheme \((\mathsf {Gen},\mathsf {Enc},\mathsf {Dec})\) is said to satisfy WTR security, if for any PPT adversary \(\mathcal {A}=(\mathbf {Tamper},\mathbf {Break}1,\mathbf {Break}2)\), we have

$$ \begin{aligned} \Pr [(pk,sk)\leftarrow \mathsf {Gen}(1^\lambda ),sk'\leftarrow \mathbf {Tamper}(1^\lambda ,pk,sk), ct\leftarrow \mathbf {Break}1(1^\lambda ,pk),\\ m=\mathsf {Dec}_{sk'}(ct),m^*\leftarrow \mathcal {M}, ct^*\leftarrow \mathsf {Enc}_{pk}(m^*),m'\leftarrow \mathbf {Break}2(1^\lambda ,pk,m,ct^*):\\ sk'\in \mathcal {SK}\wedge m'=m^*]\le { negl}(\lambda ). \end{aligned} $$
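The game in Definition 17 can be sketched in the same style (names are ours): one tampering query, one decryption query answered under the tampered key, and no channel between \(\mathbf {Break}1\) and \(\mathbf {Break}2\) other than the game itself:

```python
import random

def wtr_pke_experiment(gen, enc, dec, SK, M, Tamper, Break1, Break2):
    """One run of the WTR game for PKE: the adversary wins iff the
    tampered key lies in SK and Break2 recovers the challenge message."""
    pk, sk = gen()
    sk2 = Tamper(pk, sk)          # the (single) tampering query
    ct = Break1(pk)               # the (single) decryption query
    m = dec(sk2, ct)              # answered under the tampered key
    m_star = random.choice(M)
    ct_star = enc(pk, m_star)     # challenge ciphertext
    m_prime = Break2(pk, m, ct_star)
    return sk2 in SK and m_prime == m_star
```

A degenerate toy scheme (identity encryption) makes the control flow easy to trace; any real scheme would plug in the same way.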

4.3 Weak Simulatable Attack for WTR Secure Unique-Message PKE Schemes

Next we give a theorem showing the existence of a weak simulatable attack on the WTR security of unique-message PKE schemes satisfying weak one-wayness. The basic idea of the proof is similar to that of Theorem 2. The main difference is that in this case, the indistinguishability between the inefficient adversary \(\mathcal {A}\) and the PPT simulator \(\mathbf {Sim}\) follows from the fact that without making a valid \(\mathbf {Tamper}\) query (which does not lead \(\mathcal {A}\) to output \(\bot \)), the distinguisher cannot answer a decryption query (rather than a signing query). Similarly to the simulatable attack for Theorem 3 (see the full paper for details), \(\mathcal {A}\) and \(\mathbf {Sim}\) check the validity of a key pair in a direct way, by using the key checking algorithm \(\mathsf {UMCheck}\).

Theorem 4

For any weakly one-way unique-message PKE scheme \(\varPi =(\mathsf {Gen},\mathsf {Enc},\mathsf {Dec})\), there exists a weak simulatable attack on its WTR security.

We give the overview and idea of the proof of Theorem 4 as below, and refer the reader to the full paper for details.

Overview and Idea of the Proof of Theorem 4. Let \(\mathsf {UMCheck}\) and \(\mathsf {Rec}\) be the key checking algorithm and plaintext-recovering algorithm of \(\varPi \), respectively, and \(\mathbb {F}_\lambda \) be the set of all functions \(f:\{0,1\}^p\rightarrow \mathcal {R}_g\times \mathcal {R}_e\times \mathcal {M}\). We firstly construct an inefficient adversary \(\mathcal {A}\), which is an ensemble of stateless algorithms \(\{\mathcal {A}_{\lambda ,f}=(\mathbf {Tamper}_{\lambda ,f},\mathbf {Break}1_{\lambda ,f},\mathbf {Break}2_{\lambda ,f})\}_{\lambda \in \mathbb {N},f\in \mathbb {F}_\lambda }\). Each \(\mathcal {A}_{\lambda ,f}\) hard-wires an element f in \(\mathbb {F}_\lambda \). We show that each \(\mathcal {A}_{\lambda ,f}\) breaks the WTR security of \(\varPi \), and there exists a PPT stateful simulator \(\mathbf {Sim}\) that can simulate the behavior of \(\mathcal {A}_{\lambda ,f}\), in the case that f is chosen uniformly at random from \(\mathbb {F}_\lambda \) (and hence is a random function with domain \(\{0,1\}^p\) and range \(\mathcal {R}_g\times \mathcal {R}_e\times \mathcal {M}\)). We design \(\mathcal {A}_{\lambda ,f}\) as follows.

\(\mathbf {Tamper}_{\lambda ,f}(1^\lambda ,pk,sk)\) runs \((r_g,r_e,m)=f(pk)\) and \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\). If \((pk,sk)\) is valid (i.e., \(\mathsf {UMCheck}(pk,sk)=1\)), it outputs \(sk'\) as the tampered key. \(\mathbf {Break}1_{\lambda ,f}(pk)\) runs \((r_g,r_e,m)=f(pk)\), computes \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\), and outputs \(ct=\mathsf {Enc}_{pk'}(m;r_e)\) as the decryption query. \(\mathbf {Break}2_{\lambda ,f}(pk,m',ct^*)\) runs \((r_g,r_e,m)=f(pk)\) and \((pk',sk')=\mathsf {Gen}(1^\lambda ;r_g)\), and checks whether the answer to the decryption query is correct (i.e., whether \(m'=m\)). If the check passes, it does an exhaustive search to find a secret key sk such that \(\mathsf {UMCheck}(pk,sk)=1\), and outputs \(m^*=\mathsf {Rec}_{sk}(ct^*)\). Otherwise, it aborts. One can see that \(\mathcal {A}_{\lambda ,f}\) breaks the WTR security of \(\varPi \) due to the correctness of \(\varPi \) and \((\mathsf {Gen},\mathsf {Enc},\mathsf {Rec})\), and the unique-message property of \(\varPi \).
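The three stateless components can be sketched as below, with a keyed hash standing in for the truly random function f (the proof uses a genuinely random f; the stand-in, the parameter interface, and all helper names are our illustrative assumptions):

```python
import hashlib

def make_adversary(gen, enc, umcheck, rec, sk_space, msg_space, seed=b"demo"):
    """Stateless components (Tamper, Break1, Break2) of A_{lambda,f}.
    gen(r) must be deterministic in the randomness r; enc(pk, m, r)
    likewise takes explicit randomness."""
    def f(pk):
        # derive (r_g, r_e, m) deterministically from pk
        h = hashlib.sha256(seed + repr(pk).encode()).digest()
        r_g = int.from_bytes(h[:8], "big")
        r_e = int.from_bytes(h[8:16], "big")
        m = msg_space[int.from_bytes(h[16:24], "big") % len(msg_space)]
        return r_g, r_e, m

    def Tamper(pk, sk):
        r_g, _, _ = f(pk)
        _, sk2 = gen(r_g)                 # (pk', sk') = Gen(1^lambda; r_g)
        return sk2 if umcheck(pk, sk) else None   # abort on invalid pairs

    def Break1(pk):
        r_g, r_e, m = f(pk)
        pk2, _ = gen(r_g)
        return enc(pk2, m, r_e)           # ct = Enc_{pk'}(m; r_e)

    def Break2(pk, m_answer, ct_star):
        _, _, m = f(pk)
        if m_answer != m:                 # check the decryption answer
            return None                   # abort
        for sk in sk_space:               # exhaustive search for a valid key
            if umcheck(pk, sk):
                return rec(sk, ct_star)   # m* = Rec_sk(ct*)
        return None

    return Tamper, Break1, Break2
```

Note that \(\mathbf {Break}2\) is the only inefficient component (the exhaustive search over the key space), exactly as in the proof overview.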

Next we explain why we can have a PPT stateful simulator \(\mathbf {Sim}\) which is indistinguishable from \(\mathcal {A}_{\lambda ,f}\) (where \(f\leftarrow \mathbb {F}_\lambda \)) in the view of any oracle-access PPT machine \(\mathcal {B}\). To make \(\mathbf {Break}2_{\lambda ,f}\) decrypt \(ct^*\) (rather than abort), \(\mathcal {B}\) has to answer the query ct made by \(\mathbf {Break}1_{\lambda ,f}\). However, without having given \(\mathbf {Tamper}_{\lambda ,f}\) a valid key pair \((pk,sk)\) previously, ct is just a randomly generated ciphertext in the view of \(\mathcal {B}\), and \(\mathcal {B}\) learns no information on \((pk',sk')\). The reason is that \(\mathcal {B}\) only has black-box access to \(\mathbf {Tamper}_{\lambda ,f}\) and cannot see the structure of f. When \(\mathbf {Tamper}_{\lambda ,f}(pk,sk)\) aborts, no information on f(pk) is revealed. In this case, \(\mathcal {B}\) cannot answer ct due to the weak one-wayness of \(\varPi \). Therefore, we can construct \(\mathbf {Sim}\), which simulates the outputs of f by executing lazy sampling, and decrypts \(ct^*\) by using a valid key having appeared in a previous \(\mathbf {Tamper}\) query made by \(\mathcal {B}\). If such a key does not exist, \(\mathbf {Sim}\) aborts. Due to the unique-message property, the final outputs of \(\mathbf {Break}2_{\lambda ,f}\) and \(\mathbf {Sim}\) must be identical, which guarantees that the interactions with \(\mathcal {A}_{\lambda ,f}\) and with \(\mathbf {Sim}\) are indistinguishable in the view of \(\mathcal {B}\).

More specifically, we show the indistinguishability by giving hybrid machines \(\mathcal {A}_0\) and \(\mathcal {A}_1\). \(\mathcal {A}_0\) interacts with \(\mathcal {B}\) in the same way as \(\mathcal {A}_{\lambda ,f}\) (where \(f\leftarrow \mathbb {F}_\lambda \)) does, except that it simulates the outputs of f by executing lazy sampling. \(\mathcal {A}_1\) runs in the same way as \(\mathcal {A}_0\) does except that on receiving a \(\mathbf {Break}2\) query \((pk,m',ct^*)\) where \(m'\) is a correct answer for the decryption query, if a valid secret key sk for pk has appeared in a \(\mathbf {Tamper}\) query previously, \(\mathcal {A}_1\) directly computes \(\mathsf {Rec}_{sk}(ct^*)\). Otherwise, it exhaustively searches for sk and then computes \(\mathsf {Rec}_{sk}(ct^*)\). The indistinguishability between \(\mathcal {A}_{\lambda ,f}\) and \(\mathcal {A}_0\) follows from the fact that the outputs of f look perfectly random in the view of \(\mathcal {B}\). Furthermore, due to the unique-message property of \(\varPi \), the decryption results output by \(\mathcal {A}_0\) and \(\mathcal {A}_1\) (with respect to the same pk and \(ct^*\)) must be identical. Hence, \(\mathcal {A}_0\) is indistinguishable from \(\mathcal {A}_1\) in the view of \(\mathcal {B}\). At last, we need to show that \(\mathcal {A}_1\) is indistinguishable from \(\mathbf {Sim}\). The only difference between them is that when \(\mathcal {B}\) makes a \(\mathbf {Break}2\) query \((pk,m',ct^*)\), where a valid secret key sk for pk has not appeared in a \(\mathbf {Tamper}\) query previously, \(\mathcal {A}_1\) checks whether \(m'\) is a correct answer of the decryption query and exhaustively searches for a secret key to decrypt \(ct^*\) (if the check passes), while \(\mathbf {Sim}\) just aborts. 
Then the indistinguishability between \(\mathcal {A}_1\) and \(\mathbf {Sim}\) follows from the fact that without having given \(\mathcal {A}_1\) a valid key pair \((pk,sk)\) previously, all the \(\mathbf {Break}2\) queries including pk that \(\mathcal {B}\) makes will lead \(\mathcal {A}_1\) to abort. Otherwise, we can construct an adversary breaking the weak one-wayness of \(\varPi \).
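The stateful simulator described above can be sketched as follows: f is lazily sampled into a table, and \(\mathbf {Break}2\) answers only with a valid key recorded during an earlier \(\mathbf {Tamper}\) query, instead of searching exhaustively (again, the parameter interface, randomness spaces, and names are our illustrative assumptions):

```python
import random

def make_simulator(gen, enc, umcheck, rec, msg_space, rand_bits=64):
    """A stateful PPT simulator Sim for the unique-message PKE attack:
    no exhaustive search, only bookkeeping of keys B has revealed."""
    table = {}        # lazily sampled values of f: pk -> (r_g, r_e, m)
    seen_valid = {}   # pk -> a valid sk from an earlier Tamper query

    def f(pk):
        if pk not in table:
            table[pk] = (random.getrandbits(rand_bits),
                         random.getrandbits(rand_bits),
                         random.choice(msg_space))
        return table[pk]

    def Tamper(pk, sk):
        r_g, _, _ = f(pk)
        if not umcheck(pk, sk):
            return None                   # abort on invalid pairs
        seen_valid[pk] = sk               # remember a valid key for pk
        return gen(r_g)[1]                # sk' from Gen(1^lambda; r_g)

    def Break1(pk):
        r_g, r_e, m = f(pk)
        return enc(gen(r_g)[0], m, r_e)   # ct = Enc_{pk'}(m; r_e)

    def Break2(pk, m_answer, ct_star):
        _, _, m = f(pk)
        if m_answer != m or pk not in seen_valid:
            return None                   # abort instead of searching
        return rec(seen_valid[pk], ct_star)

    return Tamper, Break1, Break2
```

By the unique-message property, whenever both this simulator and the inefficient \(\mathbf {Break}2_{\lambda ,f}\) answer, they answer with the same message, which is the crux of the indistinguishability argument.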

Remark. By slightly modifying the above proof, we can also prove that if a unique-message PKE scheme satisfies statistical weak one-wayness, then there exists a simulatable attack on its WTR security.

Furthermore, it is clear that for any honestly generated ciphertext, any honestly generated secret key should lead to the same decryption result, due to correctness. Therefore, we can slightly modify the proof of Theorem 4 to show that for a weakly one-way (respectively, statistically weakly one-way) PKE scheme \((\mathsf {Gen},\mathsf {Enc},\mathsf {Dec})\), if there exists an algorithm that can efficiently check whether a secret key and a ciphertext are in the support of \(\mathsf {Gen}\) and \(\mathsf {Enc}\) (with respect to a public key) respectively, then there exists a weak simulatable attack (respectively, (standard) simulatable attack) on its WTR security.

4.4 Summary of a Negative Result on Unique-Message PKE

Since we have shown the existence of a weak simulatable attack on the WTR security of any weakly one-way unique-message PKE scheme in the last subsection, we obtain the following corollary, which is the second main result in our paper. It follows directly from Theorems 1 and 4.

Corollary 3

Let \(\varPi \) be a unique-message PKE scheme. If \(\varPi \) is weakly one-way, then there exists no black-box reduction deriving the WTR security of \(\varPi \) from the security of any restricted cryptographic game.

This result implies the impossibility of deriving the (even extremely weak) tamper-resilience of a large class of PKE schemes from any common falsifiable assumption, via black-box reductions.

5 Open Problems

Primitives Circumventing Our Results. Our impossibility results capture a wide class of tamper-resilient cryptographic primitives in the plain model, while it remains open whether primitives outside this class (secure against arbitrary tampering attacks) are achievable. One may try to circumvent our results by giving a tamper-resilient PKE scheme, which is weakly one-way but not statistically weakly one-way, under non-falsifiable assumptions. Furthermore, it might be promising to combine such a PKE scheme with an NIZK proof system against untrusted CRSs [9] to construct a tamper-resilient signature scheme, under the framework proposed by Dodis et al. [25].

Impossibility on Primitives in the CRS Model. One may also wonder whether it is possible to construct primitives captured by our negative results in the CRS model (where the existence of tamper-proof public parameters is assumed). If we strengthen the notions of statistical weak unpredictability and unforgeability, and weak one-wayness by allowing the public parameter to be a maliciously generated one (rather than an honestly sampled one), then our impossibility results also hold in this model. However, it is apparent that such security notions are not implied by standard ones. Studying PKE schemes satisfying standard security but not satisfying ones against maliciously generated public parameters might serve as a starting point.