1 Introduction

Group signatures [5, 17] extend conventional signatures to protect the signers’ identity. Signers remain anonymous within the anonymity set formed by the members of the group, i.e., the users who request to join and are accepted by the group manager. Anyone with the group public key can verify signatures. To prevent abuse of anonymity, an opener can usually re-identify the signer of any signature. This enables accountability and further processing if data needs to be more identifiable or linked, but it requires full trust in the opener to ensure privacy.

Schemes with Trusted Openers. To reduce this dependency, alternatives quickly emerged. In group signatures with Verifier Local Revocation, verifiers can keep local lists of revoked signers, without needing to open incoming signatures [10]. Traceable signatures [18, 24] add an extra trusted entity who, after opening a signature by any given member, can produce member-specific trapdoors that can be used to link signatures originating from that member. Convertibly linkable signatures remove the opener, but incorporate a party who can (non-transitively) blindly link signatures within sets of queried signatures [23]. Recently, blind variants of central opening have also been proposed [26]. Still, all these alternatives use some sort of central entity for opening or linking, which needs to be fully trusted to ensure privacy. While this trust can be distributed [13], this still gives control to a set of central entities rather than to the users.

Schemes with User-Controlled Linkability. Instead of relying on trusted parties, it may suffice to let signers control which signatures will be linkable, and when. This is also ideal from a privacy perspective, as users retain full control. In this vein, Direct Anonymous Attestation (DAA) [6, 12] and anonymous credential systems [15], also aimed at preserving signer/holder privacy, follow this approach. They enable user-controlled linkability through deterministically computed pseudonyms (derived from a scope and the user’s key) within each signature. This makes all signatures for the same scope automatically linkable; otherwise, they remain unlinkable. Such implicit linking has the drawback of being static: a signature that was decided to be unlinkable to some or all other signatures will remain unlinkable forever. Thus, use cases with even a remote probability of needing to link signatures a posteriori would require making them all linkable by default, eliminating all privacy.

Further, relying on the more privacy-friendly option of user-controlled and implicit linkability, instead of an almighty opener, makes formally defining the desired security and privacy properties of such group signatures much more challenging. In fact, to date no satisfactory security model for DAA in the form of accessible game-based security notions is known; we refer to [6, 12] for a summary of the long line of failed security notions in that respect.

Alternatively, some existing group signatures offer user-controlled a posteriori linking or opening of previously anonymous signatures. In [29], users can claim signatures by outputting their secret key, which allows anyone to test whether a signature stemmed from that user. But this is an all-or-nothing approach that immediately destroys the privacy of all the user’s signatures, and it is thus unsuitable for most realistic scenarios. The recent work by Krenn et al. [26] implements a more flexible explicit linking by enabling users to issue link proofs for two (or, in theory, more) signatures. However, their model still crucially relies on the presence of a trusted opener to model and prove the desired security properties. Thus, even if only explicit linking were needed, the scheme must allow full opening through a central entity in order to fit their model and hope for any provable security guarantees.

Ideally, one would hope for group signatures that support both implicit and explicit linking to increase utility and, for scenarios handling sensitive data, do so without trusted parties that can unilaterally remove privacy.

1.1 Our Contributions

In this paper we provide the first provably secure group signatures that are purely user-centric, i.e., where only the user can control the linkage of her signatures. To allow for the necessary flexibility, our solution supports both implicit and explicit linkability. That is, the user can make signatures linkable with respect to pseudonyms when she generates them, and also link signatures with different pseudonyms afterwards through explicit link proofs.

Security Model Without Opener, and for Implicit and Explicit Linking. Our first challenge was to provide meaningful security notions when no opener is available that can be leveraged, e.g., to express who is a valid member of the group. Instead, we take inspiration from security models for DAA [6, 12] to express group membership through linking. We define anonymity by requiring that it must not be possible to link signatures by the same user, except when she decides to make them linkable by default, or when she explicitly links them. For traceability, (1) it must not be possible to create signatures that are not traceable to any valid member of the group, and (2) it must not be possible to explicitly link signatures originating from different (possibly corrupt) users. Finally, for non-frameability we require that (1) no signature can be implicitly linked to another honest signature unless it was honestly generated by the same user – who also made both signatures linkable by default, and (2) no adversary can explicitly link honest and dishonest signatures, or honest signatures that have not been explicitly linked by their signer. Note that we give two variants for both traceability and non-frameability. This is needed due to the possibility of implicitly and explicitly linking signatures, and is a direct consequence of leveraging linkability to replace the opener. We emphasize that, to the best of our knowledge, implicit linking has not been modelled previously for group signatures – let alone in combination with explicit linking.

Sequential Link Proofs. When the pseudonymous signatures are over data with inherent order properties – e.g., time series – just re-establishing linkage is not enough. It may also be necessary to attest that the linked messages are given in the same order in which they were produced, and without omitting (possibly relevant) ones. For instance, smart vehicles in Intelligent Transportation Systems (ITSs) are required to send measurements to a data lake. There, the order of a sequence of events may be useful to detect anomalies: e.g., a vehicle reporting 35-45-30-40 l of fuel in a short timespan is probably an anomaly, while one reporting 45-40-35-30 is probably not. Consider also contact tracing systems, where pseudonyms are reused during a limited time, after which new ones are derived. Users may eventually be required to reveal their pseudonymous data spanning several of those pseudonyms, and omitting specific chunks of this data (or altering the order) may preclude effective contact tracing. In these use cases, the number of pseudonymously signed messages that may need to be linked can be expected to reach many tens (and possibly a few hundred) signatures, within short time spans. Order may also be relevant in less throughput-demanding scenarios. For instance, it has very different implications whether a person fails to pay X mortgage fees in a row or the X defaults correspond to months that are far apart in time.

This motivates our next contribution. We extend our previous model and construction to enable sequential link proofs: signers can prove that a sequence of signatures was produced in the specified order, and that no signature is being omitted. To model this, we introduce a new unforgeability property, sequentiality, ensuring that honest-then-corrupt users can neither create sequential proofs for wrongly ordered sequences nor omit signatures from them. Our extended construction builds on efficient hash-chain ideas from anonymous payment systems [27].

Efficient Construction with Batch Proofs for Linking. We give an efficient construction realizing our model. Pseudonymous signatures are computed using the scope-exclusive nym approach from DAA and anonymous credentials, where the pseudonym is deterministically derived from a scope and the same secret key in the user’s credential. This gives implicit linkage. For explicitly linking signatures, we propose a new way to batch the signatures being linked, leveraging the fact that pseudonyms are group elements that can be “aggregated”. This leads to an efficient mechanism for linking large sets of signatures.

Implementation and Comparison. To further assess the efficiency of our constructions, we implement them and report on the obtained experimental results (see Appendix A for notes on the implementation and a demo). Both the basic scheme and the sequential extension outperform the most closely related previous work [26]: we link sets of \({\sim }100\) signatures in \({\sim }40\) ms, while [26] requires \({\sim }300\) ms for linking only 2 signatures (besides requiring a trusted opener).

2 Preliminaries

Notation. \(\mathbb {G}= \langle g \rangle \) denotes a cyclic group \(\mathbb {G}\) generated by \(g\), \(a \leftarrow A(\cdot )\) denotes that \(a\) is obtained by running algorithm \(A\), \(a \leftarrow _\$ S\) means that \(a\) is picked uniformly at random from set \(S\), and \([n ]\) denotes the closed interval \([1,n ]\). \(\mathsf {H} \) and \(\mathsf {H}' \) are cryptographic hash functions. Signed messages are represented as tuples of elements. When arguing about sets of such tuples, \(\pmb {\varSigma }\) denotes a set, and \(\varSigma _i\) the i-th element in \(\pmb {\varSigma } \). \(\pmb {\varSigma } _o\) is an ordered set, and \(A_o \in _o S_o\) denotes that \(A_o\) appears in \(S_o\), respecting order.

Bilinear Maps. Let \(\mathbb {G}_1=\langle g_1 \rangle ,\mathbb {G}_2= \langle g_2 \rangle ,\mathbb {G}_T\) be three cyclic groups of prime order p, where an efficient mapping \(e: \mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) exists. The map e satisfies bilinearity, i.e., \(e(g_1^x,g_2^y) = e(g_1, g_2)^{xy}\); non-degeneracy, i.e., \(e(g_1,g_2)\) generates \(\mathbb {G}_T\); and efficiency, i.e., there exists \(\mathsf {PG} (1^\tau )\) efficiently generating bilinear groups \((p,\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T,g_1, g_2,e)\) as above, and computing \(e(a,b)\) is efficient for any \(a \in \mathbb {G}_1, b \in \mathbb {G}_2\). Moreover, we use Type-III bilinear maps [22], i.e., \(\mathbb {G}_1 \ne \mathbb {G}_2\) and there are no efficiently computable homomorphisms between them.

Hardness Assumptions. We base the security of our scheme on the well-known Discrete Logarithm and DDH assumptions [16] and on the q-SDH assumption for Type-III pairings [9], which we informally recall next.

q-SDH assumption (for Type-III pairings [9]). For a random \(\chi \leftarrow _\$ \mathbb {Z}_p\), given \(g_1 \in \mathbb {G}_1\), \(g_2 \in \mathbb {G}_2\), and the \((\mathbb {G}_1^{q+1},\mathbb {G}_2^2)\) tuple \((g_1,g_1^{\chi }, g_1^{(\chi ^2)},...,g_1^{(\chi ^q)},g_2,g_2^{\chi })\), it is computationally infeasible for any polynomial-time adversary to output a tuple \((g_1^{\frac{1}{x+\chi }},x) \in \mathbb {G}_1 \times \mathbb {Z}_p\setminus \lbrace - \chi \rbrace \).

BBS+ Signatures and Pseudonyms. We rely on the BBS+ signature scheme, proposed in [1] for Type-II pairings and adapted to Type-III pairings in [11].

We use the following convention for BBS+ operations, for some previously generated Type-III pairing group \((p,\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T,g_1,g_2,e)\):

  • Key Generation. Compute \((h_1,h_2) \leftarrow _\$ \mathbb {G}_1^2\), \(y \leftarrow _\$ \mathbb {Z}^*_p\), \(W \leftarrow g_2^y\). Set \(sk \leftarrow y\) and \(pk \leftarrow (W,h_1,h_2)\).

  • Signing. Given a message m (assumed to be in \(\mathbb {Z}_p\)), pick \(x,s \leftarrow _\$ \mathbb {Z}^*_p\) and compute \(A \leftarrow (g_1h_1^s h_2^m)^{\frac{1}{x+y}}\). The signature is the tuple \((A,x,s)\).

  • Verification. Given a signature \((A,x,s)\) over a message m, supposedly from \(pk = (W,h_1,h_2)\), check that \(e(A,Wg_2^x)= e(g_1h_1^sh_2^m,g_2)\).
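For intuition, this check accepts honestly generated signatures since \(W = g_2^y\) and \(A^{x+y} = g_1h_1^sh_2^m\), so \(e(A,Wg_2^x) = e(A,g_2^{x+y}) = e(A^{x+y},g_2) = e(g_1h_1^sh_2^m,g_2)\).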

We extend the proof of knowledge in BBS+ signatures to prove correctness of the pseudonyms that signers generate.

For pseudonyms, we follow [14]. Roughly, with the help of a hash function, pseudonyms are deterministically generated from a scope \(scp\) and a private key sk as \(\mathsf {H} (scp)^{sk}\).
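As a toy illustration of how such scope-exclusive pseudonyms induce implicit linking, the following Python sketch uses a small prime-order subgroup of \(\mathbb {Z}_q^*\) and a naive hash-to-group map. Both are insecure stand-ins chosen only for readability (a real instantiation hashes into a pairing-friendly elliptic-curve group), and all identifiers are ours, not part of the scheme.

```python
import hashlib

# Toy prime-order group: the subgroup of order p = 1019 inside Z_q^* for the
# safe prime q = 2039. Far too small to be secure; illustration only.
q, p, g = 2039, 1019, 4   # g = 4 generates the order-p subgroup

def hash_to_group(scope: bytes) -> int:
    """Naive stand-in for H(scp): hash to an exponent and raise g to it."""
    e = int.from_bytes(hashlib.sha256(scope).digest(), "big") % p
    return pow(g, e or 1, q)

def pseudonym(sk: int, scope: bytes) -> int:
    """nym = H(scp)^sk -- deterministic in (sk, scope), so scope reuse links."""
    return pow(hash_to_group(scope), sk, q)

sk_alice, sk_bob = 123, 456
# Same user, same scope: identical pseudonyms, hence implicitly linkable.
assert pseudonym(sk_alice, b"epoch-42") == pseudonym(sk_alice, b"epoch-42")
# Same user with different scopes, or different users: pseudonyms differ
# (except with small probability in this toy group) and stay unlinkable.
print(pseudonym(sk_alice, b"epoch-42") != pseudonym(sk_alice, b"epoch-43"))
print(pseudonym(sk_alice, b"epoch-42") != pseudonym(sk_bob, b"epoch-42"))
```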

Proof Protocols. We use non-interactive proofs of knowledge obtained through the Fiat-Shamir transform [21]. \(\mathsf {SPK} \lbrace (x,r): h = h_1^xh_2^r \rbrace (ctx,m)\) denotes a signature of knowledge of \((x,r)\) meeting the condition to the right of the colon, for public message \(m\), and parameters \(ctx\) to prevent malleability attacks [7]. For verification, we write \(\mathsf {SPKVerify} (\pi ,ctx,m)\), returning 1 (correct) or 0 (incorrect).
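A minimal sketch of such a signature of knowledge, instantiated as a Schnorr proof of \(\mathsf {SPK} \lbrace (y): nym = \mathsf {H} (scp)^y \rbrace (ctx,m)\) in the toy group from the previous sketch; it is only meant to convey the SPK/SPKVerify interface, not a production implementation, and all names are ours.

```python
import hashlib, secrets

q, p, g = 2039, 1019, 4   # toy prime-order group as before (illustration only)

def hash_to_group(scope: bytes) -> int:
    e = int.from_bytes(hashlib.sha256(scope).digest(), "big") % p
    return pow(g, e or 1, q)

def challenge(*parts) -> int:
    """Fiat-Shamir challenge: hash statement, commitment, context and message."""
    h = hashlib.sha256()
    for part in parts:
        h.update(repr(part).encode())
    return int.from_bytes(h.digest(), "big") % p

def spk_sign(y: int, base: int, nym: int, ctx, m: bytes):
    """SPK{(y): nym = base^y}(ctx, m), i.e., a Schnorr proof made non-interactive."""
    r = secrets.randbelow(p - 1) + 1          # random nonce in [1, p-1]
    R = pow(base, r, q)                       # commitment
    c = challenge(base, nym, R, ctx, m)       # challenge binds ctx and m
    s = (r + c * y) % p                       # response
    return (c, s)

def spk_verify(pi, base: int, nym: int, ctx, m: bytes) -> bool:
    c, s = pi
    # Recompute R = base^s * nym^{-c}; since nym has order p, nym^{-c} = nym^{p-c}.
    R = (pow(base, s, q) * pow(nym, (p - c) % p, q)) % q
    return c == challenge(base, nym, R, ctx, m)

y = 123
base = hash_to_group(b"epoch-42")
nym = pow(base, y, q)
pi = spk_sign(y, base, nym, ctx="params", m=b"message to sign")
assert spk_verify(pi, base, nym, ctx="params", m=b"message to sign")
```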

Additional Building Blocks. We rely on an append-only bulletin board \(\mathsf {BB}\) and pseudorandom functions (\(\mathsf {PRF}\)s). \(\mathsf {PRF}\)s generate pseudorandom output from a secret key and arbitrary inputs: \(\mathsf {PRF.KeyGen} (1^\tau ) \rightarrow k\) generates the key, and \(\mathsf {PRF.Eval} (k,m) \rightarrow r\) generates pseudorandomness r from key k and message m. The \(\mathsf {BB}\) is assumed to verify the data before writing it, and written data cannot be erased.
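A common way to instantiate such a \(\mathsf {PRF}\) is HMAC-SHA256; the following short Python sketch (names are ours) mirrors the KeyGen/Eval interface above.

```python
import hmac, hashlib, secrets

def prf_keygen(tau_bytes: int = 32) -> bytes:
    """PRF.KeyGen(1^tau): sample a fresh random key."""
    return secrets.token_bytes(tau_bytes)

def prf_eval(k: bytes, m: bytes) -> bytes:
    """PRF.Eval(k, m): deterministic pseudorandom output for key k and input m."""
    return hmac.new(k, m, hashlib.sha256).digest()

k = prf_keygen()
assert prf_eval(k, b"0||1") == prf_eval(k, b"0||1")   # deterministic
assert prf_eval(k, b"0||1") != prf_eval(k, b"0||2")   # distinct inputs differ (w.o.p.)
```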

3 Scheme with User-Controlled Linkability (\(\mathsf {UCL}\))

In this section we present our basic group signature scheme with user-controlled and selective linkability. We start by presenting the general syntax, then describe how the desired security properties can be formulated without the presence of an opening entity, and finally present our secure instantiation.

The core contribution of this section is the new security model that captures the desired security and privacy properties without a central (trusted) entity and allows for selective, user-centric linkability. The proposed scheme follows in most parts the standard approach of group signatures, integrates the pseudonym idea from DAA, and provides a new way to prove linkage of a batch of signatures.

3.1 Syntax

In group signatures, an issuer interacts with users who want to join the group and become group members. Members create anonymous signatures on behalf of the group, which verifiers can check without learning the signers’ identity. In our setting, the anonymity of the signer is steered via pseudonyms, generated with every signature, as well as explicit link proofs. More precisely, a \(\mathsf {UCL}\) scheme supports two types of linkability (see Fig. 1 for a pictorial representation):

  • Implicit Linkability: Every signature is accompanied with a pseudonym, generated by the user for a particular scope. Re-using the same scope leads to the same pseudonym, making all signatures generated for the same scope immediately linkable for the verifier. Pseudonymous signatures for different scopes cannot be linked, except via explicit link proofs generated by the user.

  • Explicit Linkability: After the signatures have been generated, they can be claimed and linked by the user: given a set of signatures, the user proves that she created all of them, i.e., links the signatures in the set.

Fig. 1. Implicit vs explicit linkability on signatures by the same user, controlled by the user via scopes, pseudonyms and link proofs.

We emphasize that users have full control over the scopes, which can be any arbitrary (bit)string. For instance, in the contact tracing example given in Sect. 1, where identifiers are reused for 15 min, the scope could be derived from publicly available information, such as the current epoch. Alternatively, using randomly chosen scopes would lead to unlinkable signatures.

A \(\mathsf {UCL}\) group signature scheme consists of the following algorithms:

  • \(\mathsf {Setup} (1^\tau ) \rightarrow param \): Generates the public parameters for the scheme.

  • \(\mathsf {IKGen} (param) \rightarrow (isk,ipk)\): Generates the issuer’s keypair (\(isk\),\(ipk\)).

  • \(\langle \mathsf {Join} (ipk), \mathsf {Issue} (ipk,isk) \rangle \rightarrow (usk, \bot )\): To become a member of the group, the user runs the interactive join protocol with the issuer. If successful, the user obtains a user secret key \(usk\).

  • \(\mathsf {Sign} (ipk, usk, m, scp) \rightarrow (\sigma , nym)\): Signs a message \(m\) w.r.t. scope \(scp\) via user secret key \(usk\). The output is a pseudonym \(nym\) and group signature \(\sigma \).

  • \(\mathsf {Verify} (ipk, \varSigma ) \rightarrow 0/1\): On input a group public key \(ipk \) and tuple \(\varSigma = (m,scp,\sigma ,nym)\), containing a group signature \(\sigma \) and a pseudonym \(nym\), purportedly corresponding to \(m\) and \(scp\), returns 1 when the tuple is valid and 0 otherwise.

  • \(\mathsf {Link} (ipk, usk, lm, \pmb {\varSigma }) \rightarrow \pi _l/\bot \): On input a set of signature tuples \(\pmb {\varSigma } = \lbrace \varSigma _i\rbrace _{i \in [n ]}\) and a user secret key \(usk\), produces a proof \(\pi _l\) that these signatures are linked, or \(\bot \) indicating failure. The link proof is also bound to a specific link message \(lm\), which can be used, e.g., to ensure freshness of the proof.

  • \(\mathsf {VerifyLink} (ipk, lm, \pmb {\varSigma }, \pi _l) \rightarrow 0/1\): Returns 1 if \(\pi _l\) is a valid proof for the statement that \(\pmb {\varSigma } = \lbrace \varSigma _i \rbrace _{i \in [n ]}\) were produced by the same signer and for link message \(lm\), or 0 otherwise.
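Purely as a reading aid, the algorithm interfaces above can be summarized as the following Python type sketch; all type names are illustrative placeholders, and the interactive \(\langle \mathsf {Join}, \mathsf {Issue} \rangle \) protocol is collapsed into a single call for brevity.

```python
from typing import Any, Optional, Protocol, Sequence, Tuple

# Illustrative placeholders; concrete types depend on the instantiation.
Params = IPK = ISK = USK = Sigma = Nym = LinkProof = Any
SigTuple = Tuple[bytes, bytes, Sigma, Nym]        # (m, scp, sigma, nym)

class UCL(Protocol):
    def Setup(self, tau: int) -> Params: ...
    def IKGen(self, param: Params) -> Tuple[ISK, IPK]: ...
    def JoinIssue(self, ipk: IPK, isk: ISK) -> USK: ...   # interactive in the scheme
    def Sign(self, ipk: IPK, usk: USK, m: bytes, scp: bytes) -> Tuple[Sigma, Nym]: ...
    def Verify(self, ipk: IPK, sig: SigTuple) -> bool: ...
    def Link(self, ipk: IPK, usk: USK, lm: bytes,
             sigs: Sequence[SigTuple]) -> Optional[LinkProof]: ...
    def VerifyLink(self, ipk: IPK, lm: bytes,
                   sigs: Sequence[SigTuple], proof: LinkProof) -> bool: ...
```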

We delay the definition of the correctness properties for a \(\mathsf {UCL}\) scheme until after introducing some extra notation in the next section.

3.2 Security Model

A \(\mathsf {UCL}\) group signature scheme should provide the following privacy and security properties: For privacy, signatures should not leak anything about the signer’s identity beyond what is exposed by the user through implicit and explicit linkability (anonymity). Security is expressed through a number of properties covering the desired unforgeability guarantees: signatures should only be created by users that have correctly joined the group (traceability), and even a corrupt issuer should not be able to impersonate honest users (non-frameability).

Oracles and State. Our definitional framework closely follows the existing work on group signatures, and in particular the work of [5] on the security of dynamic schemes, which makes use of a number of oracles and global variables that allow the adversary to engage with honest parties, and which we adjust to our setting.

  • \(\mathsf {ADDU}\): Runs \(\langle \mathsf {Join},\mathsf {Issue} \rangle \) between an honest user and an honest issuer, allowing the adversary to enroll honest users. The new user key is stored as \(\mathsf {USK} [\mathsf {uid} ]\).

  • \(\mathsf {SNDU}\) : (The SeND to User oracle.) Runs the \(\mathsf {Join}\) process on behalf of an honest user, against an adversarially controlled issuer. The new user key is stored as \(\mathsf {USK} [\mathsf {uid} ]\).

  • \(\mathsf {SNDI}\) : (The SeND to Issuer oracle.) Runs the \(\mathsf {Issue}\) process on behalf of an honest issuer, allowing the adversary to join in the role of corrupt users in games with an honest issuer. Updates \(\mathsf {transcript} [\mathsf {uid} ]\) with a transcript of the exchanged messages.

  • \(\mathsf {SIGN}\)/\(\mathsf {LINK}\) : Allow the adversary to obtain honest users’ signatures/link proofs for messages/signatures of his choice (with restrictions in the anonymity game).

  • \(\mathsf {CH\text {-}SIGN}_b\)/\(\mathsf {CH\text {-}LINK}_b\) : Challenge oracles in the anonymity game that allow the adversary to get signatures and link proofs for a challenge user \(\mathsf {uid} _b\).

Figure 2 presents the details of the oracles used in our games: the standard \(\mathsf {ADDU}\), \(\mathsf {SNDU}\), and \(\mathsf {SNDI}\) oracles as defined in [5], and \(\mathsf {SIGN}\) and \(\mathsf {CH\text {-}SIGN}_b\), which we modify from [5], and \(\mathsf {LINK}\) and \(\mathsf {CH\text {-}LINK}_b\), which are specific to our model.

Table 1. Information stored by the global state variables.

Helper Function \(\mathsf {Identify}\). In some security games we need to determine if a certain user secret key was used to create a given signature. For this we follow DAA work [6, 12] and assume the availability of a function \(\mathsf {Identify} (ipk,usk,\varSigma ) \rightarrow 0/1\), returning 1 when \(\varSigma =(m,scp,\sigma ,nym)\) was produced by \(usk \), or 0 otherwise.

We assume the following behaviour of \(\mathsf {Identify} \): for all \((isk,ipk) \leftarrow \mathsf {IKGen} (param)\), and all \(\varSigma =(m,scp,\sigma ,nym)\) where \(\mathsf {Verify} (ipk, \varSigma ) = 1\) there must exist exactly one \(usk \) (from the user secret key space induced by \(\langle \mathsf {Join} (ipk), \mathsf {Issue} (ipk,isk) \rangle \)) such that \(\mathsf {Identify} (ipk,usk,\varSigma ) = 1\).

We use the function for keys of both honest and corrupt users. Abusing notation, we write \(\mathsf {Identify} (\mathsf {uid},\varSigma )\) to indicate that \(\mathsf {Identify} \) is run for the secret key \(usk \) of user \(\mathsf {uid} \) (where \(ipk \) is clear from the context). For honest users, \(\mathsf {Identify}\) simply uses \(\mathsf {USK} [\mathsf {uid} ]\), while keys of corrupt users can be extracted from the join transcript. For the latter, note that \(\mathsf {Identify}\) is only used in games where the issuer is honest, i.e., such a transcript is available. In our concrete scheme we exploit the random oracle to extract a user’s keys via rewinding. If online-extractable proofs are used, then \(\mathsf {Identify}\) will also receive the trapdoor information as input.
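Looking ahead to our concrete instantiation (Sect. 3.3), where pseudonyms have the form \(\mathsf {H} (scp)^y\), \(\mathsf {Identify}\) boils down to re-computing the pseudonym from the candidate key. A minimal Python sketch in the toy group of the earlier sketches (all names are ours):

```python
import hashlib

q, p, g = 2039, 1019, 4   # toy prime-order group (illustration only)

def hash_to_group(scope: bytes) -> int:
    e = int.from_bytes(hashlib.sha256(scope).digest(), "big") % p
    return pow(g, e or 1, q)

def identify(y: int, sigma_tuple) -> bool:
    """Identify(ipk, usk, Sigma): does usk's key y explain the pseudonym in Sigma?"""
    m, scp, sigma, nym = sigma_tuple
    return pow(hash_to_group(scp), y, q) == nym

y = 123
nym = pow(hash_to_group(b"epoch-42"), y, q)
assert identify(y, (b"msg", b"epoch-42", None, nym))
assert not identify(999, (b"msg", b"epoch-42", None, nym))
```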

Fig. 2. Detailed oracles available in our model.

We now formally capture the expected security properties.

Correctness. We formalize the correctness of \(\mathsf {Sign}\) and correctness of \(\mathsf {Link}\) properties in the full version [19].

Anonymity. We adapt the classic privacy notion to our setting. It expresses that signatures must not reveal anything about the signer’s identity beyond what was intended by her, even when the issuer is corrupt. The adversary plays the role of the issuer and can trigger honest users to join, sign and link. Eventually, he chooses two honest users \(\mathsf {uid} ^* _0\) and \(\mathsf {uid} ^* _1\), and one becomes the challenge user \(\mathsf {uid} ^* _b\). The adversary can receive signatures and link proofs of \(\mathsf {uid} ^* _b\) (via \(\mathsf {CH\text {-}SIGN}_b\) and \(\mathsf {CH\text {-}LINK}_b\)) and must determine b better than by random guessing.

As our signatures support user-controlled linkability, we must be careful to exclude trivial wins that leverage it. There are two ways in which the adversary can trivially win. First, by leveraging implicit linkability: signatures by the same user and with the same scope are directly linkable. The adversary could exploit this by calling \(\mathsf {CH\text {-}SIGN}_b\) and \(\mathsf {SIGN}\) (the latter for \(\mathsf {uid} ^* _0\) or \(\mathsf {uid} ^* _1\)) with the same scope. Second, the adversary can leverage explicit linkability by obtaining link proofs via \(\mathsf {LINK}\) or \(\mathsf {CH\text {-}LINK}_b\) for a set of signatures that contains both challenge signatures, obtained through \(\mathsf {CH\text {-}SIGN}_b\), and non-challenge signatures (for a challenge user), obtained from \(\mathsf {SIGN}\).

Definition 1

(Anonymity). A group signature scheme \(\mathsf {UCL}\) with user-controlled linkability is anonymous if for all ppt adversaries \(\mathcal {A}\), the following is negligible in \(\tau \): \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {UCL}}^{\mathsf {anon\text {-}}1}(\tau ) = 1 ]- \Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {UCL}}^{\mathsf {anon\text {-}}0}(\tau ) = 1 ]|\).


Traceability. This property covers the desired unforgeability guarantees for corrupt users of groups with an honest issuer. Intuitively, it guarantees that only legitimate members of the group are able to generate valid signatures on behalf of that group. The traditional approach in group signature models [5, 26] is to ask the adversary for a forgery and leverage the trusted opener to check whether the forged signature opens to any user that has joined the group.

As our setting does not have such an opening entity, we cannot follow this approach and instead take inspiration from the DAA security models [6, 12]. Therein, one uses the implicit availability of an \(\mathsf {Identify}\) function (introduced above), which allows checking whether a given signature belongs to a certain user secret key (which we know for honest users, and can extract for corrupt ones). The adversary wins if he can produce valid signatures (or link proofs) that cannot be traced back via \(\mathsf {Identify}\) to any member of the group. This alone would not be sufficient though, as our signatures also carry some information in their implicit and explicit linkability, which an adversary should not be able to manipulate either. That is, the adversary also wins if he can produce more mutually unlinkable standalone signatures (for the same scope) than the number of corrupt users he controls, or if he manages to produce a valid link proof for signatures of different corrupt users.

We group these properties according to the statement that the adversary has to forge: signature traceability covers forgeries of standalone signatures, and link traceability works analogously for the link proofs.

Definition 2

(Signature Traceability). A group signature scheme \(\mathsf {UCL}\) with user-controlled linkability provides signature traceability if for all ppt adversaries \(\mathcal {A}\), \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {UCL}}^{\mathsf {sign\text {-}trace}}(\tau ) = 1 ]|\) is negligible in \(\tau \).

Definition 3

(Link Traceability). A group signature scheme \(\mathsf {UCL}\) with user-controlled linkability provides link traceability if for all ppt adversaries \(\mathcal {A}\), the following is negligible in \(\tau \): \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {UCL}}^{\mathsf {link\text {-}trace}}(\tau ) = 1 ]|\).


Non-frameability. This property guarantees that an honest user cannot be framed by the adversary, even when the issuer is corrupt. In our setting, such framing occurs when signatures of an honest user become linkable to signatures that she has not generated. As we support two different types of linkability, we again need a dedicated variant of the property for each of them. The first captures non-frameability for standalone signatures, i.e., via implicit linking. In this case, the adversary can only frame an honest user by producing a signature that holds for the same pseudonym that an honest signature generated for that scope. Linkability (and thus framing attacks) across scopes is not possible and thus does not have to be considered here; such linkage for different scopes is only possible via explicit link proofs. The second property we define captures non-frameability for these proofs, which the adversary can leverage to frame an honest user in two ways: producing a proof that (1) links honestly generated signatures with adversarial ones, or (2) links honestly generated signatures by the same user even though the honest user did not create that proof – i.e., it is the proof itself that is forged and aims to impersonate the honest user.

Definition 4

(Signature Non-frameability). A group signature scheme \(\mathsf {UCL}\) with user-controlled linkability is secure against signature framing if for all ppt adversaries \(\mathcal {A}\), the following is negligible in \(\tau \): \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {UCL}}^{\mathsf {sign\text {-}frame}}(\tau ) = 1 ]|\).

Definition 5

(Link Non-frameability). A group signature scheme \(\mathsf {UCL}\) with user-controlled linkability is secure against link framing if for all ppt adversaries \(\mathcal {A}\), the following is negligible in \(\tau \): \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {UCL}}^{\mathsf {link\text {-}frame}}(\tau ) = 1 ]|\).


Definition 6

(Security of \(\mathsf {UCL}\)). A group signature scheme \(\mathsf {UCL}\) with user-controlled linkability is secure if it ensures the previous anonymity, traceability and non-frameability properties.

3.3 Construction

We now present our scheme satisfying the desired security and privacy properties. The core of our construction follows the standard approach of group signatures (see, e.g., [8]): during join, users receive a membership credential from the issuer, and signing is essentially a proof of knowledge of such a credential. We use BBS+ signatures for these blindly issued membership credentials.

Adding implicit linkability: Whereas standard group signatures usually include an encryption of the user’s identity (for opening) in her signature, we use the pseudonym idea of DAA and anonymous credentials instead [6, 12, 14] and, specifically, of [11]. That is, when creating a signature, the user also reveals a pseudonym \(nym \leftarrow \mathsf {H} (scp)^y\) for her key y and a particular scope \(scp \). Clearly, these pseudonyms are scope-exclusive, i.e., there is only one valid pseudonym per scope and user key [14]. The user also proves that she has computed the pseudonym from her key.

Adding explicit linkability: The existing solution for link proofs [14, 26] of signatures with different pseudonyms is to let the user provide a fresh proof that all pseudonyms are based on the same user key. So far, this approach has been proposed for linking only two signatures, and its cost grows linearly when used for many signatures. For our proofs, we instead use the observation that the individual pseudonyms associated with the signatures can be aggregated into a “meta-nym” \(\overline{nym} = \prod _{i \in [n ]} nym _i = \prod _{i \in [n ]} \mathsf {H} (scp _i)^y\). That is, the user can simply prove that she knows the secret key y such that \(\overline{nym} = \overline{hscp} ^y\), where \(\overline{nym} \) and \(\overline{hscp} =\prod _{i \in [n ]} \mathsf {H} (scp _i)\) are uniquely determined by the signatures.
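In short, the correctness of this batching only uses that the group is commutative: \(\prod _{i \in [n ]} \mathsf {H} (scp _i)^y = (\prod _{i \in [n ]} \mathsf {H} (scp _i))^y = \overline{hscp} ^y\), so a single proof of knowledge of \(y\) for the pair \((\overline{hscp},\overline{nym})\) suffices, and its size is independent of \(n\).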

We stress that we do not claim novelty of the main parts of the group signatures. The core contribution here is (1) the simple trick for making efficient batched link proofs, and (2) making the pseudonym idea of credentials and DAA also formally available for group signatures.

Our Construction \({\varPi _\mathsf {UCL}} \). Our concrete construction works as follows:

\(\mathsf {Setup} (1^\tau ) \rightarrow param \). Generates a bilinear group \((p,\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T,g_1, g_2,e) \leftarrow \mathsf {PG} (1^\tau )\) and two further generators \(h_1, h_2 \in \mathbb {G}_1\) (for the BBS+ credentials).

\(\mathsf {IKGen} (param) \rightarrow (isk,ipk)\). Outputs \(isk \leftarrow _\$ \mathbb {Z}_p^*\) and \(ipk \leftarrow g_2^{isk}\).

\(\langle \mathsf {Join} (ipk), \mathsf {Issue} (ipk,isk) \rangle \rightarrow (usk, \bot )\). This interactive protocol lets the user blindly obtain a BBS+ signature by the issuer on her secret key y:

  • Issuer: Sends a random nonce \(n \leftarrow _\$ \mathbb {Z}^*_p\) to the user.

  • User: Picks \(y \leftarrow _\$ \mathbb {Z}^*_p\), computes \(Y \leftarrow h_1^y\) and \(\pi _Y \leftarrow \mathsf {SPK} \lbrace (y) : Y = h_1^y\rbrace ((param,h_1,Y),n)\), and sends \((Y,\pi _Y)\) back to the issuer.

  • Issuer: Only proceeds if \(\pi _Y\) is valid. Computes a BBS+ signature on y as \(x,s \leftarrow _\$ \mathbb {Z}^*_p\), \(A \leftarrow (Yh_2^s g_1)^{1/(isk +x)}\). Sends \((A,x,s)\) to the user.

  • User: If \(A \ne 1_{\mathbb {G}_1}\) and \(e(A,g_2)^x \cdot e(A,ipk) = e(g_1Yh_2^s,g_2)\), outputs \(usk \leftarrow (A,x,y,s)\).
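For intuition, this acceptance check succeeds for an honestly issued credential: since \(ipk = g_2^{isk}\) and \(A^{isk+x} = g_1Yh_2^s\), we have \(e(A,g_2)^x \cdot e(A,ipk) = e(A,g_2^{x+isk}) = e(A^{x+isk},g_2) = e(g_1Yh_2^s,g_2)\).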

\(\mathsf {Sign} (ipk,usk,m,scp) \rightarrow (\sigma , nym)\). To sign a message \(m\) for scope \(scp \), the user generates the pseudonym \(nym \leftarrow \mathsf {H} (scp)^y\) and computes a proof that the pseudonym was computed for a key that she has a BBS+ credential on, including the message m in the Fiat-Shamir hash of the proof.

  • Parse \(usk \) as \((A,x,y,s)\).

  • Compute the pseudonym as: \(nym \leftarrow \mathsf {H} (scp)^y\).

  • Re-randomize the BBS+ credential as \(r_1,r_2 \leftarrow _\$ \mathbb {Z}_p^*\), \(r_3 \leftarrow r_1^{-1}\) and \(s' \leftarrow s - r_2r_3\), \(A' \leftarrow A^{r_1}\), \(\hat{A} \leftarrow (A')^{-x}(g_1h_1^yh_2^s)^{r_1}\), \(d \leftarrow (g_1h_1^yh_2^s)^{r_1}h_2^{-r_2}\).

  • Compute \(\pi _\sigma \leftarrow \mathsf {SPK} \lbrace (x, y, r_2, r_3, s'): nym = \mathsf {H} (scp)^y \wedge \hat{A}/d = (A')^{-x}h_2^{r_2} \wedge g_1h_1^y = d^{r_3}h_2^{-s'} \rbrace (ctx,m)\) for \(ctx \leftarrow (param,A',\hat{A},d,nym)\).

  • \(\sigma \leftarrow (A',\hat{A},d,\pi _{\sigma })\). Return \((\sigma ,nym)\).

\(\mathsf {Verify} (ipk,\varSigma )\). Parses \(\sigma \) in \(\varSigma \) as \((A',\hat{A},d,\pi _{\sigma })\), checks that \(A' \ne 1_{\mathbb {G}_1}\) and \(e(A',ipk) = e(\hat{A},g_2)\), and outputs 1 if the \(\mathsf {SPK} \) in \(\varSigma \) is valid for message m and scope \(scp \) (and 0 otherwise).
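To see why the pairing check accepts honestly generated signatures, note that \(A^{x+isk} = g_1h_1^yh_2^s\), so \((g_1h_1^yh_2^s)^{r_1} = (A')^{x+isk}\) and hence \(\hat{A} = (A')^{-x}(A')^{x+isk} = (A')^{isk}\), which gives \(e(A',ipk) = e(A',g_2^{isk}) = e(\hat{A},g_2)\).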

\(\mathsf {Link} (ipk,usk,lm,\pmb {\varSigma }) \rightarrow \pi _l/\bot \). Linking signatures is done by batching all \(nym\)s and scopes into \(\overline{nym}\) and \(\overline{hscp}\), and proving knowledge of the discrete logarithm of \(\overline{nym}\) with respect to \(\overline{hscp}\). The link message \(lm\) is included in the hash of the proof.

  • Parse \(usk \) as \((A,x,y,s)\), and \(\pmb {\varSigma } \) as \(\lbrace \varSigma _i = (m _i,scp _i,\sigma _i,nym _i) \rbrace _{i \in [n ]}\).

  • If \(\exists i \in [n ]~\text {s.t.}~\mathsf {H} (scp _i)^y \ne nym _i \), or \(\mathsf {Verify} (ipk,\varSigma _i) = 0\), return \(\bot \).

  • Set \(ctx \leftarrow (param,\lbrace scp _i \rbrace _{i \in [n ]}, \lbrace nym _i \rbrace _{i \in [n ]})\).

  • Compute \(\overline{hscp} \leftarrow \prod _{i \in [n ]} \mathsf {H} (scp _i)\) and \(\overline{nym} \leftarrow \overline{hscp} ^y\).

  • Output \(\pi _{l} \leftarrow \mathsf {SPK} \lbrace (y) : \overline{nym} = \overline{hscp} ^y \rbrace (ctx,lm)\).

\(\mathsf {VerifyLink} (ipk,lm,\pmb {\varSigma },\pi _{l}) \rightarrow 0/1\). The verifier recomputes the meta-scope \(\overline{hscp} \) and meta-nym \(\overline{nym} \) from the individual signatures, verifies all signatures and \(\pi _{l}\):

  • Parse \(\pmb {\varSigma } \) as \(\lbrace \varSigma _i = (m _i,scp _i,\sigma _i,nym _i) \rbrace _{i \in [n ]}\).

  • If \(\exists i \in [n ]~\text {s.t.}~\mathsf {Verify} (ipk,\varSigma _i) = 0\), return 0.

  • If \(\exists i \ne j \in [n ]~\text {s.t.}~scp _i=scp _j \wedge nym _i \ne nym _j\), return 0.

  • \(\overline{hscp} =\prod _{i \in [n ]} \mathsf {H} (scp _i)\), \(\overline{nym} =\prod _{i \in [n ]} nym _i\).

  • Output result of verifying \(\pi _{l} \) for \(\overline{hscp} \) and \(\overline{nym} \).
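To make the batching step concrete, the following Python sketch (same toy group as in the Sect. 2 sketches; all identifiers are ours) aggregates the pseudonyms and hashed scopes of \({\sim }100\) signatures and checks the relation \(\overline{nym} = \overline{hscp} ^y\) that, in the real scheme, the user proves in zero knowledge with a single \(\mathsf {SPK}\).

```python
import hashlib

q, p, g = 2039, 1019, 4   # toy prime-order group (illustration only; not secure)

def hash_to_group(scope: bytes) -> int:
    e = int.from_bytes(hashlib.sha256(scope).digest(), "big") % p
    return pow(g, e or 1, q)

def pseudonym(y: int, scope: bytes) -> int:
    return pow(hash_to_group(scope), y, q)

def batch(scopes, nyms):
    """Aggregate individual scopes/pseudonyms into (hscp_bar, nym_bar)."""
    hscp_bar = nym_bar = 1
    for scp, nym in zip(scopes, nyms):
        hscp_bar = (hscp_bar * hash_to_group(scp)) % q
        nym_bar = (nym_bar * nym) % q
    return hscp_bar, nym_bar

y = 123
scopes = [f"scope-{i}".encode() for i in range(100)]
nyms = [pseudonym(y, scp) for scp in scopes]

hscp_bar, nym_bar = batch(scopes, nyms)
# The user would now output SPK{(y): nym_bar = hscp_bar^y}(ctx, lm); here we
# simply check that the batched relation indeed holds.
assert nym_bar == pow(hscp_bar, y, q)
```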

3.3.1 Security of Our Construction

Theorem 1

Assuming \(\mathsf {SPK}\) is zero-knowledge and simulation-sound, our construction is secure under the discrete logarithm, DDH, and q-SDH assumptions, in the random oracle model for \(\mathsf {H}\) and \(\mathsf {SPK}\).

Proof Sketch. Under the DDH assumption [28], anonymity follows from zero-knowledgeness and simulation-soundness of the \(\mathsf {SPK}\)s, and the fact that pseudonyms are indistinguishable from random when different scopes are used.

We realize \(\mathsf {Identify}\) with the help of the pseudonyms. Given a signature \((m,scp,\sigma ,nym)\), \(\mathsf {Identify}\) fetches y from the \(usk\) of the specified \(\mathsf {uid}\) and, if \(\mathsf {H} (scp)^y=nym \), returns 1; else, it returns 0. Scope-exclusiveness of pseudonyms ensures the required uniqueness [14]. Then, signature traceability follows from the unforgeability of the BBS+ credentials and the zero-knowledgeness and soundness of the \(\mathsf {SPK}\)s: if the adversary produces, for the same scope, more unlinkable signatures than there are corrupt users, or a signature from a non-member, we extract a forged BBS+ credential and can break the q-SDH assumption [11]. Winning condition 1 of link traceability is shown similarly. For condition 2, soundness of \(\mathsf {SPK}\) ensures that the individual signatures and the link proof are valid discrete logarithm proofs. Also, by the uniqueness property of pseudonyms, no two nyms in the same link proof can have different values if derived from the same \(scp\). This prevents malleability attacks: e.g., corrupt users joining with \(y=a\) and \(y=b-a\) and using nyms derived from those keys and the same \(scp\) in the same link proof. Thus, an adversary can only try to subvert the proof with nyms derived from different scopes. But this requires finding non-trivial roots of an equation of the form \(g^{\alpha _1y_1}\cdots g^{\alpha _ny_n}=1\), where the \(y_i\)’s are controlled by the adversary, but the \(\alpha _i\)’s are not, as the \(g^{\alpha _i}\)’s are produced by \(\mathsf {H}\) (a random oracle). We show that a successful adversary can be used to break the discrete logarithm assumption.

For signature non-frameability, we rely on the uniqueness property of the pseudonyms and on the zero-knowledgeness and soundness of \(\mathsf {SPK}\). We break the discrete logarithm assumption given an adversary who forges a signature with the same scope and nym as a signature of an honest user. For link non-frameability, we rely on the zero-knowledgeness and soundness of \(\mathsf {SPK}\). First, a similar argument as in traceability ensures that the link proof must be over the same exponents. We leverage this to embed a DL challenge into the nyms and link proofs of an honest user. If the adversary forges a signature (for winning condition 1) or a link proof (for winning condition 2) for this user, we can extract a solution to the challenge.

The full proofs are given in the full version of this work [19].

3.3.2 Leveraging a Trusted Bulletin Board

Our \(\mathsf {UCL} \) group signatures target a setting where signatures are generated and collected in a pseudonymous manner, and where linkability can still be refined later on by the users. Such a setting implicitly assumes the storage and availability of the originally exposed group signatures, e.g., in the form of a central data lake that collects all individual signatures. In applications where the data lake is trusted by the verifiers (or even maintained by them), we can leverage this to improve the efficiency of our scheme. For clarity, we refer to such a trusted data lake, and the additional functionality it must provide, as a bulletin board \((\mathsf {BB})\), which can be used as follows:

  • All signatures \(\varSigma _i\) are sent to the \(\mathsf {BB} \), which verifies them and appends them if valid.

  • \(\mathsf {Link}\) and \(\mathsf {VerifyLink}\) no longer check the validity of all \(\varSigma _i\) in \(\pmb {\varSigma } \), but simply check whether all signatures are in the \(\mathsf {BB} \).

By using such a trusted \(\mathsf {BB} \) we can improve the efficiency of \(\mathsf {Link}\) and \(\mathsf {VerifyLink}\) significantly – of course at the price of trusting a central entity again. This trust assumption would be necessary for the anonymity, link traceability and link non-frameability properties. However, the functionality of the \(\mathsf {BB} \) can easily be distributed, e.g., using a blockchain; or the trust can be enforced and verified via regular audits where verifiers randomly pick signatures in the \(\mathsf {BB} \) and check their validity. Thus, we believe that such a trust assumption is much more relaxed than trusting an entity that can single-handedly revoke the anonymity of all users.

Requirements on long-term storage capacity of the bulletin board depend on the use case. However, it seems reasonable to assume that, for most real world settings, a maximum timespan for storing past signatures can be established.

4 Scheme with Sequential Linkability (\(\mathsf {sUCL}\))

We extend our basic \(\mathsf {UCL}\) scheme to allow for sequential link proofs. These sequential proofs target a setting where the originally signed (and unlinkable) data has an inherent order, e.g., time series data when sensors or vehicles continuously upload their measurements into a data lake. While the data is collected in unlinkable form, the eventual subsequent link proof must re-establish not only the correlation but also the order of a selected subset in an immutable manner.

We start by describing the minor syntax changes needed for our sequential group signatures (\(\mathsf {sUCL}\)), and then discuss the additional security property we want such a \(\mathsf {sUCL}\) scheme to achieve. Roughly, when making a sequential link proof, a corrupt user should not be able to swap, omit or insert signatures within the selected interval – yet the proof neither asserts nor reveals anything about signatures outside the proven interval. For this sequentiality property, we consider security against honest-then-corrupt users. While this may seem too lenient, note that it fits many real world applications where signing is an automatic process performed in the background by some device or application. In those cases, the need to alter sequences will only arise after the signatures have been created and sent. But, as described, the produced signatures – which contain extra information to enable proving order – are assumed to be stored in a data lake. Then, eventually, users have to make some claim that involves proving order with respect to those previously stored signatures. This limits the options of malicious users. E.g., assume signatures \(\varSigma _1, \varSigma _2\) and \(\varSigma _3\) are produced in that order (i.e., first \(\varSigma _1\), then \(\varSigma _2\) and finally \(\varSigma _3\)), but a malicious user \(\mathcal {A}\) wants to prove the reverse order. Then, \(\mathcal {A}\) needs to commit to that strategy before sending the signatures, by altering the order information embedded in the signatures accordingly. Our argument is that, in many real world cases, \(\mathcal {A}\) will not know which order he will want to prove in the future. For instance, in a contact tracing scenario (for a pandemic), malicious users will not know which order they are interested in proving until after learning which contact turned out to be the risky one.

Moreover, which specific alteration might be needed would also depend on the originally produced (and signed) data, and uninformed/random alterations may very well be useless or even counterproductive for the purposes of a malicious user. Nevertheless, even modeling this weak property requires a non-trivial approach. In Sect. 6, we give some insight about what seems to be possible beyond the honest-then-corrupt approach.

Finally, we present a simple extension to our \({\varPi _\mathsf {UCL}}\) scheme that uses the trusted bulletin board sketched in Sect. 3.3.2 and includes a hidden hash chain in the group signatures, which allows re-establishing the order of signatures.

Syntax of \(\mathsf {sUCL}\). The signatures, despite being unlinkable per se, must now have an implicit order that can be recovered and verified through \(\mathsf {SLink} \) and \(\mathsf {VerifySLink} \), respectively. Abusing notation, we consider the set of signatures \(\pmb {\varSigma } _o\) to be given as an ordered set, and the proof and verification are done with respect to this order. Further, to allow signatures to have an implicit order, we need to turn \(\mathsf {SSign} \) into a stateful algorithm. That is, in addition to the standard input, it also receives a state \(st \) and outputs an updated state \(st '\). We model that the state is initially set together with \(usk \) during the \(\mathsf {Join} \) protocol. In summary, a \(\mathsf {sUCL}\) scheme follows the \(\mathsf {UCL}\) syntax from Sect. 3.1 with the following modifications:

  • \(\langle \mathsf {Join} (ipk), \mathsf {Issue} (ipk,isk) \rangle \rightarrow ((usk,st), \bot )\): Initializes user state \(st \).

  • \(\mathsf {SSign} (ipk, usk, st, m, scp) \rightarrow ((\tilde{\sigma }, nym), st ')\): Stateful sign algorithm.

  • \(\mathsf {SLink} (ipk, usk, lm, \pmb {\varSigma } _o) \rightarrow \pi _{seq}/\bot \): Sequential link proof for the ordered set \(\pmb {\varSigma } _o\).

  • \(\mathsf {VerifySLink} (ipk, lm, \pmb {\varSigma } _o, \pi _{seq}) \rightarrow 0/1\): Verifies \(\pi _{seq} \) w.r.t. the order in \(\pmb {\varSigma } _o\).

4.1 Security Model for \(\mathsf {sUCL}\)

We want the \(\mathsf {sUCL} \) scheme to have (essentially) the same traceability, non-frameability and anonymity properties as in Sect. 3.2—and additionally guarantee the correctness and security of the re-established sequential order.

Traceability and Non-frameability. These properties cover the security expected through the controlled linkage (not order) and only need minor adjustments to cater for the changed syntax: In the games, we use \(\mathsf {SSIGN}\)/\(\mathsf {SLINK}\) instead of \(\mathsf {SIGN}\)/\(\mathsf {LINK}\).

4.1.1 Sequentiality

This property captures the security we can expect from proofs that reveal the sequential order of several signatures issued by a same user. Namely, when a user makes a sequential link proof for an ordered set \(\pmb {\varSigma } _o = \varSigma _1, \dots , \varSigma _n\), we want to ensure that \(\varSigma _1, \dots , \varSigma _n\) have occurred indeed in that order and that no signature is omitted or inserted. The latter prevents attacks where a corrupt user tries to “hide” or add certain signatures, e.g., when a driver is asked to reveal the speed measurements from a certain time interval and wants to omit the moment she was speeding.

We follow the classic unforgeability style of definition and ask the adversary to output a forged link proof with an incorrect sequence. Clearly, such a definition needs to be able to capture what the “right order” of signatures is, in order to quantify whether a forgery violates that order or not. To do so, we opted for a two-stage game where the adversary can engage with honest users and make them sign (and link) messages of his choice. This ensures that we know the correct order in which the signatures are generated. Eventually, the adversary picks one of the honest users \(\mathsf {uid} ^* \), upon which \(\mathsf {uid} ^* \) becomes corrupted and the adversary receives her secret key and current state. The adversary wins if he outputs a valid sequential link proof that violates the sequence produced by the originally honest user, e.g., re-orders, omits or inserts signatures.

Clearly we must allow the adversary to possibly include maliciously generated signatures in his forgery, but must be careful to avoid trivial wins: as soon as we give the adversary the secret key of \(\mathsf {uid} ^* \) he can trivially (re-)generate signatures on behalf of the honest user. Thus, we ask the adversary to commit to a set of maliciously generated signatures \(\pmb {\varSigma } '\) before corrupting \(\mathsf {uid} ^* \) and request that his link forgery for alleged ordered signatures \(\pmb {\varSigma } ^*\) must be a subset of \(\pmb {\varSigma } ' \cup \mathsf {SIG} [\mathsf {uid} ^* ]\).

Definition 7

(Sequentiality). A group signature scheme \(\mathsf {sUCL}\) with user-controlled sequential linkability ensures sequentiality if for all ppt adversaries \(\mathcal {A}\), the following is negligible in \(\tau \): \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {sUCL}}^{\mathsf {sequential}}(\tau ) = 1 ]|\).


4.1.2 Anonymity

In the basic scheme (\(\mathsf {UCL}\)), we defined anonymity with the typical approach: the adversary first picks two honest users and must then guess which one is used to produce challenge signatures and link proofs. In \(\mathsf {UCL}\), we just needed to prevent the adversary from leveraging implicit and explicit linkability. This boils down to not allowing the reuse of scopes between calls to \(\mathsf {CH\text {-}SIGN}_b\) and \(\mathsf {SIGN}\) (for challenge users), and not allowing link queries over signatures produced by both \(\mathsf {CH\text {-}SIGN}_b\) and \(\mathsf {SIGN}\) (again, for challenge users).

In the sequential extension (\(\mathsf {sUCL}\)), the idea is still the same, i.e., the adversary has to guess which of the two users he picked is the chosen challenge user. However, the adversary has more ways to trivially learn the challenge user by leveraging the order information unavoidably revealed by the sequential link queries. Take, for instance, the scenario sketched in Fig. 3. There, the adversary interleaves a call to \(\mathsf {CH\text {-}SSIGN}_b\) (the one producing \(\varSigma ^*_1\)) between calls to \(\mathsf {SSIGN}\) for the same challenge user (the call that produces \(\varSigma _2\) and the calls producing \(\varSigma _3\)–\(\varSigma _5\)). If the adversary makes a call to \(\mathsf {SLINK}\) with the signatures produced before and after the call to \(\mathsf {CH\text {-}SSIGN}_b\) (e.g., including \(\varSigma _2,\varSigma _3\) in Fig. 3) and the call fails, then the challenge user is the same as the one used in the calls to \(\mathsf {SSIGN}\). Indeed, the link call fails because one signature is missing in the sequence (and, in Fig. 3, the correct sequence would be the dashed one). Similarly, if the call succeeds, then the challenge user is not the one used in the calls to \(\mathsf {SSIGN}\) (and the correct sequence in Fig. 3 is the solid one). Note that this works even when the scopes in all signatures are different: hence, it would not constitute a disallowed action in the \(\mathsf {UCL}\) model. A similar strategy interleaving a call to \(\mathsf {SSIGN}\) between calls to \(\mathsf {CH\text {-}SSIGN}_b\) also applies.

Fig. 3. Sketch of a strategy leading to a trivial win by \(\mathcal {A}\) leveraging order information in \(\mathsf {sUCL}\), and the model to detect it.

Oracles and State. In the previous example, we saw that calls to \(\mathsf {CH\text {-}SSIGN}_b\) and \(\mathsf {SSIGN}\) (the latter for \(\mathsf {uid} ^* _0\) or \(\mathsf {uid} ^* _1\)) can later be used to (trivially) expose the challenge user – by linking signatures produced before those calls with signatures produced after them. However, linking signatures produced within the same interval of such calls should not leak any information about the challenge user. To capture those intervals, we assign every honestly generated signature to a cluster (a set of signatures). Since the calls to \(\mathsf {CH\text {-}SSIGN}_b\) and \(\mathsf {SSIGN}\) are the events that determine which linkages would lead to trivial wins, we use those calls to mark when we need to start assigning signatures to a new cluster.

More specifically, to keep track of the cluster to which we need to assign signatures by challenge users, we resort to two counters: \(i_{\mathsf {\mathsf {SIG}}^*}\) and \(i_\mathsf {CSIG} \). Every time the adversary makes a call to \(\mathsf {CH\text {-}SSIGN}_b\), we dump all signatures produced by \(\mathsf {SSIGN} (\mathsf {uid} ^* _b,\dots )\) since the last call to \(\mathsf {CH\text {-}SSIGN}_b\) to a new cluster \(\mathsf {\mathsf {SIG}}^* [\mathsf {uid} ^* _b,i_{\mathsf {\mathsf {SIG}}^*} ]\), and increment \(i_{\mathsf {\mathsf {SIG}}^*} \). Similarly, when a call to \(\mathsf {SSIGN} (\mathsf {uid} ^* _b,\dots )\) is made, we increment \(i_\mathsf {CSIG} \) so that all signatures produced by \(\mathsf {CH\text {-}SSIGN}_b\) from that point onwards start being assigned to a new cluster \(\mathsf {CSIG} [i_\mathsf {CSIG} ]\).

In the example in Fig. 3, this restricts the adversary to making \(\mathsf {SLINK}\) queries containing signatures from either \(\mathsf {\mathsf {SIG}}^* [\mathsf {uid} ^* _0,i_{\mathsf {\mathsf {SIG}}^*} ]\), \(\mathsf {CSIG} [i_\mathsf {CSIG} ]\), \(\mathsf {\mathsf {SIG}}^* [\mathsf {uid} ^* _0,i_{\mathsf {\mathsf {SIG}}^*} +1 ]\), or \(\mathsf {CSIG} [i_\mathsf {CSIG} +1 ]\), but not from any combination of (subsets of) those clusters.

The oracles used to model \(\mathsf {sUCL}\) are summarized next and fully defined in Fig. 4. The state variables are summarized in Table 2. We emphasize that the new modifications only affect the anonymity property, while the other properties just need to adjust for the updated syntax.

Table 2. New/modified global state variables in the sequential \(\mathsf {UCL}\) scheme.
Fig. 4. Modified versions of the \(\mathsf {SIGN}\), \(\mathsf {SLINK}\), \(\mathsf {CH\text {-}SIGN}_b\) and \(\mathsf {CH\text {-}LINK}_b\) oracles.

  • \(\mathsf {SSIGN}\)/\(\mathsf {SLINK}\) extend \(\mathsf {SIGN}\)/\(\mathsf {LINK}\). \(\mathsf {SSIGN}\) uses \(st _\mathsf {uid} \), the state of user \(\mathsf {uid}\), to call \(\mathsf {SSign}\), and updates it with the returned \(st '_\mathsf {uid} \). \(\mathsf {SLINK}\) gets an ordered set.

  • \(\mathsf {CH\text {-}SSIGN}_b\)/\(\mathsf {CH\text {-}SLINK}_b\). Challenge oracles for the anonymity game, allowing the adversary to get signatures and link proofs for the challenge user.

Helper Function \(\mathsf {Adjacent}\). We rely on a helper function, \(\mathsf {Adjacent} (\mathsf {LNK} [\mathsf {uid} ],\mathsf {CLNK}) \rightarrow 0/1\). It explores \(\mathsf {LNK}\) to check link queries for honest signatures and \(\mathsf {CLNK}\) to check link queries for challenge signatures. It returns 1 if \(\mathsf {SLINK}\) and \(\mathsf {CH\text {-}SLINK}_b\) have been respectively queried with two sets of signatures that were sequentially generated, or 0 otherwise. This is an artifact of our specific construction rather than a general requirement, though. In \(\varPi _\mathsf {sUCL}\), given two adjacent signatures \(\varSigma _n,\varSigma _{n+1}\), if \(\varSigma _n\) is included in a link proof and \(\varSigma _{n+1}\) in another link proof, it is possible to determine that they were sequentially issued. Consequently, if one is a challenge signature and the other is not, it would be possible to trivially guess the bit b in the anonymity game. The \(\mathsf {Adjacent}\) function is defined in Fig. 5.

Fig. 5. Definition of the helper function \(\mathsf {Adjacent}\).

Anonymity Definition. Beyond the cumbersome changes required to prevent the new trivial wins, and the extra \(\mathsf {Adjacent}\) check required by our specific construction, we capture anonymity in \(\mathsf {sUCL}\) as in \(\mathsf {UCL}\). Specifically, the adversary controls the issuer and lets honest users join, sign and link signatures. He chooses a pair of honest users, one of which is randomly picked to initialize the challenge oracles. Eventually, the adversary needs to guess which one of the users was chosen, a task for which he can query the oracles again, subject to the restrictions described above. The formal definition is given next.

Definition 8

(Anonymity). A group signature scheme \(\mathsf {sUCL}\) with user-controlled sequential linkability ensures anonymity if for all ppt adversaries \(\mathcal {A}\), the following is negligible in \(\tau \): \(|\Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {sUCL}}^{\mathsf {sanon\text {-}}1}(\tau ) = 1 ]- \Pr [\mathbf{Exp} _{\mathcal {A},\mathsf {sUCL}}^{\mathsf {sanon\text {-}}0}(\tau ) = 1 ]|\).


4.2 Sequential Construction

We describe how we add such sequential behaviour to \({\varPi _\mathsf {UCL}} \) while preserving the desired anonymity. Recall that signatures must remain unlinkable and not reveal user-specific order (such as being the 5-th signature of some user). The order is only guaranteed and re-established for the subset of signatures linked via \(\mathsf {SLink} \).

Adding Order Information. Our construction leverages well-known hash-chain structures [27]. Roughly, every i-th signature is extended with information linking it to the \((i-1)\)-th signature by the same user. For this, we use pseudorandom numbers. First, \(x_i\) is generated for the i-th signature and combined with \(x_{i-1}\), from the previous signature, by computing \(\mathsf {H}' (x_{i} \oplus x_{i-1})\). The result of this hash and \(\mathsf {H}' (x_i)\) are added to the signature. In sequential link proofs, besides the basic link proof, the signer reveals the \(x_i\)’s of all the signatures in the sequence.
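A minimal Python sketch of this chaining, mirroring the \(\mathsf {SSign}\)/\(\mathsf {SLink}\)/\(\mathsf {VerifySLink}\) steps detailed below, with HMAC-SHA256 standing in for the \(\mathsf {PRF}\) and SHA-256 for \(\mathsf {H}'\) (both are illustrative choices, and all identifiers are ours):

```python
import hmac, hashlib, secrets

def prf(k: bytes, m: bytes) -> bytes:
    return hmac.new(k, m, hashlib.sha256).digest()

def H2(x: bytes) -> bytes:                       # plays the role of H'
    return hashlib.sha256(x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def seq_for(k: bytes, st: int):
    """Order information (seq_1, seq_2, seq_3) attached to the st-th signature."""
    n_st   = prf(k, b"0" + st.to_bytes(8, "big"))
    n_prev = prf(k, b"0" + (st - 1).to_bytes(8, "big"))
    x_st, x_prev = prf(k, b"1" + n_st), prf(k, b"1" + n_prev)
    return (H2(x_st), H2(xor(x_st, x_prev)), n_st)

k = secrets.token_bytes(32)
seqs = [seq_for(k, st) for st in range(1, 6)]     # five consecutive signatures

# Signer side of SLink: reveal x_i for every signature in the claimed sequence.
xs = [prf(k, b"1" + seq[2]) for seq in seqs]

# Verifier side of VerifySLink: re-check the chain against the committed values.
assert all(seq[0] == H2(x) for seq, x in zip(seqs, xs))
assert all(seqs[i][1] == H2(xor(xs[i], xs[i - 1])) for i in range(1, len(seqs)))
```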

Trusting an Append-only Bulletin Board \(\mathsf {BB}\). Our sequential construction requires the \(\mathsf {BB}\). It now also checks that the commitments to the pseudorandom numbers specified above are unique across all uploaded signatures: this is critical to prevent malleable sequences. Also, being append-only prevents removing signatures once added, which avoids tampering with the order.

Our Construction \(\varPi _\mathsf {sUCL} \). For brevity, we only describe the modified functions.

\(\langle \mathsf {Join} (ipk), \mathsf {Issue} (ipk,isk) \rangle \rightarrow ((usk,st), \bot )\). Operates as in \({\varPi _\mathsf {UCL}} \), but the user adds \(k \leftarrow \mathsf {PRF.KeyGen} (\tau )\) to her \(usk\) and sets \(st \leftarrow 1\).

\(\mathsf {SSign} (ipk, usk, st, m, scp) \rightarrow ((\tilde{\sigma }, nym), st ')\). Computes \((\sigma ,nym)\) as in \({\varPi _\mathsf {UCL}}.\mathsf {Sign}\) and extends \(\sigma \) with the anonymous sequence \(seq \), using the key k and state \(st \) (an illustrative sketch follows the list):

  • Parse \(usk \) as (Axysk) and compute \((\sigma ,nym)\) as in \(\mathsf {Sign}\).

  • Compute \(n _{st} \leftarrow \mathsf {PRF.Eval} (k,0||st), ~n _{st-1} \leftarrow \mathsf {PRF.Eval} (k, 0|| st-1)\).

  • Compute \(x_{st} \leftarrow \mathsf {PRF.Eval} (k,1||n _{st})\),  \(x_{st-1} \leftarrow \mathsf {PRF.Eval} (k, 1|| n _{st-1})\).

  • Compute \(seq _1 \leftarrow \mathsf {H}' (x_{st}), seq _2 \leftarrow \mathsf {H}' (x_{st} \oplus x_{st-1}), seq _3 \leftarrow n _{st}\).

  • Set \(seq \leftarrow (seq _1,seq _2,seq _3)\), \(st \leftarrow st +1\).

  • Return \((((\sigma ,seq),nym),st)\).
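As an illustration only, the following C sketch shows one possible way to compute \(seq\) from the key k and the state \(st\). It assumes HMAC-SHA256 as \(\mathsf {PRF}\) (our implementation uses an HMAC-based \(\mathsf {PRF}\) [4]) and SHA-256 standing in for \(\mathsf {H}'\); the hash choice, the one-byte 0/1 prefixes, the encoding of \(st\) and all identifiers are illustrative assumptions, and the group-signature components \((\sigma ,nym)\) are omitted.

```c
/* Minimal sketch of the seq computation in SSign. Assumptions (not fixed by
 * the paper): HMAC-SHA256 as PRF, SHA-256 as H', a one-byte domain-separation
 * prefix, and a little-endian encoding of the state st. */
#include <stdint.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/sha.h>

#define DIGEST_LEN 32

typedef struct {
    uint8_t seq1[DIGEST_LEN]; /* H'(x_st)              */
    uint8_t seq2[DIGEST_LEN]; /* H'(x_st XOR x_{st-1}) */
    uint8_t seq3[DIGEST_LEN]; /* n_st = PRF(k, 0||st)  */
} seq_t;

/* PRF.Eval(k, prefix || in), instantiated here as HMAC-SHA256. */
static void prf_eval(const uint8_t *k, size_t klen, uint8_t prefix,
                     const uint8_t *in, size_t inlen, uint8_t out[DIGEST_LEN])
{
    uint8_t buf[1 + DIGEST_LEN];  /* inputs used below are at most 32 bytes */
    buf[0] = prefix;
    memcpy(buf + 1, in, inlen);
    HMAC(EVP_sha256(), k, (int)klen, buf, 1 + inlen, out, NULL);
}

/* Compute seq = (H'(x_st), H'(x_st XOR x_{st-1}), n_st) for state st >= 1. */
void compute_seq(const uint8_t *k, size_t klen, uint64_t st, seq_t *seq)
{
    uint8_t n_st[DIGEST_LEN], n_prev[DIGEST_LEN];
    uint8_t x_st[DIGEST_LEN], x_prev[DIGEST_LEN], xored[DIGEST_LEN];
    uint64_t st_prev = st - 1;

    prf_eval(k, klen, 0x00, (const uint8_t *)&st, sizeof st, n_st);             /* n_st     */
    prf_eval(k, klen, 0x00, (const uint8_t *)&st_prev, sizeof st_prev, n_prev); /* n_{st-1} */
    prf_eval(k, klen, 0x01, n_st, DIGEST_LEN, x_st);                            /* x_st     */
    prf_eval(k, klen, 0x01, n_prev, DIGEST_LEN, x_prev);                        /* x_{st-1} */

    for (int i = 0; i < DIGEST_LEN; i++)
        xored[i] = x_st[i] ^ x_prev[i];

    SHA256(x_st, DIGEST_LEN, seq->seq1);  /* seq_1 = H'(x_st)              */
    SHA256(xored, DIGEST_LEN, seq->seq2); /* seq_2 = H'(x_st XOR x_{st-1}) */
    memcpy(seq->seq3, n_st, DIGEST_LEN);  /* seq_3 = n_st                  */
}
```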

The signatures in our construction are required to be uploaded to the bulletin board \(\mathsf {BB}\); the entity responsible for doing so may depend on the use case. \(\mathsf {BB}\) verifies \((m,scp,(\sigma ,(seq _1,seq _2,seq _3)),nym)\) and checks uniqueness of \(seq\), rejecting the signature if either check fails. Uniqueness of \(seq\) ensures that no \(\varSigma '=(\cdot ,\cdot ,(\cdot ,(seq '_1,seq '_2,\cdot )), \cdot )\) exists in \(\mathsf {BB}\) such that \(seq _1 = seq '_1\) or \(seq _2 = seq '_2\).
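A minimal sketch of this uniqueness check could look as follows, reusing the seq_t type and DIGEST_LEN from the previous sketch; the linear scan is purely illustrative, and a real \(\mathsf {BB}\) would index the stored \(seq_1,seq_2\) values.

```c
/* Sketch of BB's uniqueness check: a new signature is rejected if its
 * seq_1 or seq_2 value already appears in any stored signature. */
#include <string.h>

/* Returns 1 if (seq1, seq2) is fresh w.r.t. the n stored entries, 0 otherwise. */
int bb_seq_is_unique(const seq_t *stored, size_t n,
                     const uint8_t seq1[DIGEST_LEN],
                     const uint8_t seq2[DIGEST_LEN])
{
    for (size_t i = 0; i < n; i++) {
        if (memcmp(stored[i].seq1, seq1, DIGEST_LEN) == 0 ||
            memcmp(stored[i].seq2, seq2, DIGEST_LEN) == 0)
            return 0; /* duplicate commitment: reject the incoming signature */
    }
    return 1;
}
```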

\(\mathsf {SLink} (ipk, usk, lm, \pmb {\varSigma } _o) \rightarrow \pi _{seq}/\bot \). Sequential link proofs are computed as basic link proofs, additionally including the commitment openings in the proof. Namely:

  • Parse \(usk \) as (Axysk) and \(\pmb {\varSigma } _o\) as \(\lbrace \varSigma _i = (\cdot ,\cdot ,(\cdot ,(\cdot ,\cdot ,seq _{i,3})),\cdot )\rbrace _{i \in [n ]}\)

  • If any \(\varSigma _i\) does not exist in \(\mathsf {BB}\), abort. Else, compute \(\pi _{l} \) as in \(\mathsf {Link}\).

  • For all \(\varSigma _i\) in \(\pmb {\varSigma } _o\), compute \(x_i \leftarrow \mathsf {PRF.Eval} (k, 1 || seq _{i,3})\).

  • Return \(\pi _{seq} \leftarrow (\pi _{l}, \lbrace x_i \rbrace _{i \in [n ]})\).

\(\mathsf {VerifySLink} (ipk, lm, \pmb {\varSigma } _o, \pi _{seq}) \rightarrow 0/1\). Verifiers check the link proof as in the basic scheme, and recompute and compare the hash chain (a sketch of the chain check follows the list):

  • Parse \(\pi _{seq} \) as \((\pi _{l},\lbrace x_i \rbrace _{i \in [n ]})\), and \(\pmb {\varSigma } _o\) as \(\lbrace \varSigma _i = (\cdot ,\cdot ,(\cdot ,(seq _{i,1}, seq _{i,2},\cdot )),\cdot )\rbrace _{i \in [n ]}\).

  • If any \(\varSigma _i\) does not exist in \(\mathsf {BB}\), return 0. Else, verify \(\pi _{l} \) as in \(\mathsf {VerifyLink}\).

  • Check \(seq _{1,1} = \mathsf {H}' (x_1)\). If not, reject.

  • For \(i \in [2,n ]\), check \(seq _{i,1} = \mathsf {H}' (x_i)\) and \(seq _{i,2} = \mathsf {H}' (x_i \oplus x_{i-1})\). If not, reject.
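A minimal sketch of this chain check, under the same illustrative SHA-256 instantiation of \(\mathsf {H}'\) as above, could look as follows. Arrays are 0-indexed, so x[i], seq1[i] and seq2[i] correspond to the \((i+1)\)-th signature in the claimed order.

```c
/* Sketch of the hash-chain check in VerifySLink (SHA-256 standing in for H').
 * The BB membership check and the verification of the basic link proof pi_l
 * are performed elsewhere. Returns 1 iff the chain is consistent. */
#include <string.h>
#include <openssl/sha.h>

int verify_seq_chain(const uint8_t (*x)[DIGEST_LEN],    /* revealed x_1..x_n */
                     const uint8_t (*seq1)[DIGEST_LEN], /* seq_{i,1} per sig */
                     const uint8_t (*seq2)[DIGEST_LEN], /* seq_{i,2} per sig */
                     size_t n)
{
    uint8_t h[DIGEST_LEN], xored[DIGEST_LEN];

    if (n == 0) return 0;

    /* First signature: seq_{1,1} = H'(x_1). */
    SHA256(x[0], DIGEST_LEN, h);
    if (memcmp(h, seq1[0], DIGEST_LEN) != 0) return 0;

    for (size_t i = 1; i < n; i++) {
        /* seq_{i,1} = H'(x_i) */
        SHA256(x[i], DIGEST_LEN, h);
        if (memcmp(h, seq1[i], DIGEST_LEN) != 0) return 0;

        /* seq_{i,2} = H'(x_i XOR x_{i-1}) */
        for (int j = 0; j < DIGEST_LEN; j++)
            xored[j] = x[i][j] ^ x[i - 1][j];
        SHA256(xored, DIGEST_LEN, h);
        if (memcmp(h, seq2[i], DIGEST_LEN) != 0) return 0;
    }
    return 1;
}
```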

Efficiently Fetching Previously Created Signatures. Finally, note that users can leverage the \(n _{st}\) values to easily fetch signatures from the bulletin board \(\mathsf {BB}\). If a user has a rough idea of the value of \(st\) when the signature was created, she can use \(\mathsf {PRF}\) to recompute \(n_{st}\) for nearby \(st\) values, as sketched below. Otherwise, it is always possible to iterate from the initial value until finding the desired signature (as opposed to locally storing all signatures, or iterating through all signatures in \(\mathsf {BB}\)).
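A hedged sketch of this fetching strategy follows, reusing prf_eval from the earlier sketch; bb_lookup_by_seq3 is a hypothetical \(\mathsf {BB}\) query introduced only for illustration.

```c
/* Sketch of fetching a signature from BB via its n_st value: recompute
 * n_st = PRF(k, 0||st) for states around an estimate and query BB for each
 * candidate. bb_lookup_by_seq3 is a hypothetical BB query returning the index
 * of the signature whose seq_3 equals the given value, or -1 if none exists. */
extern long bb_lookup_by_seq3(const uint8_t seq3[DIGEST_LEN]); /* hypothetical */

long fetch_near(const uint8_t *k, size_t klen, uint64_t st_guess, uint64_t radius)
{
    uint8_t n_cand[DIGEST_LEN];
    uint64_t lo = (st_guess > radius) ? st_guess - radius : 1;

    for (uint64_t st = lo; st <= st_guess + radius; st++) {
        prf_eval(k, klen, 0x00, (const uint8_t *)&st, sizeof st, n_cand); /* n_st */
        long idx = bb_lookup_by_seq3(n_cand);
        if (idx >= 0)
            return idx;
    }
    return -1; /* not found; fall back to iterating from the initial state */
}
```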

4.2.1 Security of Our Construction

Theorem 2

Assuming zero-knowledgeness and simulation-soundness of \(\mathsf {SPK}\), collision resistance of \(\mathsf {H}'\), pseudorandomness of \(\mathsf {PRF}\), and a trusted \(\mathsf {BB}\) verifying signatures and checking uniqueness of \(seq\) (across all signatures in \(\mathsf {BB}\)), our construction is secure under the discrete logarithm, DDH, and q-SDH assumptions, in the random oracle model for \(\mathsf {H}\), \(\mathsf {H}'\) and \(\mathsf {SPK}\).

Proof Sketch. Proving anonymity essentially requires showing that the newly added \(seq\) components can be simulated, which follows from pseudorandomness of \(\mathsf {PRF}\) and the modelling of \(\mathsf {H}\) and \(\mathsf {H}'\) as random oracles.

For sequentiality, we show how to find collisions in \(\mathsf {H}'\), assuming a trusted \(\mathsf {BB}\) verifying signatures and checking uniqueness of their \(seq\) components, and pseudorandomness of \(\mathsf {PRF}\). Since honest signatures must exist in \(\pmb {\varSigma } ^*\), all the attacker can do is remove or swap honest signatures, or insert dishonest signatures before or after honest ones. However, the adversary commits to the set \(\pmb {\varSigma } '\) of dishonest signatures in the first stage of the game, and he can only use signatures in this set and in \(\mathsf {SIG} [\mathsf {uid} ^* ]\) to produce \(\pmb {\varSigma } ^*\). First, the uniqueness checks by \(\mathsf {BB}\) prevent the adversary from creating multiple signatures with the same \(seq\) values and re-ordering them as desired. Then, we show that to remove or swap honest signatures, or insert malicious ones, the adversary must find different openings to the \(seq _{1}\) or \(seq _2\) values in the committed signatures that are consistent with their hash chain, implying a collision in \(\mathsf {H}'\). This ensures that the probability that the adversary, before corrupting the user, produces a dishonest signature that can be “chained” with an honest one is negligible.

Full proofs for the new and modified properties are given in the full version of this work [19]. The remaining properties are proven as in the basic scheme.

5 Evaluation and Measurements

Table 3 summarises the functionality provided by the \(\mathsf {UCL}\) and \(\mathsf {sUCL}\) variants proposed in the present work, as well as that of the most closely related works [23, 26]. The table focuses on the linkability aspects, and on which entities can perform such linking.

Table 3. Functionality comparison between the schemes presented here and [23, 26].

We now analyse the computational and space costs of our constructions, comparing them with related work. In Table 4, we denote by \(\mathsf {e} _{\mathbb {G}_X}\), \(\mathsf {p}\) and \(\mathsf {h}\), respectively, an exponentiation (in \(\mathbb {G}_X\)), a pairing and a hash computation; and by \(n\mathbb {G}_1\), \(n\mathbb {Z}_p\) and \(n\mathsf {h} \), n elements of \(\mathbb {G}_1\), n elements of \(\mathbb {Z}_p\) and n hash values, respectively (elements associated with the Paillier encryption used in [23] are denoted \(\mathbb {Z}_{n^2}\)). For the \(\mathsf {SPK}\)s, we use the Fiat-Shamir transform, and for the \(\mathsf {PRF}\) an HMAC construction [4]. The curve used is BLS12-381 [2, 3]. The costs of verifying and storing the individual signatures involved in \(\mathsf {Link}\) and \(\mathsf {VerifyLink}\) are omitted, i.e., we only account for the costs of storing, computing or verifying the linkability proof itself. Note also that [23] does not include a linking functionality per se: the (mostly) equivalent functionality is a combination of their Blind, Convert and Unblind operations, so in the table we show the aggregate of their costs. In addition, other operations supported by [26], but not compatible with our model, are also omitted. These include their Opn, Lnk and LnkJdg functions (in Table 4, Link and VerifyLink refer to SLnk and SLnkJdg in [26]).

Table 4. Computational (top) and space (bottom) costs. In the “Our scheme” column, we show in black font the costs of the \(\mathsf {UCL}\) scheme (Sect. 3), and the text in red corresponds to the added costs of the \(\mathsf {sUCL}\) scheme (Sect. 4). Since [23, 26] only support explicit linkability, we only compare the linking costs in those schemes against the explicit linking of our schemes. Link costs for [23] aggregate their blinding, converting and unblinding costs. Operations from [26] that are not compatible with our model are omitted.

Figure 6 shows the results of experiments run with a C implementation of both variants of our scheme (on a MacBook Pro with a 2.5 GHz Quad-Core Intel i7 and 16 GB of 2133 MHz LPDDR3 RAM), iterating every trial 1000 times. \(\mathsf {Setup}\), \(\mathsf {Join}\) and \(\mathsf {Issue}\) are omitted, as they will typically take place either rarely or in non-time-critical contexts. \(\mathsf {Sign}\) and \(\mathsf {Verify}\) run well below 5 ms. For \(\mathsf {Link}\) and \(\mathsf {VerifyLink}\) (and the sequential variants), we experiment with sets of 10, 50 and 100 signatures. As in Table 4, this does not include verification of the individual signatures. Note that even in the case of 100 signatures, we are still in the order of 40 ms for linking and 20 ms for verifying the proofs. For comparison, [26] reports signing and signature verification times of around 100–150 ms, and linking and link verification times (for only two signatures) in the order of 330 ms.

Fig. 6. Costs for \(\mathsf {Sign}\), \(\mathsf {Verify}\) and \(\mathsf {Link}\) (with 10, 50 and 100 signatures).

6 Conclusion

We have presented a new variant of group signatures that allows users to explicitly link large sets of signatures, supports implicit signature linking, and does not rely on a trusted opener. We have then extended this to allow proving order within a sequence of linked signatures, including that no signature originally produced between the first and last signatures of the sequence has been omitted. We have also given a formal model capturing the extended unforgeability and privacy properties in this setting, and efficient constructions realizing our model, which we have proved secure under discrete-logarithm-related assumptions. Finally, we have reported on an experimental evaluation based on an implementation of our schemes.

Several lines of further work are possible. First, we give an unforgeability property ensuring that order is maintained against honest-then-corrupt users, but we do not consider the equivalent for initially corrupt ones. While we argue that modelling honest-then-corrupt users is applicable to many real-world use cases, it is interesting to consider the stronger variant. In that case, it initially seems that we can only hope to detect inconsistent proofs: if we only consider independent sequence proofs, a malicious signer may simply “precompute” the sequence in the order he intends to prove afterwards, even if he publishes the signatures in a different order. Also, being able to prove non-linkage of signatures may be an interesting functionality, which would also impact the model. In practice, there may be use cases where proving not having issued a (set of) signature(s) can be useful, for instance as a basic mechanism for (privacy-respectful) blacklisting. Efficiency-wise, taking inspiration from [20, 25], a great improvement would be to study the incorporation of batch verification of signatures (in addition to batch linking). On a more specific note, our construction for proving linked sequences introduces an artifact that affects the anonymity property: separately linking two adjacent sequences (i.e., where the last signature of one sequence was created immediately before the first signature of the other) makes both sequences linkable. Hence, removing this constraint would be an obvious improvement.