
Privacy Illusion: Subliminal Channels in Schnorr-like Blind-Signature Schemes

by Mirosław Kutyłowski *,† and Oliwer Sobolewski †
NASK—National Research Institute, 01-045 Warsaw, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2025, 15(5), 2864; https://doi.org/10.3390/app15052864
Submission received: 21 January 2025 / Revised: 18 February 2025 / Accepted: 27 February 2025 / Published: 6 March 2025

Featured Application

The mitigation techniques presented in this work should be implemented in client-side applications interfacing devices that implement protocols based on blind-signature schemes for critical purposes (such as self-sovereign identity, anonymous credentials, etc.).

Abstract

Blind signatures are one of the key techniques of Privacy-Enhancing Technologies (PETs). They appear as a component of many schemes, including, in particular, the Privacy Pass technology. Blind-signature schemes provide provable privacy: the signer cannot derive any information about a message signed at the user’s request. Unfortunately, in practice, this might be just an illusion. We consider a novel but realistic threat model where the user does not participate in the protocol directly but instead uses a provided black-box device. We then show that the black-box device may be implemented in such a way that, despite a provably secure unblinding procedure, a malicious signer can link the signing-protocol transcript with the resulting unblinded signature. Additionally, we show how to transmit any short covert message between the black-box device and the signer. We prove the stealthiness of these attacks in the anamorphic cryptography model: an auditor will not detect any irregular behavior even if the secret keys of the signer and the device are revealed for audit purposes. We analyze the following schemes: (1) Schnorr blind signatures, (2) Tessaro–Zhu blind signatures, and their extensions. We provide a watchdog countermeasure and conclude that similar solutions are necessary in practical implementations to deter most of the threats.

1. Introduction

In recent years, blind signatures, first introduced by Chaum in [1], have become a popular tool for implementing various privacy-enhancing applications. For example, they are a core component of some anonymous credentials protocols [2]. Another application (widely used in practice) is a protocol for issuing one-time anonymous authentication tokens—Privacy Pass [3]. The tokens of Privacy Pass are blindly signed random values. A token can be redeemed at any time, and it proves that the person redeeming it is known to the token issuer (and, consequently, is not a bot). Similar techniques for anonymous authentication were used by Google One VPN (Virtual Private Network) [4] and are currently used by Apple Private Relay [5]. Moreover, these techniques have even been proposed as an anonymous authentication method for phone users connecting to 4G/5G networks [6].
Further applications of blind signatures may emerge together with the introduction of the European Digital Identity Wallet (EDIW) according to the eIDAS 2.0 regulation [7]. (In fact, the consensus of the cryptographic community is that anonymous credentials protocols should be used to implement the EDIW [8], while, as we mentioned earlier, blind signatures are often their main technical component). The eIDAS 2.0 regulation aims to provide space for certificates of attributes and identification/authentication schemes that reveal only the information that is strictly required for a given purpose. For example, an EDIW should be able to prove that its holder is of legal age, e.g., to let them purchase alcoholic beverages. At the same time, neither the seller nor the issuer of the EDIW should be able to learn the buyer’s identity according to the data-minimization security paradigm. Attribute certificates issued with blind-signature protocols may solve this problem of privacy-protecting age verification. A similar situation may involve certain sensitive healthcare services, where the health insurance company should be able to check the eligibility of a medical transaction but not necessarily the patient’s identity.
In many countries, there are strict rules regarding privacy protection. The GDPR (General Data Protection Regulation) [9] of the European Union states that all information systems must implement privacy protection by design and by default. Moreover, implementing these principles must be demonstrable (accountability principle). Thus, deploying a well-researched technical solution like a blind-signature scheme may be a pragmatic option where demonstrating compliance with the GDPR rules is cheap, easy, and non-disputable.
Sometimes, the situation enforces rapid progress in privacy protection on a global scale, as was the case with contact-tracing systems during the COVID-19 pandemic. At that time, the need arose to create a method of contact tracing to help identify potentially infected people and alert them before they infect more people. Phone apps notifying users of exposure have been created to address that problem. Of course, such a system had to be developed in a privacy-preserving manner, as information about who interacts with whom can be critical for personal and public security. This privacy-by-design approach was the key feature of the exposure notification privacy-preserving analytics system [10], developed jointly by Apple and Google. As long as users’ smartphones follow the protocol, the users do not have to worry that the generated data will be misused, say, by criminals. This was the key acceptance condition for this technology in countries like Germany, where residents care a lot about their privacy.

1.1. Cryptographic Schemes Versus Black-Box Implementations

Most cryptographic schemes require some secret keys for protocol participants. These keys must be protected, as the sheer possibility of unauthorized use undermines the whole security concept. This philosophy leads to the strategy of encapsulating cryptographic operations and data in a secure environment and restricting access to this environment so that only the requests specified in the cryptographic scheme are processed. In particular, no direct read access to the memory of this secure environment should be granted. This is a sound strategy, as a seemingly innocent read operation might be an element of an unknown attack. Consequently, it is a good practice to use such black-box devices (implemented either in hardware or software) to run cryptographic procedures.
The concept of a cryptographic black box is a double-edged sword. As long as an implementation follows the specification to the letter, the cryptographic security arguments apply. However, how do we know what is executed by a black-box device? It may only look like an honest and flawless implementation of the scheme. In reality, intentional and non-intentional deviations from the ideal implementation might invalidate the security and privacy arguments. A prominent example of this situation was the flaw in Estonian electronic personal identity cards, where a faulty RSA key-generation procedure resulted in citizens’ public keys being compromised [11]. In an ideal situation, an auditor could conclude that the implementation is correct and secure by analyzing the device’s behavior (output correctness on test data, execution time, power consumption, etc.). Unfortunately, this is almost never the case for cryptographic products. First, a lot of effort is put into ensuring that side-channel analysis does not provide any information about the operations performed. This protects the system from attacks but simultaneously limits the auditor’s capabilities.
Given this, the problem is that a protocol implementation may deviate from the specification and cryptography may provide an indistinguishability proof for it. In that case, unless a fundamental cryptographic assumption is broken, no distinguisher has a non-negligible advantage in distinguishing a genuine implementation from a modified one based solely on the device’s input and output. Of course, this does not preclude detection during a destructive physical inspection in a specialized lab.
An attacker might have various goals when manipulating a protocol implementation. The first goal might be to transmit data to a designated receiver covertly. Classic subliminal channels [12] fall into this category. Another purpose of modification might be to leak secret keys stored on the device (and possibly generated there) to the attacker (and to nobody else). Then, these modifications fall into the category of kleptography [13]. Recently, anamorphic channels received a lot of interest: their additional feature in comparison to subliminal channels is undetectability even if the protocol participants are forced to surrender all secret keys present in the protocol according to the original specification to a distinguisher [14].
Subliminal information channels can be used for evil (like kleptography) and good purposes. For example, a malicious implementation of the popular FIDO2 standard may be used to break the unlinkability properties against a designated party [15]. On the other hand, one can hide information (a watermark) in standard signatures created by legitimate devices so that signatures forged by a third party after breaking the public key can be recognized and nulled [16].
In the case of blind signatures, the malicious goal might be to break anonymity guarantees: a malicious device may enable the signer to link the unblinded signature to the device and thereby to its owner. An attack may also involve transmitting covert data in either direction (e.g., the device’s private keys or description of some tasks to be carried out by the device).
We are mainly concerned with devices, like the future European Digital Identity Wallet (EDIW), which must be developed according to the privacy-by-design approach while implementing digital signature schemes and strong authentication protocols with user attributes. This inevitably leads to an architecture where the cryptographic functionalities are executed in a black box.
Research Problem
The problem we consider in this paper is the following:
Problem 1.
Are blind-signature schemes immune against malicious implementation setups that invalidate the privacy properties of blind signatures and are undetectable from the point of view of an auditor in the anamorphic cryptography model, where an auditor can request the secret keys?
If yes, are such latent attacks simple enough to be applicable in practice? Is it possible to prevent them in a pragmatic way?

1.2. Our Contribution

We introduce a formal model for creating covert/subliminal channels in blind-signature schemes with a black-box device. We present multiple methods of creating such subliminal channels in blind Schnorr signatures [17,18] and recent extensions of Schnorr signatures [19,20,21]. We then show how to use these subliminal channels to break the blindness/unlinkability property of the aforementioned schemes and how to securely transmit covert messages in the presence of a powerful auditor. These attacks remain undetectable even if the auditor requests private keys that exist according to the system specification (anamorphic cryptography model [14]).
We also show methods of mitigating subliminal channels for the schemes mentioned. Typically, the defense is based on an external watchdog, which serves as a proxy between the client’s black-box device and the signing server. In the context of this work, the watchdog can be considered an open-source client application trusted by the user.

1.3. Impact of the Findings

The goal of this paper is to provide strong arguments for caution in putting to use cryptographic products promising privacy protection. We show that we cannot rely solely on the cryptographic strength of blind-signature schemes—including the most prominent ones—and we have to control the actual implementation effectively. Otherwise, a product promising privacy-by-design might turn out to be a Trojan horse and serve rather as a Big Brother system. We show what can and will go wrong.
Understanding the attack potential is the first step toward preventing such attacks. As blind signatures may become an essential primitive in the context of the GDPR and eIDAS 2.0 regulations and the concept of Self-Sovereign Identity (SSI), the issues discussed here may profoundly impact security and privacy protection in the wild. Note that online information systems will often have to deal with sensitive personal information, so we expect widespread use of blind signatures shortly, e.g., for attribute-based authentication and authentication tokens.
In the paper, we show simple and fairly straightforward methods of muting (closing) the subliminal channels and thus mitigating the aforementioned attacks. We believe that these countermeasures should be implemented in all client applications that interface black-box devices as long as we are not absolutely sure about the black-box implementation.

1.4. Structure of the Article

In Section 2, we present related work. In Section 3, we describe the mathematical notation used in the paper and introduce cryptographic models for the blind-signature schemes with a black-box device on the user’s side. We also discuss types of attacks in our model alongside our assumptions and their rationale. In Section 4, we briefly recall the Schnorr blind-signature scheme and present attacks. We start with a secret key agreement, and then expand it to linking attacks (revealing which device has created an unblinded signature). Finally, we show how to create covert channels between the user’s device and the signing server, undetectable for the user and for the auditor. The attack details differ depending on whether the attack results are available after presenting an unblinded signature or immediately after blind-signature creation. Then, we show that a watchdog between the user device and the signing service would effectively block the attacks. In Section 5, we perform a similar analysis for the Tessaro–Zhu blind-signature scheme and its extensions proposed recently. We show that the modifications made to these constructions to strengthen their model turn into weaknesses in the more realistic setting, as they allow for anamorphic channels. Again, we show that there is an effective defense based on a watchdog. Section 6 is reserved for conclusions and laying out future work.

2. Related Work

Blind-signature schemes have been around for a long time. Due to that, they are quite well researched and have a well-scrutinized security model [22,23] that captures the properties of one-more unforgeability and malicious-signer blindness. Many works enhance blind signatures with additional features, e.g., non-interactivity [24], post-quantum security [25], and partial blindness [26].
Anamorphic channels in generic signature schemes have been studied in [27]. The Dictator model used in [27] and in our work has been introduced in [14].
The watchdog countermeasure implemented in this work is somewhat similar to the concept of trusted “amalgamation” functions defined in [28] (where the suspected subverted protocol Π is decomposed into multiple parts and then amalgamated by some trusted function in a way that closes the subliminal channel) and used, e.g., in [29] to achieve subversion resilience.

3. Methods

In this section we describe the notation, define cryptographic models, and present the threat model and assumptions used in subsequent analysis.

3.1. Notation

In the paper, we use multiplicative group notation, although a target implementation may be based on elliptic curves, where additive group notation is typically used. For simplicity of presentation, we assume that the computations are performed mod p for a prime number p unless stated otherwise. Let g ∈ Z_p^* be an element of prime order q; we work in the subgroup ⟨g⟩ of Z_p^* generated by g. However, most considerations apply almost directly to elliptic curve groups. If any non-trivial modification is required, we indicate the necessary changes.
We define a cryptographically secure hash function H_1 : {0,1}* → Z_q that maps any bit string to a number in Z_q. In addition, when we use H without any index, we refer to any standard cryptographically secure hash function such as, e.g., SHA-3.
By H(a, b, c, …), we denote applying the hash function to the single input a∥b∥c∥…, where ∥ denotes concatenation. Sampling an element x uniformly at random from a set S is denoted by x ←$ S. We also define the bitlen(x) function to return ⌈log₂ x⌉ (the bit length of x). We let a{i, …, j} denote the substring of a starting at position i and terminating at position j. By x = PRNG(seed), we mean taking a value x from a pseudorandom number generator PRNG seeded with the value seed. The exact format of the value x will follow from the context of invocation of PRNG. (For simplicity, we overload the notation; sometimes we need PRNG to return, e.g., a 32-byte number, and other times to return an element from Z_q.)
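As an illustration, H_1 and PRNG might be instantiated from the SHA-3 family as follows. The specific choices here (SHA3-256 with modular reduction, SHAKE-128 as the PRNG) are ours; the paper only assumes standard cryptographically secure primitives, and the modular reduction introduces a negligible bias for a large q.

```python
import hashlib

# Illustrative large modulus standing in for the group order q;
# a real deployment uses the order of the chosen Schnorr/EC group.
q = 2**255 - 19

def H1(*parts: bytes) -> int:
    """H_1 : {0,1}* -> Z_q, modeled as SHA3-256 of the concatenated
    input, reduced mod q (slightly biased; negligible for large q)."""
    h = hashlib.sha3_256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % q

def prng(seed: bytes, nbytes: int = 32) -> bytes:
    """PRNG(seed): SHAKE-128 used as an extendable-output function."""
    return hashlib.shake_128(seed).digest(nbytes)
```

Note that H1(a, b) equals H1 of the single string a∥b, matching the overloaded notation H(a, b, c, …) in the text.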
For interactive algorithms A and B, by (a, b) ← ⟨A(x), B(y)⟩ we denote joint execution, where x is the private input of A, y is the private input of B, a is the private output of A, and b is the private output of B.

3.2. Blind Signatures

A blind-signature scheme enables a User U to obtain digital signatures of a Signer S for arbitrarily chosen messages. The key feature of a blind-signature scheme is that S not only does not see the signatures created in interaction with the User but also cannot determine which message has been signed for whom if all these signatures are presented to S .
In more detail, there are the following roles in the protocol:
  • Signer S is a party that holds private signing keys and signs (blindly) messages on request of Users,
  • User U is a party that requests signatures of S over chosen messages; all cryptographic operations of the scheme are executed by a black-box device D of U (we assume that D may have a key pair sk_D ∈ Z_q, pk_D = g^{sk_D} used, e.g., for authentication);
  • A verifier V checks the signatures allegedly created by S .
A practical use case of a blind-signature scheme according to our model can be seen in Figure 1 (User login), Figure 2 (blind-signature creation) and Figure 3 (signature verification). Sometimes, the Signer and the Verifier are the same party, as the Signer can apply the verification procedure to signatures that appear on the market. Note that, in practice, U also stands for some electronic device, but, in contrast to D, it can be controlled (at least to some extent) by its owner, a physical person.
A blind-signature scheme consists of three procedures:
Definition 1.
(Blind-Signature Scheme). A blind-signature scheme Π is a tuple of algorithms Π = (Gen, SignS, SignU, Verify), where:
1. The key-generation procedure Gen(1^n) takes as input the security parameter n and outputs a pair of signing keys (sk, pk) for the Signer S.
2. The signing procedure is an interactive protocol, where S runs SignS(sk), and D runs SignU(pk, m) on behalf of a User:
(⊥, σ) ← ⟨SignS(sk), SignU(pk, m)⟩
For a message m ∈ M (M is the set of possible messages to be signed), the joint execution outputs a signature σ on the side of the User U. A transcript T of the interaction (all messages exchanged between S and D during the joint execution) is visible to all parties (including, in particular, the User).
3. The verification procedure Verify(pk, m, σ) takes as input a public key pk, a message m, and a signature σ. It returns a bit b ∈ {0, 1}, where 1 stands for ‘the signature is valid’.
For correctness, we require that for any keypair (sk, pk) generated by Gen and any message m ∈ M:
if (⊥, σ) ← ⟨SignS(sk), SignU(pk, m)⟩, then Verify(pk, m, σ) = 1.
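This definition can be rendered as a minimal interface; the class and method names below are our illustrative choices rather than any standard API, and the interactive signing is collapsed into a single driver method.

```python
from abc import ABC, abstractmethod
from typing import Any, Tuple

class BlindSignatureScheme(ABC):
    """Interface sketch of Definition 1: Pi = (Gen, SignS, SignU, Verify)."""

    @abstractmethod
    def gen(self, n: int) -> Tuple[Any, Any]:
        """Gen(1^n): return the Signer's key pair (sk, pk)."""

    @abstractmethod
    def sign(self, sk: Any, pk: Any, m: bytes) -> Tuple[Any, list]:
        """Run <SignS(sk), SignU(pk, m)>; return (sigma, transcript T)."""

    @abstractmethod
    def verify(self, pk: Any, m: bytes, sigma: Any) -> bool:
        """Verify(pk, m, sigma): True iff the signature is valid."""
```

A concrete scheme (e.g., blind Schnorr, Section 4) would subclass this and must satisfy the correctness condition: verify(pk, m, sign(sk, pk, m)[0]) is always True.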
Properties of Blind Signatures
There are two fundamental security and privacy properties of blind signatures:
  • One-more unforgeability: This property captures the unforgeability of signatures: no signature can be created without running the interactive procedure with S. Namely, in the security game, the Adversary can interact with S and receive valid signatures σ_i for i ∈ {1, …, n} on messages m_i of their choice. The Adversary wins if they can present a valid signature σ ∉ {σ_1, …, σ_n}.
    The scheme satisfies the one-more unforgeability property if the probability of winning the game by the Adversary is negligible.
  • Blindness: This property captures the fact that S cannot link the signatures with the sessions executed with the User. It is expressed by the following game: first, the Adversary chooses messages m_0, m_1, generates signing keys (sk, pk), and sends m_0 and m_1 to the User. Next, the User chooses a bit b at random. Then, two instances of the blind signing procedure are started (and possibly run concurrently) with the Adversary being the Signer. In the first instance, the User obtains a valid signature of m_b signed with sk; in the second, it obtains a valid signature for m_{1−b} signed with sk. The resulting valid unblinded signatures σ_0, σ_1 are presented to the Adversary. The Adversary presents a bit b′ and wins the game if b′ = b.
    The scheme satisfies the blindness property if the probability of winning the game by the Adversary is at most 1/2 + ε, where ε is a negligibly small value.
In a standard use case, the User is authenticated before starting the signature creation procedure by the Signer S , since typically, signing blindly is a service offered only to entitled Users. Hence, by presenting a valid signature σ of S , the User implicitly proves that σ has been obtained by an eligible party.

3.3. Audit and Threat Model

We consider blind-signature schemes for which cryptographic proofs for blindness and one-more-unforgeability features already exist. For these proofs, the Adversary is either a malicious Signer (eager to learn what he signs) or a malicious User (aiming to derive more signatures of S than requested).
The scenario that we consider is different but motivated by privacy protection in the wild: we consider the case where the Signer S and a device D attempt to cheat the User holding D. In almost all cases, the User can neither inspect the Signer nor check D. All the User can do is eavesdrop on the messages exchanged between S and D and, possibly, inspect some security declarations and/or certificates for D and S issued by third parties. Persuaded by the certificates and provable but theoretical properties of a blind-signature scheme, the User may have a false sense of security and privacy. Namely, the certificates and declarations might be meaningless since the User may be given a malicious device different from those that undergo the certification process, and S may run the services not as declared. As long as there are no observable protocol deviations, the User might be in a losing position.
There are a few attack vectors carried out by the colluding Signer S and device D :
  • Unblinding known signatures: This attack enables S to decide whether a given signature σ has been created during a given session with D (effectively showing which device has derived σ); the attack may require knowledge of the message signed with σ. If effective, this attack dismantles privacy protection for signatures that emerge on the market.
  • Immediate unblinding: This attack enables S to derive the same unblinded signature as that obtained by D while running the blind-signature creation procedure;
  • Covert transmission to Signer: In this attack, D sends a covert message to S during blind-signature creation. In particular, the covert message can contain the message to be signed blindly for immediate unblinding.
  • Covert transmission to device: In this attack, S sends a covert message to D during blind-signature creation. In particular, the covert message can contain secret instructions for the device.
In all the above cases, the Signer, which has been deployed as an important component of the privacy protection ecosystem, turns out to be a “Big Brother” spying on its Users.
To prevent the mentioned attacks, one must either redesign the schemes so that the Users obtain more control over the situation or run an appropriate audit and certification framework together with a strict evaluation of the cryptographic schemes. In practice, the second approach is the strategy more frequently implemented.

3.3.1. Auditor Model

In this paper, we consider the model of a Dictator (in our work called an Auditor) borrowed from [14] and subsequently also applied to digital signatures [27].
Definition 2
(Auditor). An Auditor J can observe any interaction between S and D (see Figure 4). Following the anamorphic cryptography concept, J may require S and/or D to hand over all private keys they should hold for protocol execution. In particular, this concerns the signing key of S and data used to authenticate D against S . Based on that, J can analyze all past interactions between S and D (presumably recorded for this purpose).
Note that Definition 2 makes a silent assumption that after surrendering the keys to J , the public key pk of the Signer S must be revoked. Indeed, one cannot claim anymore that a signature verified with pk originates from S .
Note that according to the definition, J will obtain any private key created outside the scope of scheme specification (so-called dual keys according to the terminology in [14]). The queried parties can claim that such keys do not exist.
In anamorphic cryptography, the name “Dictator” is used for the entity that requests the private keys from protocol participants. In our case, the Dictator is on the “good” side, helping to thwart the attacks against Users’ privacy. Hence, we use a neutral Auditor term to avoid confusion. In real-world applications, it is easy to imagine that in cases of a suspected attack of the described kind, a competent authority may request access to valid keys from the legal entity that provides the blind signing service and access to the keys used by the devices. These prerogatives can be well motivated by obligations to supervise critical trust services.
Last but not least, due to advances in cryptanalytic algorithms and hardware, the Signer’s public key can be broken at some point. So, the model considered applies immediately.

3.3.2. Indistinguishability

The main objective of malicious parties S and D is to run the blind-signature scheme so that the goals of an attack are fulfilled, while on the other hand, J has no chance to detect deviations from the original protocol. “Impossibility” can be captured formally by the following left-or-right game.
Definition 3
(Indistinguishability Game). The game consists of the following phases:
  • S and D jointly choose a bit b ∈ {0, 1} uniformly at random,
  • If b = 0, they install the original blind-signature scheme Π. If b = 1, they install a modified scheme Π′ implementing the attack.
  • J inspects S and D as described in Definition 2.
  • J outputs a bit b′.
If b′ = b, then J wins the game.
Definition 4.
We say that the schemes Π, Π′ are indistinguishable for an auditor J if J wins the game from Definition 3 with probability at most 1/2 + ε, where ε is a negligible value.

3.4. Watchdog Defense

We shall find that for many important blind-signature schemes Π, one can find a modified malicious scheme Π′ implementing one of the above-mentioned attacks so that Π and Π′ are indistinguishable. In this situation, due to the fundamental role of blind signatures in many trust services, it is necessary to find a remedy to the situation.
In our opinion, the easiest way is to create a watchdog  W running in the User’s system and serving as a man-in-the-middle between S and D . For pragmatic reasons, W should be in some sense transparent to S and D : if they run their protocol honestly, they will not notice any change in the protocol behavior. However, if they run a modified malicious protocol, W should prevent them from achieving attack goals.
Typically, a watchdog randomizes any nondeterministic element transmitted between the protocol participants. In that way, almost all potential channels to transmit covert data between malicious participants are blocked (we say “almost”, as, for example, there is still a possibility of leaking data by interrupting the execution, introducing transmission failures, etc., or by abusing the probabilistic nature of signatures via rejection sampling).
For both Schnorr blind signatures and Tessaro–Zhu blind signatures, we show that a watchdog that randomizes values on the wire is provably effective. In particular, we show that such a watchdog makes it impossible for the Signer and device to link the unblinded signature with a signing session using subliminal channels in the signing procedure and, by extension, mitigates these subliminal channels. Unfortunately, linking can still be carried out using other subliminal channels, e.g., in the message m (e.g., if m is a random element selected by the device) or in the unblinded signature σ itself by means of rejection sampling, as mentioned earlier.
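For the blind Schnorr scheme analyzed in Section 4, such a re-randomizing watchdog can be sketched as follows. The group parameters are toy-sized, and the additive shifts γ, δ are our illustrative instantiation of the randomization idea, chosen so that an honest execution still verifies.

```python
import hashlib
import secrets

# Toy Schnorr-group parameters, p = 2q + 1 with both primes;
# g = 4 generates the order-q subgroup. Illustrative sizes only.
q, p, g = 1019, 2039, 4

def H(R: int, m: bytes) -> int:
    """Stand-in challenge hash mapping (R', m) into Z_q."""
    data = R.to_bytes(4, "big") + m
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big") % q

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
m = b"token"

# Signer -> Watchdog: commitment R = g^r
r = secrets.randbelow(q)
R = pow(g, r, p)

# Watchdog -> Device: forward a re-randomized commitment
gamma, delta = secrets.randbelow(q), secrets.randbelow(q)
R_w = (R * pow(g, gamma, p) * pow(pk, delta, p)) % p

# Device (honest): usual blinding, now on top of R_w
alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
R_prime = (R_w * pow(g, alpha, p) * pow(pk, beta, p)) % p
c_prime = H(R_prime, m)
c = (c_prime + beta) % q          # Device -> Watchdog

# Watchdog -> Signer: absorb delta into the challenge
c_w = (c + delta) % q

# Signer -> Watchdog: response for the challenge it actually saw
s = (r + c_w * sk) % q

# Watchdog -> Device: absorb gamma into the response
s_w = (s + gamma) % q

# Device unblinds; the final signature is sigma = (c', s')
s_prime = (s_w + alpha) % q

# Schnorr verification still succeeds despite the watchdog's shifts
ok = H((pow(g, s_prime, p) * pow(pk, -c_prime, p)) % p, m) == c_prime
```

The key point is that the values seen by the Signer (c_w) and by the device (R_w, s_w) are uniformly re-randomized by γ and δ, so neither party can embed a recoverable covert payload in them.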

4. Blind Schnorr Signatures—Results and Discussion

The blind Schnorr signature scheme [17,18] is one of the most straightforward realizations of blind signatures, but it suffers from some security shortcomings regarding the one-more unforgeability property [30,31] when concurrent signing sessions are allowed. Still, with some caution, it can be safely utilized in most practical use cases. The Signer holds a private key sk_S ∈ Z_q and the corresponding public key pk_S = g^{sk_S}. The signing procedure is presented in Figure 5 (one could alternatively consider a variant of the scheme where the signature is σ = (R, s)). The verification procedure of a signature σ on a message m with regard to a public key pk is exactly the same as for regular Schnorr signatures and proceeds as in Figure 5.
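The signing flow can be sketched as follows, assuming the textbook (α, β)-blinding variant of blind Schnorr signatures; toy group parameters and an illustrative hash stand in for a real instantiation, and minor details may differ from Figure 5.

```python
import hashlib
import secrets

# Toy Schnorr-group parameters, p = 2q + 1 with both primes;
# g = 4 generates the order-q subgroup. Illustrative sizes only.
q, p, g = 1019, 2039, 4

def H(R: int, m: bytes) -> int:
    """Stand-in for the challenge hash H(R' || m) mapped into Z_q."""
    data = R.to_bytes(4, "big") + m
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big") % q

# Gen: the Signer's key pair
sk_S = secrets.randbelow(q - 1) + 1
pk_S = pow(g, sk_S, p)

m = b"message to be signed blindly"

# Signer: pick r, send the commitment R = g^r
r = secrets.randbelow(q)
R = pow(g, r, p)

# Device: blind with fresh alpha, beta and send the challenge c
alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
R_prime = (R * pow(g, alpha, p) * pow(pk_S, beta, p)) % p
c_prime = H(R_prime, m)
c = (c_prime + beta) % q

# Signer: respond with s = r + c * sk_S
s = (r + c * sk_S) % q

# Device: unblind; the signature is sigma = (c', s')
s_prime = (s + alpha) % q

# Verify: recompute R' = g^{s'} * pk^{-c'} and check the hash
ok = H((pow(g, s_prime, p) * pow(pk_S, -c_prime, p)) % p, m) == c_prime
```

Since α and β are uniform and known only to the device, the pair (c′, s′) is, in the honest scheme, statistically independent of the transcript (R, c, s) seen by the Signer.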

4.1. Establishing a Shared Secret by S and D

The first step of most of our attacks is to establish a shared secret (anamorphic key) ak between the Signer S and the device D. The difficulty lies in the fact that no out-of-band channel can be used; the key agreement must be undetectable and well hidden in the regular interaction.
The simplest way to establish a shared key is a static–static Diffie–Hellman (DH) protocol, where the public keys pk_S and pk_D = g^{sk_D} are used to derive a shared secret ak = pk_D^{sk_S} = pk_S^{sk_D}. However, the problem is that the Auditor J may ask for the secret keys kept by S and D according to the system specification (the request to surrender private keys may concern any keys, not only sk_S). In that case, J could calculate the shared secret ak as above, and any information transferred between S and D hidden with the key ak would be readable for the Auditor J.
For the same reason, the static–ephemeral Diffie–Hellman protocol for deriving the shared secret is not applicable in the attack. Indeed, the Auditor needs to request just one secret key to derive the shared secret. So, the only choice seems to be an ephemeral–ephemeral Diffie–Hellman key exchange.
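A toy computation illustrates why static key material is useless for the attack: whoever holds either of the long-term secret keys, including the Auditor after a key-surrender request, recovers exactly the same ak (toy parameters, illustration only).

```python
import secrets

# Toy Schnorr-group parameters (p = 2q + 1, both prime); illustrative only.
q, p, g = 1019, 2039, 4

# Long-term keys of the Signer S and the device D
sk_S = secrets.randbelow(q - 1) + 1
pk_S = pow(g, sk_S, p)
sk_D = secrets.randbelow(q - 1) + 1
pk_D = pow(g, sk_D, p)

# Static-static DH: both malicious parties derive the same ak ...
ak_signer = pow(pk_D, sk_S, p)
ak_device = pow(pk_S, sk_D, p)

# ... but so does the Auditor after obtaining just ONE long-term
# secret key (here sk_S) together with the public pk_D:
ak_auditor = pow(pk_D, sk_S, p)
```

The same one-key argument rules out the static–ephemeral variant, which is why the attack must fall back to ephemeral–ephemeral shares.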
On the side of the Signer, one can attempt to use R as a share for the Diffie–Hellman key exchange. The Auditor cannot ask for r, as S can claim that it has been erased after the protocol execution for security reasons (or just to conserve space). However, this does not help if S finalizes the protocol execution with s = r + c·sk_S mod q: after gaining sk_S, the Auditor would be able to derive r from this equality.
We have to conclude that in order to hide manipulations in the protocol, the secret key ak has to be established from Diffie–Hellman shares for which S and D either effectively forget the discrete logarithms or can plausibly claim not to know them at any time after the protocol is completed.
Our plan is to create shares of ephemeral–ephemeral Diffie–Hellman key exchange using exclusively the messages exchanged during the creation of blind signatures. The hidden key agreement will require a few steps, as presented below.

4.1.1. Transmitting a Diffie–Hellman Ephemeral Share from S to D

The trick is to use the element R = g^r as the share of S. This may seem to contradict what we have said above (r can be derived by J once sk_S is surrendered). However, the strategy of S is to interrupt the protocol execution and report a plausible error (e.g., “server overloaded”). In this case, the corresponding value s is not transmitted to D, and the only element dependent on r and visible to J is R = g^r. So, we are exactly in the situation of the DH key-exchange protocol. The details are given in Figure 6.

4.1.2. Transmitting a DH Ephemeral Share from D to S

Transmitting a DH share from D to S is more complicated. The share should be hidden inside c, as it is the only element sent by D. Fortunately, c is in some sense random, as c = c' + β mod q, where β can be freely chosen. However, c is not a group element but belongs to Z_q. So, we need to somehow encode a random group element as an element of Z_q. It is impossible to do this perfectly, but achieving the goal with some redundancy will suffice.
In the first step, D chooses y ∈ Z_q at random and calculates Y = g^y. At this moment, Y must be mapped into Z_q, and c = encode(Y) for the chosen function encode. As already mentioned, this might be difficult. In the following, we show an example where the inverse mapping decode provides more than one candidate for Y.
Recall that for the sake of simplicity, we consider a subgroup of Z_p^* of prime order q, where p = 2q + 1 (this is the case for a strong, DDH-secure Schnorr group). In this case, we may choose the mapping encode(Y) = Y mod q. Note that since q = (p − 1)/2, we have either Y = c or Y = c + q mod p. Hence, decode(c) = {c, c + q mod p}.
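For the p = 2q + 1 case, the mapping and its two-candidate inversion can be sketched in a few lines of Python (a toy illustration with far-below-cryptographic parameters; the names encode/decode follow the text):

```python
# Toy sketch of encode/decode for a Schnorr subgroup of Z_p^* with p = 2q + 1.
# Parameters are illustrative only; real deployments use ~256-bit q.
q = 1019                 # prime
p = 2 * q + 1            # 2039, also prime, so p is a safe prime
g = 4                    # 4 = 2^2 generates the subgroup of quadratic residues

def encode(Y: int) -> int:
    """Map a group element Y (a quadratic residue mod p) into Z_q."""
    return Y % q

def decode(c: int) -> set:
    """Return both group-element candidates that encode to c."""
    return {c % p, (c + q) % p}

# Every element of the order-q subgroup is recovered among its decode candidates:
subgroup = {pow(g, i, p) for i in range(q)}
assert len(subgroup) == q
assert all(Y in decode(encode(Y)) for Y in subgroup)
```

The redundancy is exactly one bit: decode always returns two candidates, and the correct one cannot be indicated within the same protocol invocation.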
With more effort, one can define analogous mappings for elliptic curve groups. Attention should be paid to the resulting probability distribution of c: it should be indistinguishable from the uniform distribution in Z q . This can be technically involved (compare [32]).
The details of the described procedure are shown in Figure 7. Note that the attack can be detected if D is immediately asked for a Schnorr signature resulting from this interaction. Namely, D would have to find c' such that c' = H_1(g^α · R · pk_S^{c − c' mod q}, m). Solving this equation for c' is infeasible, even if one can freely choose α. So again, a good choice is to create a “server error” instead of sending the response s.
Note that as long as the mapping encode on random group elements yields the elements of Z q with probability distribution not distinguishable from the uniform distribution, the above execution is indistinguishable by the Auditor J from an execution of the original protocol.

4.1.3. Generating ak

On the side of D, the shares are R and Y = g^y, and D knows y, so it can derive the shared key ak = KDF(R^y), where KDF is some secure key-derivation function. On the side of S, the situation is more complicated, as decode(c) may provide more than one candidate for the share of D. In particular, for the example mapping defined above, S can compute two candidates for ak:
ak_0 = KDF(c^r mod p),   ak_1 = KDF((c + q)^r mod p).
As there are no further messages from D to S during a single protocol invocation, there is no way to inform S which of ak_0 and ak_1 is the correct shared value.
If matching ak against one of ak_0 and ak_1 is necessary, one can use the next protocol invocation. Then, D can choose the challenge c_1 = H_1(g^{α_1} · R_1 · pk_S^{β_1}, m_1) + β_1 mod q to carry this single bit of information. For example, D can choose c_1 < q/2 (by re-randomizing the blinding factors α_1, β_1 until c_1 falls into the desired range) if Y = c in the previous protocol invocation, and c_1 ≥ q/2 otherwise.
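The whole hidden key agreement can be illustrated numerically (a toy sketch; the group parameters are illustrative, and kdf is a stand-in for a proper key-derivation function such as HKDF):

```python
import hashlib
import secrets

q, p, g = 1019, 2039, 4   # toy safe-prime group, p = 2q + 1 (illustrative only)

def kdf(x: int) -> bytes:
    """Toy stand-in for a secure key-derivation function."""
    return hashlib.sha256(str(x).encode()).digest()

# S's share, sent openly as the commitment R = g^r of an interrupted session:
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)

# D's share Y = g^y, transmitted as the challenge c = encode(Y) = Y mod q:
y = secrets.randbelow(q - 1) + 1
Y = pow(g, y, p)
c = Y % q

ak_D = kdf(pow(R, y, p))          # D derives the key directly

# S decodes both candidates for Y and derives two candidate keys:
ak_0 = kdf(pow(c % p, r, p))
ak_1 = kdf(pow((c + q) % p, r, p))
assert ak_D in (ak_0, ak_1)       # one of them matches D's key
```

As discussed above, S simply keeps both candidates until a later invocation (or a successful linking test) disambiguates them.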

4.2. Linking with the Shared Key ak

We assume that D and S already share a secret ak . The goal of the attack described below is to enable linking a Schnorr signature σ derived by D with an interaction between S and D . As D is not anonymous during this interaction, such linking reveals the User who received σ . Thus, the attack aims to break the blindness property. As a side effect of the proposed construction, the device D can also create a hidden ciphertext that is readable only by S .
The key point for linking is creating a hidden relation between α and β by D. Namely, assume first that α is created at random, as in the original scheme, while β depends on α, the session, and a hidden message m'. For example, let m' < q/2^20 be a covert message. Then, β is calculated as
β = α + PRNG(0, ak, R) + m' mod q
where PRNG is a pseudorandom number generator that yields elements of Z_q, and R is the element sent by S as the first message in the current session. So, we use a one-time pad in Z_q, where the key is session-specific and depends on the shared secret ak. Note that the link between α, β and D is witnessed by the fact that
β − α − PRNG(0, ak, R) mod q < q/2^20
In the case of using a wrong key ak, the calculated number is a random element of Z_q, so the probability of a false positive is about 2^-20. Of course, the bound m' < q/2^20 is somewhat arbitrary but seems good enough in practice.
In order to extend the bandwidth of the covert channel, D may generate α not at random but derive it from ak, R, and a second covert message m'' < q. Namely,
α = PRNG(1, ak, R) + m'' mod q
(note that we use the same arguments ak and R as before but slightly change the input to obtain a different pseudorandom element).
The details of the procedure executed by D during signature creation are shown in Figure 8. The only difference is that α and β are generated in a modified way. Note that the change is undetectable by the Auditor J as long as the output of PRNG is indistinguishable from random sampling in Z_q. Since ak is not available to J, it suffices to use a cryptographically strong generator PRNG with this property.
Now, let us discuss how a signature can be linked with a transcript of a signing procedure. So assume that S holds a transcript (R, c, s) of an interaction with D and a signature (c', s') found in the wild. The goal is to check whether (c', s') corresponds to this interaction (and, consequently, whether (c', s') has been used by the owner of D). In the positive case, the covert messages m' and m'' should be retrieved as well. The test details are given in Figure 8. First, α, β are recalculated by computing
1.
α = s' − s mod q,
2.
β = c − c' mod q.
Then, one can test whether the link is correct by deriving
m' = β − α − PRNG(0, ak, R) mod q
and checking that
m' < q/2^20
If this is the case, then m' and m'' = α − PRNG(1, ak, R) mod q are accepted as covert messages from D.
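A compact Python sketch of the malicious blinding and the signer-side test may help to fix the ideas (toy group parameters; H and prng are illustrative stand-ins, and the bound q/2^20 is replaced by q//8 because the toy q is small):

```python
import hashlib
import secrets

q, p, g = 1019, 2039, 4     # toy safe-prime group (illustrative only)
BOUND = q // 8              # toy stand-in for the bound q / 2^20

def H(*parts) -> int:       # toy hash into Z_q
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prng(tag, ak, R) -> int:  # session-specific pseudorandom pad
    return H("prng", tag, ak, R)

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
ak = "shared-anamorphic-key"            # assumed already established (Section 4.1)
m, m1, m2 = "message to sign", 7, 123   # m1 plays m', m2 plays m'' (covert)

# --- one signing session with a malicious device ---
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)                               # S -> D
alpha = (prng(1, ak, R) + m2) % q              # carries m''
beta = (alpha + prng(0, ak, R) + m1) % q       # carries m'
R_blind = (pow(g, alpha, p) * R * pow(pk, beta, p)) % p
c_unblinded = H(R_blind, m)
c = (c_unblinded + beta) % q                   # D -> S
s = (r + c * sk) % q                           # S -> D
s_unblinded = (s + alpha) % q                  # signature: (c_unblinded, s_unblinded)

# The unblinded signature verifies as a plain Schnorr signature:
R_check = (pow(g, s_unblinded, p) * pow(pk, q - c_unblinded, p)) % p
assert H(R_check, m) == c_unblinded

# --- signer-side linking test ---
a_rec = (s_unblinded - s) % q
b_rec = (c - c_unblinded) % q
m1_rec = (b_rec - a_rec - prng(0, ak, R)) % q
assert m1_rec == m1 and m1_rec < BOUND         # link established
m2_rec = (a_rec - prng(1, ak, R)) % q
assert m2_rec == m2                            # second covert message recovered
```

With real-size parameters, the acceptance bound q/2^20 keeps the false-positive probability at about 2^-20, as discussed above.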
Comments
First, if S has two candidates ak_0 and ak_1 for the shared key, then the test is performed for both of them. The only effect is that the probability of a false-positive result increases from 2^-20 to about 2^-19.
Another important issue is that S does not need to hold the message m for which (c', s') is a valid signature. This makes linking easier, since sometimes the signatures are available while the signed data remain undisclosed due to data-protection issues.
Unblinding During Signature Creation
The presented attack can be slightly adjusted so that, during blind-signature creation, S learns the message m. The only prerequisite is that the domain of potential messages M is small enough to run a test for all messages m ∈ M. Moreover, D cannot inject the hidden messages m', m'' anymore.
For this attack, the parameters α, β are calculated by D in a slightly simplified way (as before, PRNG is assumed to return numbers from Z_q and to be indistinguishable from a true random number generator):
1.
α = PRNG(0, ak, R),
2.
β = PRNG(1, ak, R).
The remaining steps of the blind-signature creation are unchanged. In particular, note that for the Auditor J , the numbers α , β are not distinguishable from a pair of elements of Z q selected at random. Hence, J cannot detect any deviation from the original protocol.
Apart from the steps specified by the original protocol, S can run the procedure shown in Figure 9. Its main idea is to recalculate α, β, then derive R' and c', and check the hash value for each message from M until a match is found.
Since S can compute s' as s' = s + α mod q, the signature (c', s') can be immediately attributed to D.
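Under the assumptions above (a toy group and illustrative stand-ins for H and PRNG), the brute-force unblinding by S can be sketched as follows:

```python
import hashlib
import secrets

q, p, g = 1019, 2039, 4   # toy safe-prime group (illustrative only)

def H(*parts) -> int:     # toy hash into Z_q
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prng(tag, ak, R) -> int:
    return H("prng", tag, ak, R)

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
ak = "shared-anamorphic-key"
M = ["candidate-A", "candidate-B", "candidate-C"]  # small message space

# Device: fully deterministic blinding derived from ak and R
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)
alpha, beta = prng(0, ak, R), prng(1, ak, R)
ballot = "candidate-B"
R_blind = (pow(g, alpha, p) * R * pow(pk, beta, p)) % p
c = (H(R_blind, ballot) + beta) % q        # the only message S receives from D

# Signer: recompute alpha, beta and test every candidate message
a, b = prng(0, ak, R), prng(1, ak, R)
Rb = (pow(g, a, p) * R * pow(pk, b, p)) % p
matches = [cand for cand in M if (H(Rb, cand) + b) % q == c]
assert "candidate-B" in matches            # S learns the vote before even replying
```

Note that S completes this test before sending s, i.e., before the signature even exists.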
Comment
Note that if blind signatures are used for tasks like e-voting, then all possible plaintexts (the message space M) are known. Indeed, the ballots must not be modified in any way; otherwise, the decrypted ballot would itself serve as a witness of the vote cast. Moreover, the number of voting options in M is small. So, S can immediately verify the signature (c', s') for each voting option and learn the vote cast with D. This would be a disaster from the point of view of an election process.

4.3. Linking Without the Shared Key ak

Now we assume that S does not know any public keys representing D and there is no secret ak computed as described before and shared by S with D . Nevertheless, we shall see that malicious D and S can break the blindness property in an undetectable way. A single asymmetric key will be used.
The first stage of the attack is the same as in Section 4.1.1: as a result, S holds r_0 associated with D, while D holds R_0 = g^{r_0}.
Equipped with the key R_0, the device D can create α, β so that they are in a relation recognizable by a party holding r_0 yet completely random from the point of view of the Auditor J. An instantiation of such a relation is creating an ElGamal ciphertext of R (from the current interaction) and then mapping both parts of the ciphertext to Z_q, as was carried out in Section 4.1.2. The rest of the signing protocol is unchanged (see Figure 10).
ElGamal ciphertexts (R_0^k · R, g^k) are not distinguishable from random pairs of group elements, so, for a proper mapping encode, we can conclude that an Auditor J will see no difference between the execution from Figure 10 and the original one.
Now, let us focus on the linking mechanism. When a signature (c', s') has to be compared with a transcript (R, c, s), the first step is to derive the candidate values α = s' − s mod q and β = c − c' mod q. Then, we apply the decode function to α and β. Every combination of the results can be treated as an ElGamal ciphertext created with the public key R_0. It can be decrypted with r_0 and the result tested for equality with R. For details, see Figure 11.
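A minimal sketch of this linking mechanism (toy group parameters; the helper name elgamal_open is ours, not part of the scheme):

```python
import secrets

q, p, g = 1019, 2039, 4   # toy safe-prime group, p = 2q + 1 (illustrative only)

def encode(Y: int) -> int:     # group element -> Z_q (Section 4.1.2)
    return Y % q

def decode(c: int):            # both group-element candidates for c
    return (c % p, (c + q) % p)

# One-time setup (Section 4.1.1): S keeps r0, D holds R0 = g^r0
r0 = secrets.randbelow(q - 1) + 1
R0 = pow(g, r0, p)

# Current session: S sends R; D hides an ElGamal encryption of R in (alpha, beta)
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)
k = secrets.randbelow(q - 1) + 1
alpha = encode((pow(R0, k, p) * R) % p)    # first ciphertext component
beta = encode(pow(g, k, p))                # second ciphertext component

# Linking test by S: decrypt every candidate pair with r0 and compare with R
def elgamal_open(c1: int, c2: int) -> int:
    # c1 / c2^r0 mod p, with the inverse computed via Fermat's little theorem
    return (c1 * pow(pow(c2, r0, p), p - 2, p)) % p

linked = any(elgamal_open(a, b) == R
             for a in decode(alpha) for b in decode(beta))
assert linked
```

At most four candidate pairs have to be tested, one of which decrypts exactly to R.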
Comment
The above construction can be modified slightly to enable D to insert a covert message for S. Namely, instead of encrypting R, one can encrypt a bit string M representing an element of the group, where
M = H(R)_{ℓ−t+1, …, ℓ} ∥ m'
(ℓ is the bit length of the output of the H function, and H(R)_{ℓ−t+1, …, ℓ} stands for the last t bits of H(R)).
Now, the test procedure will not check equality with R but with the last t bits of H(R). If t is big enough (say, t = 40), then the false-positive probability is practically negligible, while there are plenty of bit positions left for the message m'.

4.4. Countermeasures

The simplest solution to the problem of a black-box device D and a Signer S corrupting the blind-signature scheme is to use a watchdog. The watchdog serves as a proxy between D and S that modifies the exchanged messages in an oblivious manner. In a real-life scenario, the watchdog could be a trusted software client for a hardware signing device. Should an Auditor want to deploy such protection, they can force the signing-protocol users to run all communication through their proxy.
The watchdog influences the communication between S and D using randomly chosen α and β (see Figure 12):
  • instead of passing R, the watchdog delivers R* = R · pk_S^β · g^α;
  • instead of passing c, the watchdog delivers c* = c + β mod q;
  • instead of passing s, the watchdog delivers s* = s + α mod q.
Since the order q is a prime number, the elements R*, c* are randomized so that each result is uniformly distributed in the respective set. The third message s computed by S is deterministic, and it must be adjusted as s* = s + α mod q so that D obtains a valid signature. For honest parties, the protocol is indistinguishable from the one without a watchdog.
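The three substitutions can be checked end-to-end with a short sketch (toy parameters; an honest device is simulated inline, and H is an illustrative hash into Z_q):

```python
import hashlib
import secrets

q, p, g = 1019, 2039, 4   # toy safe-prime group (illustrative only)

def H(*parts) -> int:     # toy hash into Z_q
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
m = "token"

# Signer's commitment
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)

# Watchdog re-randomizes R on its way to the device
wa, wb = secrets.randbelow(q), secrets.randbelow(q)
R_star = (R * pow(pk, wb, p) * pow(g, wa, p)) % p

# Device blinds as usual, seeing only R_star
da, db = secrets.randbelow(q), secrets.randbelow(q)
R_blind = (pow(g, da, p) * R_star * pow(pk, db, p)) % p
c_prime = H(R_blind, m)
c = (c_prime + db) % q

# Watchdog shifts the challenge on its way to the Signer
c_star = (c + wb) % q
s = (r + c_star * sk) % q

# Watchdog adjusts the response on its way back to the device
s_star = (s + wa) % q
s_prime = (s_star + da) % q

# The unblinded signature (c_prime, s_prime) is still a valid Schnorr signature
R_check = (pow(g, s_prime, p) * pow(pk, q - c_prime, p)) % p
assert c_prime == H(R_check, m)
```

Note that the watchdog needs no secrets beyond its own fresh randomness per session.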
The effectiveness of the watchdog protection follows from the following property:
Theorem 1.
Consider invocations Π 1 , Π 2 of blind Schnorr signature creation executed by S with devices D 1 , D 2 (where D 1 and D 2 might be the same). Let the messages seen by S during the execution of Π 1 be R 1 , c 1 , s 1 . Let R 2 , c 2 , s 2 be the messages seen by D 2 during execution of Π 2 . Then, there is a valid execution of the signing procedure with a watchdog, where the messages exchanged are R 1 , c 1 , s 1 from the point of view of S , and R 2 , c 2 , s 2 from the point of view of D 2 .
In other words, given a signature (c', s') and the full history of interactions with the clients, the Signer S cannot exclude any session as being the one where (c', s') was created. This means that it is impossible to link an unblinded signature with the signing session using subliminal channels in the signing procedure. As we will see later, this does not mean that all attacks are stopped, as it is still possible to carry out a linking attack using a subliminal channel in the unblinded signature σ.
Proof. 
We have to find α, β that “couple” both executions. First, let β = c_1 − c_2 mod q. Then, we have to show that there is α such that
R_2 = R_1 · pk_S^β · g^α
That is, we ask for α such that
g^α = R_2 / (R_1 · pk_S^β).
The value on the right-hand side is already fixed, and g is a group generator, so such an α exists.
Finally, we observe that in this case, we automatically have s_2 = s_1 + α mod q. Indeed, due to correctness, we have
g^{s_2} = R_2 · pk_S^{c_2},   g^{s_1} = R_1 · pk_S^{c_1}
By substitutions and reductions, we obtain the following equivalent equalities:
s_2 =? s_1 + α mod q
g^{s_2} =? g^{s_1 + α}
R_2 · pk_S^{c_2} =? R_1 · pk_S^{c_1} · g^α
R_1 · pk_S^β · g^α · pk_S^{c_2} =? R_1 · pk_S^{c_1} · g^α
pk_S^β · pk_S^{c_2} =? pk_S^{c_1}
The last equation holds, as β + c 2 = c 1 mod q . □
Corollary 1.
If S links a signature (c', s') and a transcript (R, c, s) with a device D, the linking must occur independently of (R, c, s).
In Section 4.5, we show that D has only limited possibilities to inject its own identifier into a signature (c', s'). Indeed, the only non-deterministic step is the free choice of α and β. The problem is that α and β are only used to transform the received element into R'. The element R' will be reconstructed during signature verification and could potentially be a source of information on D. However, the mapping defined by R' = g^α · R · pk_S^β behaves like a one-way function of α and β. So, any intelligent way of choosing α and β is doomed to fail.
Comments
Let us note that the watchdog also protects against executions where D or S send error messages. If D interrupts, then there is no advantage for malicious S and D , as D gets an element stochastically independent of the element sent by S . So, the same effect can be achieved by simulating the message received.
If the protocol is interrupted by S after receiving the modified c, then the communication transcripts on the side of S and D are stochastically independent. So again, the same effect can be achieved by simulating the responses. On the other hand, without communication, no data can be transmitted between D and S .

4.5. The Remaining Threat—Rejection Sampling Attack

Let us note that a malicious device D has an opportunity to partially disclose its identity in a blind signature due to the rejection sampling technique, and no watchdog can prevent it. For this purpose, D must use a hidden key K A of an adversary. The details are presented in Figure 13.
The idea of the attack is to adjust R' so that H(R', K_A) has its prefix bits identical to ID. Since H is a pseudorandom function, the only way to find such an R' is to randomly sample values of R'. So, in the case of a failure detected at Step 5, we simply backtrack to Step 3. Note that the parameter k must be small (e.g., k = 4), so that the expected number of retries, about 2^k, remains reasonable. Consequently, only a few bits of information on the identity of D can be leaked.
The reconstruction of ID from a signature (c', s') is immediate, as R' is recalculated during the signature verification process as g^{s'} · pk_S^{-c'}. Then, the adversary knowing K_A can compute H(R', K_A) and truncate it to ID. On the other hand, an Auditor J cannot derive any information on H(R', K_A) without K_A, provided that H is a correlated-input secure hash function [33].
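A toy end-to-end sketch of the rejection-sampling channel (illustrative parameters; the key K_A and the hash H are stand-ins):

```python
import hashlib
import secrets

q, p, g = 1019, 2039, 4   # toy safe-prime group (illustrative only)
k = 4                     # number of leaked identity bits; must stay small

def H(*parts) -> bytes:   # toy keyed hash (stand-in for a correlated-input secure H)
    return hashlib.sha256("|".join(map(str, parts)).encode()).digest()

def Hq(*parts) -> int:    # toy hash into Z_q for the Schnorr challenge
    return int.from_bytes(H(*parts), "big") % q

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
K_A = "hidden-adversary-key"   # hypothetical adversary key
ID = 0b1010                    # k identity bits of the device
m = "token"

r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)

# Device: resample the blinding factors until H(R', K_A) starts with ID
while True:
    alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
    R_blind = (pow(g, alpha, p) * R * pow(pk, beta, p)) % p
    if H(R_blind, K_A)[0] >> (8 - k) == ID:
        break                  # about 2^k = 16 attempts expected

c_prime = Hq(R_blind, m)
c = (c_prime + beta) % q
s = (r + c * sk) % q
s_prime = (s + alpha) % q

# Adversary: recover R' from the bare signature and read off ID
R_rec = (pow(g, s_prime, p) * pow(pk, q - c_prime, p)) % p
assert R_rec == R_blind
assert H(R_rec, K_A)[0] >> (8 - k) == ID
```

With k = 4, the loop needs about 16 trials on average, which matches the remark above that only a few bits per signature can be leaked in reasonable time.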
Note that no watchdog can prevent this attack without introducing an additional message exchange and modifying the blind-signature creation protocol. The reason is that R' is determined on the side of D immediately after choosing α, β and remains unchanged until signature verification, where it is used as an argument of H.
Despite the short length of ID (and its lack of uniqueness in most scenarios), the presented attack must be considered a serious risk in some situations. For example, if blind signatures are used for e-voting in a small committee, there is a real danger of vote deanonymization by an Adversary holding K_A.

5. Tessaro–Zhu Blind Signatures—Results and Discussion

The Tessaro–Zhu protocol [19] is a blind-signature scheme based on blind Schnorr signatures. The motivation to extend blind Schnorr signatures was that their security relies on the assumed hardness of the ROS problem [30]: once ROS is solved, the one-more unforgeability property is automatically broken. This had been only a theoretical issue until 2020, when an efficient way of solving the ROS problem was presented [31]. This renders blind Schnorr signatures (if created concurrently) formally insecure.
The Tessaro–Zhu blind-signature scheme has been designed so that the ROS problem does not affect its security. It breaks the linearity of the Schnorr system by introducing additional random values exchanged between the parties. Below, we show that while solving one problem, the new construction opens doors for wide subliminal channels. First, we analyze the BS 1 variant from [19], which seems to be most commonly referred to.
The signing and verification procedures for the BS1 variant of Tessaro–Zhu blind signatures are presented in Figure 14.
Note that each signature component c', s', y' is randomized by D using a fresh random value (respectively, r_2, r_1, and γ).

5.1. Establishing a Shared Key ak

Obviously, S and D can use the methods presented for the blind Schnorr signatures to send their shares for hidden Diffie–Hellman key exchange. However, surprisingly, for Tessaro–Zhu signatures, there are additional opportunities for malicious parties to increase the attack potential.

5.1.1. Transmitting the Share of S

The key observation is that transmitting the share of S to D no longer requires an interruption with an error message. Namely, instead of using A as a share, S can transmit its share inside y. Instead of choosing y at random, the following steps are executed:
1.
z_S ∈ Z_q is chosen at random,
2.
Z_S = g^{z_S},
3.
y = encode(Z_S) + H(A) mod q
where encode is the mapping from Section 4.1.2. The element H ( A ) is used as a pseudorandom shift.
Note that D will obtain more than one candidate for the share of S after applying decode(y − H(A) mod q). This leads to multiple candidate keys ak, but, as before, it does not prevent the linking attacks.
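A numeric sketch of this trick (toy group; Hq is an illustrative hash into Z_q standing in for H):

```python
import hashlib
import secrets

q, p, g = 1019, 2039, 4   # toy safe-prime group, p = 2q + 1 (illustrative only)

def Hq(x) -> int:          # toy hash into Z_q, stand-in for H
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % q

# A is S's public nonce from the current Tessaro-Zhu session:
A = pow(g, secrets.randbelow(q - 1) + 1, p)

# S hides its Diffie-Hellman share Z_S inside the scalar y:
z_S = secrets.randbelow(q - 1) + 1
Z_S = pow(g, z_S, p)
y = (Z_S % q + Hq(A)) % q        # y = encode(Z_S) + H(A) mod q

# D strips the pseudorandom shift and decodes both candidates for Z_S:
c = (y - Hq(A)) % q
candidates = {c % p, (c + q) % p}
assert Z_S in candidates
```

Since y is a regular protocol message, no error interruption is needed; the session can be completed normally.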

5.1.2. Transmitting the Share of D

One can avoid an error message by encoding the share of D in γ. Instead of choosing γ at random, D executes the following steps:
1.
z_D ∈ Z_q is chosen at random,
2.
Z_D = g^{z_D},
3.
γ = encode(Z_D) + H(Y, A) mod q
The only difference is that γ is not immediately visible to S. However, D can make the resulting signature (c', s', y') somehow available to S. Then, γ can be derived as y'/y mod q.
Later, we shall see that the keys ak can be confirmed during linking attacks. So, S can crawl through all available signatures, compute candidate parameters γ , derive corresponding candidates for ak , and detect the matches.

5.1.3. Generating ak

As before, the shared key can be calculated as ak = KDF(Z_D^{z_S}) = KDF(Z_S^{z_D}). Multiple candidates are possible as well.

5.2. Linking with a Shared Secret

The attack from Section 4.2 can now be extended: instead of the two elements α, β ∈ Z_q, the device D uses the three elements r_1, r_2, γ in the same way. This increases the space available for covert messages by more than 50%.

5.3. Linking with No Shared Key

As transmitting the dedicated public key from S to D can be achieved in a single valid execution of the blind-signature creation procedure, one can focus on extending the attack from Section 4.3. The difference is that for blind Schnorr signatures, we had two elements α, β ∈ Z_q that carried a single ElGamal ciphertext. Now, the three elements r_1, r_2, γ can play a similar role.
To perform the attack, we need two hidden public keys pk 1 , pk 2 of S dedicated to D . Simply, pk 1 is set during the first interaction with D , while pk 2 is determined during the second interaction with D .
When pk_1, pk_2 are set, one can create linkable signatures that additionally contain covert messages m_1 < q/2^20, m_2 < q/2^20. Instead of calculating r_1, r_2, γ at random, the following steps are executed:
1.
ψ ∈ Z_q is chosen at random,
2.
γ = encode(g^ψ),
3.
r_1 = encode(pk_1^ψ · m_1),
4.
r_2 = encode(pk_2^ψ · m_2).
The conditions m_1 < q/2^20, m_2 < q/2^20 are somewhat arbitrary, but their purpose is to:
1.
eliminate wrong guesses for the identity of D (for wrong keys of D, the decryption yields random elements of Z_q, hence outside the required range with high probability);
2.
eliminate wrong elements delivered by decode.

5.4. Attacks on Related Schemes

The blind-signature scheme BS1 from [19] has been followed by related schemes:
1.
Scheme BS3 from the same paper [19];
2.
Schemes BS and BSr3 from [34] (see also CRYPTO’23 version [21]);
3.
Schemes BS1, BS2, BS3 from [35] (see also CRYPTO’24 version [20]).
These schemes differ from the point of view of security proof and underlying assumptions needed for the proof. The price to be paid for stronger security and privacy statements is their complexity: the schemes become more complicated, and many new elements and messages appear during protocol execution.
Unfortunately, these stronger protocols are even more vulnerable to our attacks; due to the protocol complexities, there is even more room for covert messages. In the following, we do not provide specifications of the mentioned schemes (as they are too long) but instead indicate which parameters can be used for the attack, so that an interested reader can easily reproduce the attacks. We slightly change the notation: for all papers, we assume that the group order is q and not p as in [34,35].

5.4.1. Establishing a Public Key of S Dedicated for D

As before, the scenario is that during the first interaction between S and D, the Signer presents a public key pk to D in such a way that S can claim to be unaware of the related secret key. This key is used for static–ephemeral Diffie–Hellman key exchange during the subsequent interactions. The public key, an element of the main group, can be encoded by one or more elements of Z_q that are transmitted in the clear from S to D during the first interaction. These elements are:
  • Scheme BS3 from [19]: t, y ∈ Z_q selected by S in BS3.S1 and revealed to D as the output of BS3.S2 (compare page 22 of [19]);
  • Scheme BS from [34]: b, y ∈ Z_q selected by S during the first step of BS.ISign and revealed to D in the last message of this procedure (compare page 13 of [34]);
  • Scheme BSr3 from [34]: b, y ∈ Z_q selected by S during the first step of BSr3.ISign and revealed to D in the third message (compare page 14 of [34]);
  • Scheme BS1 from [35]: z_1, e ∈ Z_q selected by S during BS1.S1. They are transferred to D in the final step of BS1.S2 (compare page 10 of [35]);
  • Scheme BS2 from [35]: z_1, e ∈ Z_q selected by S during BS2.S1. They are transferred to D in the final step of BS2.S3 (compare page 19 of [35]);
  • Scheme BS3 from [35]: z_1, e ∈ Z_q and crnd_S, crnd_R ∈ Z_q^2 (so, together, six elements of Z_q!); they are selected by S during the execution of BS3.S1 and revealed in the message sent by S in the last step of BS3.S2 (compare page 34 of [35]).

5.4.2. Linkable Signature Creation Details

Similarly as before, we use random elements of Z_q from the original scheme to carry the covert data. These elements are specific to the schemes, as is the way they are recovered when a signature is compared with an interaction transcript. The random elements used are:
  • Scheme BS3 from [19]: r_1, r_2, γ_1, γ_2 ∈ Z_q selected by D during the execution of BS3.U1 (compare page 22 of [19]). Denoting the signature components with primes, they can be recalculated by S as follows:
    γ_1 = y'/y mod q (compare BS3.U2);
    γ_2 = c'/c mod q (compare BS3.U1);
    r_1 = s' − (γ_1/γ_2)·s mod q (compare BS3.U2);
    r_2 = t' − γ_1·t mod q (compare BS3.U2).
    Note that c', s', y', t' are the components of the signature compared with an interaction with messages containing s, y, t (message sent by S as the output of BS3.S2) and c (message sent by D as the output of BS3.ISign).
  • Scheme BS from [34]: α, β, r ∈ Z_q (in some cases, Z_q^*) selected by D (compare BS[f_1].USign_1 and BS[f_2].USign_1 on page 13 of [34]). They can be recovered by S as follows (compare BS[f_1].USign_1, BS[f_1].USign_2 and BS[f_2].USign_1, BS[f_2].USign_2 on page 13 of [34]):
    α = ȳ/y mod q,
    β = c/c̄ mod q (the version with f_2) or β = c − c̄·α^5 mod q (the version with f_1),
    r = z̄ − α·β^{-1}·z − α·b mod q (the version with f_2) or r = z̄ − α^5·z − α·b mod q (the version with f_1);
  • Scheme BSr3 from [34]: the elements β, s ∈ Z_q. Note that the random s is sent in the clear during blind-signature creation, so it can be used to calculate the shared ephemeral secret from which β is derived. In turn, β can be derived by S as c/c̄ mod q, where c̄ is recalculated according to the regular verification procedure (compare page 14 of [34]);
  • Scheme BS1 from [35]: the elements α_0, α_1, γ_0, γ_1 ∈ Z_q selected freely by D during BS1.U2. They can be recalculated by S during the linking test as: γ_0 = d − d' mod q, γ_1 = e − e' mod q, α_0 = z_0 − z_0' mod q, α_1 = z_1 − z_1' mod q (recall that d, e, z_0, z_1 are components of the blind signature, while d', e', z_0', z_1' are transmitted by S in the last step of BS1.S2; compare page 10 of [35]);
  • Scheme BS2 from [35]: the elements α_0, α_1, γ_0, γ_1 ∈ Z_q selected by D during BS2.U1 (compare page 19 of [35]). They can be recalculated by S during the linking test as: γ_0 = d − d' mod q, γ_1 = e − e' mod q, α_0 = z_0 − z_0' mod q, α_1 = z_1 − z_1' mod q. Recall that d, e, z_0, z_1 are components of the blind signature, while d', e', z_0', z_1' are transmitted by S in the last step of BS2.S3.
  • Scheme BS3 from [35]: the elements α_1, γ_0, γ_1 ∈ Z_q and δ_S, δ_R ∈ Z_q^2 selected by D during the execution of BS3.U2 (compare page 34 of [35]). They can be recalculated by S during the linking test as:
    α_1 = z_1 − z_1' mod q;
    γ_0 = d − d' mod q;
    γ_1 = e − e' mod q;
    δ_S = crnd_S − crnd_S' mod q (these are component-wise calculations on vectors of length 2);
    δ_R = crnd_R − crnd_R' − γ_0 · crnd_S' mod q (again, these are vector operations)
    (compare BS3.U3 on page 34 of [35]). Note that these calculations can be performed by S, as d, e, z_1, crnd_S, crnd_R are the components of the final signature (see the output of BS3.U3), while d', e', z_1', crnd_S', crnd_R' are contained in the message sent by S in the last step of BS3.S2.
We see that, except for BSr3, there are at least three elements of Z_q available for the attack. So, apart from linking, one can transmit at least two hidden messages. In the case of BS3 from CRYPTO'24, the scheme most attractive from the point of view of the formal security analysis provided in [35], there is room for six messages.

5.5. Countermeasures

Similarly as for the blind Schnorr signature scheme, a properly implemented watchdog can mask the values sent on the wire so that no subliminal channel in the signing procedure can be used for the linking attack. The watchdog is presented in Figure 15.
Let us check that the output on the side of D is a valid Tessaro–Zhu signature. Below, r_1, r_2, γ denote the blinding factors chosen by the watchdog, r_1', r_2', γ' the blinding factors chosen by D, and (c', s', y') the resulting signature; the challenge received by S is c* = c' + r_2' + r_2 mod q, its response is s = a + c*·y·sk_S mod q, and Y = pk_S^y. The key point is to check that
A' = g^{s'} · pk_S^{-c'·y'}
as the right-hand-side expression replaces A' under the hash during the verification procedure. Recall that the watchdog delivers to D the values Ā = g^{r_1} · A^γ · Y^{γ·r_2} and Ȳ = Y^γ, so that
A' = g^{r_1'} · Ā^{γ'} · Ȳ^{γ'·r_2'}
   = g^{r_1'} · (g^{r_1} · A^γ · Y^{γ·r_2})^{γ'} · Y^{γ·γ'·r_2'}
   = g^{r_1' + r_1·γ'} · A^{γ·γ'} · Y^{γ·γ'·(r_2 + r_2')}
On the other hand, with y' = y·γ·γ' and s' = (s·γ + r_1)·γ' + r_1',
g^{s'} · pk_S^{-c'·y'}
   = g^{(s·γ + r_1)·γ' + r_1'} · pk_S^{-c'·y·γ·γ'}
   = g^{((a + c*·y·sk_S)·γ + r_1)·γ' + r_1'} · pk_S^{-c'·y·γ·γ'}
   = g^{r_1' + r_1·γ'} · A^{γ·γ'} · pk_S^{c*·y·γ·γ'} · pk_S^{-c'·y·γ·γ'}
   = g^{r_1' + r_1·γ'} · A^{γ·γ'} · pk_S^{(c* − c')·y·γ·γ'}
   = g^{r_1' + r_1·γ'} · A^{γ·γ'} · pk_S^{(r_2 + r_2')·y·γ·γ'}
   = g^{r_1' + r_1·γ'} · A^{γ·γ'} · Y^{γ·γ'·(r_2 + r_2')}
   = A'
For the constructed watchdog, we can show a result similar to Theorem 1.
Theorem 2.
Consider invocations Π 1 , Π 2 of blind Tessaro–Zhu signature creation executed by S with devices D 1 , D 2 (where D 1 and D 2 might be the same). Let the messages seen by S during the execution of Π 1 be A 1 , Y 1 , c 1 , s 1 , y 1 . Let A 2 , Y 2 , c 2 , s 2 , y 2 be the messages seen by D 2 during the execution of Π 2 . Then, there is a valid execution of the signing procedure with the watchdog, where the messages exchanged are A 1 , Y 1 , c 1 , s 1 , y 1 from the point of view of S , and A 2 , Y 2 , c 2 , s 2 , y 2 from the point of view of D 2 .
Proof. 
We have to show that there are blinding factors r_1, r_2, γ that “couple” both executions. Since, according to the protocol specification, c = c' + r_2 mod q, we immediately obtain the value r_2 = c_1 − c_2 mod q. Similarly, since y' = y·γ mod q, we set γ = y_2/y_1 mod q. The value r_1 should satisfy the equality
A_2 = g^{r_1} · A_1^γ · (Y_2)^{r_2}
In the above equality, all values are fixed apart from r_1. As g is the group generator, there is a unique r_1 < q such that g^{r_1} = A_2 · A_1^{-γ} · (Y_2)^{-r_2}.
So it remains to check that s_2 = s_1·γ + r_1. As the group has prime order, it suffices to show that g^{s_2} = g^{s_1·γ + r_1}. Before we start, let us recall that
s_1 = a_1 + c_1·y_1·sk_S mod q,   s_2 = a_2 + c_2·y_2·sk_S mod q.
Hence, also
g^{s_1} = A_1 · pk_S^{c_1·y_1},   g^{s_2} = A_2 · pk_S^{c_2·y_2}.
By substitutions and reductions (using Y_2 = pk_S^{y_2}, c_2 = c_1 − r_2, and y_2 = y_1·γ), we obtain the following equivalent equalities:
g^{s_2} =? g^{s_1·γ + r_1}
A_2 · pk_S^{c_2·y_2} =? (A_1 · pk_S^{c_1·y_1})^γ · g^{r_1}
g^{r_1} · A_1^γ · (Y_2)^{r_2} · pk_S^{c_2·y_2} =? A_1^γ · pk_S^{c_1·y_1·γ} · g^{r_1}
(Y_2)^{r_2} · pk_S^{c_2·y_2} =? pk_S^{c_1·y_1·γ}
pk_S^{y_2·r_2} · pk_S^{(c_1 − r_2)·y_2} =? pk_S^{c_1·y_1·γ}
pk_S^{c_1·y_2} =? pk_S^{c_1·y_1·γ}
pk_S^{c_1·y_1·γ} =? pk_S^{c_1·y_1·γ}
As the last equality is true, so is the first, and both points of view can originate from the same computation. □

6. Conclusions

Let us start the conclusions by addressing Problem 1. We have shown that blind-signature schemes are vulnerable to malicious implementation setups that invalidate the privacy properties of blind signatures while remaining undetectable from the point of view of the Auditor. Subliminal channels exist in each of the discussed protocols and, in the case of a malicious black-box implementation, they allow for breaking the blindness property and for sending covert messages. We argue that these attacks are easily applicable in practice.
Given that, we believe that blind-signature schemes should be implemented on the client side with great care. Answering the second part of Problem 1, we have presented a pragmatic solution for preventing the aforementioned attacks. The problem can be solved by a properly implemented, transparent watchdog, provided the user can trust the watchdog not to collude with the client’s device. This can be achieved by implementing the watchdog as publicly auditable, open-source software. We have presented formal proofs of the watchdog’s effectiveness. The only remaining problem not solved by the watchdog is rejection sampling, which may still leak a few bits per signature. This is because the watchdog only stops the subliminal channels contained within the signing procedure, i.e., in the values ( R , c , s ) exchanged between D and S , as per Theorems 1 and 2. A subliminal channel can thus still exist in the unblinded signature ( c , s ) or even in the signed message m itself. While we do not have any solution that would not require modifying the schemes, we argue that, according to current knowledge (e.g., [27]), rejection sampling is in fact the only way to abuse the channel in the signature ( c , s ), as both of its random elements result from applying a one-way function. The anamorphic channel in m can exist only in use cases where the signed message is a random value chosen by the device (e.g., a random token as in Privacy Pass [3]); it can be stopped only by designing the implementation so that the device controls no randomness in the messages. With all of this in mind, watchdogs seem to be the optimal solution, especially given their transparency: there is no need to modify existing blind-signature implementations.
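To make the residual rejection-sampling channel concrete (cf. Figure 13), consider the following hypothetical sketch. The signature is stood in for by a random nonce, and HMAC-SHA256 together with the byte layout are our own illustrative choices, not part of any scheme discussed above; only ID, K_A, and k follow the paper's notation:

```python
# Illustration of the rejection-sampling channel that the watchdog cannot
# close: the device resamples (random-looking) signatures until k bits of
# a keyed hash of the signature equal the k identifier bits to be leaked.
import hmac, hashlib, secrets

K_A = b"shared-adversary-key"     # secret key shared between device and adversary
k = 3                             # number of bits leaked per signature

def leak_bits(sig: bytes) -> int:
    """k pseudorandom bits that the adversary can recompute from sig."""
    digest = hmac.new(K_A, sig, hashlib.sha256).digest()
    return digest[0] >> (8 - k)   # top k bits of the first byte

def sign_leaking(id_bits: int) -> bytes:
    """Resample until the signature encodes id_bits (~2^k attempts)."""
    while True:
        sig = secrets.token_bytes(32)     # stands in for a fresh signature
        if leak_bits(sig) == id_bits:
            return sig

sig = sign_leaking(0b101)
assert leak_bits(sig) == 0b101    # the adversary recovers the k bits
```

Since each output is a perfectly ordinary signature, no per-signature inspection can detect the loop; only the (hidden) signing cost of roughly 2^k attempts betrays it.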
For future work, a similar analysis should be provided for other blind-signature schemes, e.g., blind RSA signatures, privately verifiable tokens based on verifiable oblivious pseudorandom functions (used in Privacy Pass), and post-quantum schemes. What is more, any work that improves rejection sampling will also make it a more practical tool for breaking blind-signature schemes. Currently, the attack requires a large number of signatures (e.g., 50–100 signatures to establish a 256-bit Diffie–Hellman anamorphic key ak ). Reducing this number to, say, 10 signatures could make the attack much more practical.
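The trade-off behind these estimates can be made explicit: with k bits leaked per signature, establishing a 256-bit key takes ⌈256/k⌉ signatures, but each signature costs about 2^k signing attempts in the rejection-sampling loop (the parameter k follows Figure 13; the rest is back-of-the-envelope arithmetic, not a result from the paper):

```python
# Cost of leaking a 256-bit anamorphic key via rejection sampling:
# k bits per signature, at roughly 2^k resampling attempts per signature.
import math

for k in (3, 5, 8, 26):
    sigs = math.ceil(256 / k)
    print(f"k={k:2d} bits/signature: {sigs:3d} signatures, ~2^{k} attempts each")
```

With k = 3 one lands at 86 signatures, consistent with the 50–100 range above; pushing down to 10 signatures would force k ≈ 26 and hence about 2^26 signing attempts per signature, which is why improvements to rejection sampling directly improve the attack.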

Author Contributions

Conceptualization, M.K. and O.S.; methodology, M.K. and O.S.; validation, M.K. and O.S.; formal analysis, M.K. and O.S.; investigation, M.K. and O.S.; writing—original draft preparation, M.K. and O.S.; writing—review and editing, M.K. and O.S.; visualization, M.K. and O.S.; supervision, M.K.; project administration, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PET: Privacy-Enhancing Technology
RFC: Request For Comments
RSA: Rivest–Shamir–Adleman (cryptosystem)
VPN: Virtual Private Network
EDIW: European Digital Identity Wallet
GDPR: General Data Protection Regulation
DH: Diffie–Hellman (key exchange protocol)
DDH: Decisional Diffie–Hellman (assumption)
SSI: Self-Sovereign Identity
PRNG: Pseudorandom Number Generator
FDH: Full Domain Hash
KDF: Key Derivation Function
ROS: Random inhomogeneities in an Overdetermined Solvable system of linear equations (problem)
Proc.: Procedure

References

  1. Chaum, D. Blind Signatures for Untraceable Payments. In Proceedings of the Advances in Cryptology—CRYPTO’82, Santa Barbara, CA, USA, 23–25 August 1982; Chaum, D., Rivest, R.L., Sherman, A.T., Eds.; Plenum Press: New York, NY, USA, 1983; pp. 199–203. [Google Scholar] [CrossRef]
  2. Baldimtsi, F.; Lysyanskaya, A. Anonymous credentials light. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, CCS ’13, Berlin, Germany, 4–8 November 2013; Sadeghi, A.-R., Gligor, V.D., Yung, M., Eds.; ACM: New York, NY, USA, 2013; pp. 1087–1098. [Google Scholar] [CrossRef]
  3. Davidson, A.; Goldberg, I.; Sullivan, N.; Tankersley, G.; Valsorda, F.; Holloway, R. Privacy Pass: Bypassing Internet Challenges Anonymously. Proc. Priv. Enhancing Technol. 2018, 2018, 164–180. [Google Scholar] [CrossRef]
  4. Google. How the VPN by Google One Works | Google One. 2024. Available online: https://web.archive.org/web/20231226211122/https://one.google.com/about/vpn/howitworks (accessed on 3 January 2024).
  5. Apple. iCloud Private Relay Overview; Technical Report; Apple: Cupertino, CA, USA, 2021; Available online: https://www.apple.com/icloud/docs/iCloud_Private_Relay_Overview_Dec2021.pdf (accessed on 20 January 2025).
  6. Schmitt, P.; Raghavan, B. Pretty Good Phone Privacy. In Proceedings of the 30th USENIX Security Symposium (USENIX Security 21), Online, 11–13 August 2021; Michael, D., Bailey, M.D., Greenstadt, R., Eds.; pp. 1737–1754. Available online: https://www.usenix.org/conference/usenixsecurity21/presentation/schmitt (accessed on 20 January 2025).
  7. European Parliament and the Council. Regulation (EU) 2024/1183 of the European Parliament and of the Council. 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1183/oj (accessed on 4 March 2025).
  8. Baum, C.; Blazy, O.; Camenisch, J.; Hoepman, J.H.; Lee, E.; Lehmann, A.; Lysyanskaya, A.; Mayrhofer, R.; Montgomery, H.; Nguyen, N.K.; et al. Cryptographers’ Feedback on the EU Digital Identity’s ARF. 2024. Available online: https://github.com/eu-digital-identity-wallet/eudi-doc-architecture-and-reference-framework/discussions/211 (accessed on 20 January 2025).
  9. European Parliament and the Council. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA Relevance). 2016. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng (accessed on 20 January 2025).
  10. Apple; Google. Exposure Notification Privacy-Preserving Analytics (ENPA) White Paper. 2021. Available online: https://covid19-static.cdn-apple.com/applications/covid19/current/static/contact-tracing/pdf/ENPA_White_Paper.pdf (accessed on 20 January 2025).
  11. Nemec, M.; Sýs, M.; Svenda, P.; Klinec, D.; Matyas, V. The Return of Coppersmith’s Attack: Practical Factorization of Widely Used RSA Moduli. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, Dallas, TX, USA, 30 October–3 November 2017; Thuraisingham, B., Evans, D., Malkin, T., Xu, D., Eds.; ACM: New York, NY, USA, 2017; pp. 1631–1648. [Google Scholar] [CrossRef]
  12. Simmons, G.J. The Prisoners’ Problem and the Subliminal Channel. In Proceedings of the Advances in Cryptology, Proceedings of CRYPTO ’83, Santa Barbara, CA, USA, 21–24 August 1983; Chaum, D., Ed.; Plenum Press: New York, NY, USA, 1983; pp. 51–67. [Google Scholar] [CrossRef]
  13. Young, A.L.; Yung, M. Malicious Cryptography—Exposing Cryptovirology; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  14. Persiano, G.; Phan, D.H.; Yung, M. Anamorphic Encryption: Private Communication Against a Dictator. In Proceedings of the Advances in Cryptology—EUROCRYPT 2022, Trondheim, Norway, 30 May–3 June 2022; Proceedings, Part II. Dunkelman, O., Dziembowski, S., Eds.; Springer: Cham, Switzerland, 2022. Lecture Notes in Computer Science. Volume 13276, pp. 34–63. [Google Scholar] [CrossRef]
  15. Kutyłowski, M.; Lauks-Dutka, A.; Kubiak, P.; Zawada, M. FIDO2 Facing Kleptographic Threats By-Design. Appl. Sci. 2024, 14, 11371. [Google Scholar] [CrossRef]
  16. Kubiak, P.; Kutyłowski, M. Supervised Usage of Signature Creation Devices. In Proceedings of the Information Security and Cryptology INSCRYPT’2013, Guangzhou, China, 27–30 November 2013; Lin, D., Xu, S., Yung, M., Eds.; Springer: Cham, Switzerland, 2013. Lecture Notes in Computer Science. Volume 8567, pp. 132–149. [Google Scholar] [CrossRef]
  17. Chaum, D.; Pedersen, T.P. Wallet Databases with Observers. In Proceedings of the Advances in Cryptology, Proceedings of CRYPTO ’92, Santa Barbara, CA, USA, 16–20 August 1992; Brickell, E.F., Ed.; Springer: Berlin/Heidelberg, Germany, 1992. Lecture Notes in Computer Science. Volume 740, pp. 89–105. [Google Scholar] [CrossRef]
  18. Schnorr, C.P. Efficient Signature Generation by Smart Cards. J. Cryptol. 1991, 4, 161–174. [Google Scholar] [CrossRef]
  19. Tessaro, S.; Zhu, C.; Allen, P.G. Short Pairing-Free Blind Signatures with Exponential Security. In Proceedings of the Advances in Cryptology—EUROCRYPT 2022, Trondheim, Norway, 30 May–3 June 2022; Proceedings, Part II. Dunkelman, O., Dziembowski, S., Eds.; Springer: Cham, Switzerland, 2022. Lecture Notes in Computer Science. Volume 13276, pp. 782–811. [Google Scholar] [CrossRef]
  20. Chairattana-Apirom, R.; Tessaro, S.; Zhu, C. Pairing-Free Blind Signatures from CDH Assumptions. In Proceedings of the Advances in Cryptology—CRYPTO 2024, Santa Barbara, CA, USA, 18–22 August 2024; Proceedings, Part I. Stebila, D., Ed.; Springer: Cham, Switzerland, 2024. Lecture Notes in Computer Science. Volume 14920, pp. 174–209. [Google Scholar] [CrossRef]
  21. Crites, E.C.; Komlo, C.; Maller, M.; Tessaro, S.; Zhu, C. Snowblind: A Threshold Blind Signature in Pairing-Free Groups. In Proceedings of the Advances in Cryptology—CRYPTO 2023, Santa Barbara, CA, USA, 20–24 August 2023; Proceedings, Part I. Handschuh, H., Lysyanskaya, A., Eds.; Springer: Cham, Switzerland, 2023. Lecture Notes in Computer Science. Volume 14081, pp. 710–742. [Google Scholar] [CrossRef]
  22. Juels, A.; Luby, M.; Ostrovsky, R. Security of blind digital signatures. In Proceedings of the Advances in Cryptology—CRYPTO ’97, Santa Barbara, CA, USA, 17–21 August 1997; Kaliski, B.S., Ed.; Springer: Berlin/Heidelberg, Germany, 1997. Lecture Notes in Computer Science. Volume 1294, pp. 150–164. [Google Scholar] [CrossRef]
  23. Pointcheval, D.; Stern, J. Provably secure blind signature schemes. In Proceedings of the Advances in Cryptology—ASIACRYPT ’96, Kyongju, Republic of Korea, 3–7 November 1996; Kim, K., Matsumoto, T., Eds.; Springer: Berlin/Heidelberg, Germany, 1996. Lecture Notes in Computer Science. Volume 1163, pp. 252–265. [Google Scholar] [CrossRef]
  24. Hanzlik, L. Non-interactive Blind Signatures for Random Messages. In Proceedings of the Advances in Cryptology—EUROCRYPT 2023, Lyon, France, 23–27 April 2023; Proceedings, Part V. Hazay, C., Stam, M., Eds.; Springer: Cham, Switzerland, 2023. Lecture Notes in Computer Science. Volume 14008, pp. 722–752. [Google Scholar] [CrossRef]
  25. Baldimtsi, F.; Cheng, J.; Goyal, R.; Yadav, A. Non-Interactive Blind Signatures: Post-Quantum and Stronger Security. In Proceedings of the Advances in Cryptology—ASIACRYPT 2024, Kolkata, India, 9–13 December 2024; Proceedings, Part II. Chung, K.M., Sasaki, Y., Eds.; Springer: Singapore, 2025. Lecture Notes in Computer Science. Volume 15485, pp. 70–104. [Google Scholar] [CrossRef]
  26. Kastner, J.; Loss, J.; Xu, J. The Abe-Okamoto Partially Blind Signature Scheme Revisited. In Proceedings of the Advances in Cryptology—ASIACRYPT 2022, Taipei, Taiwan, 5–9 December 2022; Proceedings, Part IV. Agrawal, S., Lin, D., Eds.; Springer: Cham, Switzerland, 2022. Lecture Notes in Computer Science. Volume 13794, pp. 279–309. [Google Scholar] [CrossRef]
  27. Kutyłowski, M.; Persiano, G.; Phan, D.H.; Yung, M.; Zawada, M. Anamorphic Signatures: Secrecy from a Dictator Who Only Permits Authentication! In Proceedings of the Advances in Cryptology—CRYPTO 2023, Santa Barbara, CA, USA, 20–24 August 2023; Handschuh, H., Lysyanskaya, A., Eds.; Springer: Cham, Switzerland, 2023. Lecture Notes in Computer Science. Volume 14082, pp. 759–790. [Google Scholar] [CrossRef]
  28. Russell, A.; Tang, Q.; Yung, M.; Zhou, H.S. Generic Semantic Security against a Kleptographic Adversary. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, Dallas, TX, USA, 30 October–3 November 2017; Thuraisingham, B., Evans, D., Malkin, T., Xu, D., Eds.; ACM: New York, NY, USA, 2017; pp. 907–922. [Google Scholar] [CrossRef]
  29. Bemmann, P.; Chen, R.; Jager, T. Subversion-Resilient Public Key Encryption with Practical Watchdogs. In Proceedings of the Public-Key Cryptography—PKC 2021, Virtual, 10–13 May 2021; Garay, J.A., Ed.; Springer: Cham, Switzerland, 2021. Lecture Notes in Computer Science. Volume 12710, pp. 627–658. [Google Scholar] [CrossRef]
  30. Schnorr, C.P. Security of Blind Discrete Log Signatures against Interactive Attacks. In Proceedings of the Information and Communications Security, Xi’an, China, 13–16 November 2001; Qing, S., Okamoto, T., Zhou, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2001. Lecture Notes in Computer Science. Volume 2229, pp. 1–12. [Google Scholar] [CrossRef]
  31. Benhamouda, F.; Lepoint, T.; Loss, J.; Orrù, M.; Raykova, M. On the (in)Security of ROS. In Proceedings of the Advances in Cryptology—EUROCRYPT 2021, Zagreb, Croatia, 17–21 October 2021; Proceedings, Part I. Canteaut, A., Standaert, F., Eds.; Springer: Cham, Switzerland, 2021. Lecture Notes in Computer Science. Volume 12696, pp. 33–53. [Google Scholar] [CrossRef]
  32. Bernstein, D.J.; Hamburg, M.; Krasnova, A.; Lange, T. Elligator: Elliptic-Curve Points Indistinguishable from Uniform Random Strings. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, CCS ’13, Berlin, Germany, 4–8 November 2013; Sadeghi, A.-R., Gligor, V.D., Yung, M., Eds.; ACM: New York, NY, USA, 2013; pp. 967–980. [Google Scholar] [CrossRef]
  33. Goyal, V.; O’Neill, A.; Rao, V. Correlated-Input Secure Hash Functions. In Proceedings of the Theory of Cryptography—8th Theory of Cryptography Conference, TCC 2011, Providence, RI, USA, 28–30 March 2011; Proceedings. Ishai, Y., Ed.; Springer: Berlin/Heidelberg, Germany, 2011. Lecture Notes in Computer Science. Volume 6597, pp. 182–200. [Google Scholar] [CrossRef]
  34. Crites, E.C.; Komlo, C.; Maller, M.; Tessaro, S.; Zhu, C. Snowblind: A Threshold Blind Signature in Pairing-Free Groups. IACR Cryptol. ePrint Arch. 2023, 14081, 1228. [Google Scholar]
  35. Chairattana-Apirom, R.; Tessaro, S.; Zhu, C. Pairing-Free Blind Signatures from CDH Assumptions. IACR Cryptol. ePrint Arch. 2023, 14920, 1780. [Google Scholar]
Figure 1. User authentication before the creation of a blind signature.
Figure 2. Blind signing process.
Figure 3. Signature verification.
Figure 4. Auditor inspecting S and D . Boxes with “???” represent unknown algorithms that the parties might run.
Figure 5. Blind Schnorr signature-creation procedure SignS , SignU and verification procedure Verify .
Figure 6. Protocol transmitting an ephemeral share R for DH key exchange from S to D .
Figure 7. Protocol transmitting an ephemeral share Y for DH key exchange from D to S .
Figure 8. Protocol for creating a linkable signature and unblinding it.
Figure 9. Unblinding the message signed during blind-signature creation.
Figure 10. Protocol for creating a linkable signature based on hidden public key R 0 . In lines 3 and 4, the operation encode maps the elements of the group to Z q .
Figure 11. Linking a signature ( c , s ) with a transcript ( R , c , s ) in case of the algorithm based on the public key R 0 . Dec r 0 denotes ElGamal decryption with private key r 0 .
Figure 12. Schnorr blind-signature creation procedure with a watchdog SignS , Watch , SignU .
Figure 13. Partial identity disclosure with rejection sampling: I D is a k-bit identifier of D , K A is a secret key shared with an adversary, k is a (very) small integer.
Figure 14. Tessaro–Zhu scheme BS1 from [19]: signature creation procedure SignS , SignU and verification procedure Verify .
Figure 15. Tessaro–Zhu blind-signature creation procedure (BS1 from [19]) with a watchdog SignS , Watch , SignU .