All papers (22128 results)

Last updated:  2024-05-31
Ascon-Keccak AEAD Algorithm
Stephan Müller
The Ascon specification defines, among others, an encryption scheme offering authenticated encryption with associated data (AEAD) which is based on a duplex mode of a sponge. With that, it is the first such algorithm selected and about to be standardized by NIST. The sponge size is comparatively small, 320 bits, as expected for lightweight cryptography, which limits the strength of the defined AEAD algorithm to 128 bits. Although the definition of the Ascon AEAD algorithm integrates with the associated sponge, it is not mathematically bound to exactly this sponge function. Thus, the Ascon AEAD specification can be used with a different sponge and still operate as defined by the Ascon authors. This specification defines the Ascon-Keccak AEAD algorithm, which replaces the Ascon sponge with the Keccak sponge while leaving the Ascon AEAD algorithm unchanged. The selected parameters for Ascon-Keccak AEAD offer two algorithm strengths: Ascon-Keccak 256 with a classic security strength of 256 bits and a quantum security strength of 128 bits, and Ascon-Keccak 512 with a classic security strength of 512 bits and a quantum security strength of 256 bits. The selected parameters for Ascon-Keccak 256 offer significantly higher performance on 64-bit architectures than Ascon-128 and Ascon-128a. The performance of Ascon-Keccak 512 is on par with Ascon-128. Yet, with the Keccak sponge size of 1600 bits, Ascon-Keccak can no longer be considered a lightweight cryptographic algorithm. A reference implementation of the algorithm is provided as referenced in the document.
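To illustrate the separation this abstract relies on (the AEAD mode treats the underlying permutation as a black box), here is a minimal, hypothetical duplex-style sketch in Python. The state transformation, padding, and tag handling below are placeholders for exposition only and do not follow the actual Ascon or Keccak specifications:

```python
import hashlib

def toy_transform(state: bytes) -> bytes:
    # Placeholder state transformation (a truncated hash, not a real permutation);
    # stands in for the Ascon or Keccak permutation in this sketch.
    return hashlib.sha3_256(state).digest()[:len(state)]

def duplex_aead_encrypt(perm, key: bytes, nonce: bytes, ad: bytes, msg: bytes,
                        rate: int = 8, width: int = 32):
    """Schematic duplex-mode AEAD: initialize, absorb AD, encrypt, finalize."""
    state = perm((key + nonce).ljust(width, b"\x00")[:width])
    for i in range(0, len(ad), rate):            # absorb associated data
        block = ad[i:i + rate].ljust(rate, b"\x00")
        state = perm(bytes(s ^ b for s, b in zip(state[:rate], block)) + state[rate:])
    ct = b""
    for i in range(0, len(msg), rate):           # encrypt message blocks
        block = msg[i:i + rate]
        ct_block = bytes(m ^ k for m, k in zip(block, state[:len(block)]))
        ct += ct_block
        state = perm(ct_block.ljust(rate, b"\x00") + state[rate:])
    tag = perm(bytes(s ^ k for s, k in zip(state, key.ljust(width, b"\x00"))))[:16]
    return ct, tag

# The AEAD layer never inspects the permutation's internals: swapping in a
# 320-bit (Ascon) or 1600-bit (Keccak) permutation only changes `perm`,
# `rate`, and `width` -- which is the observation the abstract builds on.
ct, tag = duplex_aead_encrypt(toy_transform, b"k" * 16, b"n" * 16,
                              b"header", b"hello world")
print(ct.hex(), tag.hex())
```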
Last updated:  2024-05-31
Speeding up Preimage and Key-Recovery Attacks with Highly Biased Differential-Linear Approximations
Zhongfeng Niu, Kai Hu, Siwei Sun, Zhiyu Zhang, and Meiqin Wang
We present a framework for speeding up the search for preimages of candidate one-way functions based on highly biased differential-linear distinguishers. It is naturally applicable to preimage attacks on hash functions, and a variant of this framework applied to keyed functions leads to accelerated key-recovery attacks. Interestingly, our technique is able to exploit related-key differential-linear distinguishers in the single-key model without querying the target encryption oracle with unknown but related keys. This is in essence similar to how key search is sped up using the well-known complementation property of DES, which calls for caution from designers building primitives meant to be secure in the single-key setting without a thorough cryptanalysis in the related-key model. We apply the method to the sponge-based hash function Ascon-HASH, the XOFs XOEsch and Ascon-XOF, and the AEAD scheme Schwaemm, among others, obtaining accelerated preimage or key-recovery attacks. Note that all the differential-linear distinguishers employed in this work are highly biased and thus can be experimentally verified.
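As a reminder of the DES analogy invoked above: the complementation property says $E_{\overline{K}}(\overline{P}) = \overline{E_K(P)}$, so one trial encryption tests two keys. The sketch below demonstrates this with a toy Feistel cipher (emphatically not DES; any Feistel network whose round keys enter only via XOR into the round-function input inherits the property):

```python
MASK = 0xFFFF  # 16-bit halves in the toy cipher

def f(x, k):
    # Round function applied to (half XOR round key); the complementation
    # property only needs the key to enter via this XOR, as in DES.
    x = (x ^ k) & MASK
    return ((x * 0x9E37) ^ (x >> 3)) & MASK

def encrypt(key, pt, rounds=8):
    """Toy 32-bit Feistel cipher (NOT DES) with DES-like round-key placement."""
    left, right = (pt >> 16) & MASK, pt & MASK
    for _ in range(rounds):
        left, right = right, left ^ f(right, key)
    return (left << 16) | right

# Complementation property: E_{~K}(~P) = ~E_K(P)
K, P = 0xFFFE, 0xDEADBEEF
assert encrypt(K ^ MASK, P ^ 0xFFFFFFFF) == encrypt(K, P) ^ 0xFFFFFFFF

# 2-for-1 key search: given (P, C1 = E_K(P)) and (~P, C2 = E_K(~P)),
# each trial encryption E_T(P) tests both T and ~T.
C1, C2 = encrypt(K, P), encrypt(K, P ^ 0xFFFFFFFF)
for T in range(MASK + 1):
    X = encrypt(T, P)
    if X == C1:                      # T itself matches
        print("candidate key:", hex(T)); break
    if X ^ 0xFFFFFFFF == C2:         # then E_{~T}(~P) = ~X = C2, so ~T matches
        print("candidate key:", hex(T ^ MASK)); break
# Here the search stops already at T = 1 because ~T = 0xFFFE = K:
# the keyspace is effectively halved, without any related-key queries.
```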
Last updated:  2024-05-31
Indistinguishability Obfuscation from Bilinear Maps and LPN Variants
Seyoon Ragavan, Neekon Vafa, and Vinod Vaikuntanathan
We construct an indistinguishability obfuscation (IO) scheme from the sub-exponential hardness of the decisional linear problem on bilinear groups together with two variants of the learning parity with noise (LPN) problem, namely large-field LPN and (binary-field) sparse LPN. This removes the need to assume the existence of pseudorandom generators (PRGs) in $\mathsf{NC}^0$ with polynomial stretch from the state-of-the-art construction of IO (Jain, Lin, and Sahai, EUROCRYPT 2022). As an intermediate step in our construction, we abstract away a notion of structured-seed polynomial-stretch PRGs in $\mathsf{NC}^0$ which suffices for IO and is implied by both sparse LPN and the existence of polynomial-stretch PRGs in $\mathsf{NC}^0$. As immediate applications, from the sub-exponential hardness of the decisional linear assumption on bilinear groups, large-field LPN, and sparse LPN, we get alternative constructions of (a) fully homomorphic encryption (FHE) without lattices or circular security assumptions (Canetti, Lin, Tessaro, and Vaikuntanathan, TCC 2015), and (b) perfect zero-knowledge adaptively-sound succinct non-interactive arguments (SNARGs) for NP (Waters and Wu, STOC 2024).
Last updated:  2024-05-30
Securing the Future of GenAI: Policy and Technology
Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, and Matthew Turek
The rise of Generative AI (GenAI) brings about transformative potential across sectors, but its dual-use nature also amplifies risks. Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety. China, the United States (US), and the European Union (EU) are at the forefront with initiatives like the Management of Algorithmic Recommendations, the Executive Order, and the AI Act, respectively. However, the rapid evolution of GenAI capabilities often outpaces the development of comprehensive safety measures, creating a gap between regulatory needs and technical advancements. A workshop co-organized by Google, the University of Wisconsin-Madison (UW-Madison), and Stanford University aimed to bridge this gap between GenAI policy and technology. The diverse stakeholders of the GenAI space, from the public and governments to academia and industry, make any safety measures under consideration more complex, as both technical feasibility and regulatory guidance must be realized. This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress? How can technology evolve to meet regulatory standards? The interplay between legislation and technology is a vast topic, and we do not claim that this paper is a comprehensive treatment of it. Rather, this paper captures the findings of the workshop and, we hope, can guide further discussion on the topic.
Last updated:  2024-05-30
Simulation-Extractable KZG Polynomial Commitments and Applications to HyperPlonk
Benoit Libert
HyperPlonk is a recent SNARK proposal (Eurocrypt'23) that features a linear-time prover and supports custom gates of larger degree than Plonk. For the time being, its instantiations are only proven to be knowledge-sound (meaning that soundness is only guaranteed when the prover runs in isolation) while many applications motivate the stronger notion of simulation-extractability (SE). Unfortunately, the most efficient SE compilers are not immediately applicable to multivariate polynomial interactive oracle proofs. To address this problem, we provide an instantiation of HyperPlonk for which we can prove simulation-extractability in a strong sense. As a crucial building block, we describe KZG-based commitments to multivariate polynomials that also provide simulation-extractability while remaining as efficient as malleable ones. Our proofs stand in the combined algebraic group and random oracle model and ensure straight-line extractability (i.e., without rewinding).
Last updated:  2024-05-30
Practical q-IND-CPA-D-Secure Approximate Homomorphic Encryption
Jean-Philippe Bossuat, Anamaria Costache, Christian Mouchet, Lea Nürnberger, and Juan Ramón Troncoso-Pastoriza
At Eurocrypt $2021$, Li and Micciancio demonstrated that the IND-CPA notion of security is not sufficient to cover the passive security of approximate homomorphic encryption schemes, by outlining a key recovery attack against the CKKS scheme (Cheon, Kim, Kim, Song, Asiacrypt $2017$). They proposed the notion of $q$-IND-CPA-D security, which allows an adversary to make $q$ calls to a restricted decryption oracle. Li and Micciancio left achieving $q$-IND-CPA-D security as an open problem, but proposed two approaches: noise flooding and an exact version of CKKS. The first approach was addressed by Li, Micciancio, Schultz and Sorrell (Crypto 2022), but leads to substantial efficiency loss. In this work, we look at the second approach. We define $(\delta, r)$-exact CKKS, a version of CKKS that returns exact results on all except the $r$ least significant bits with (high) probability $\delta$, based on bounds on the noise. We prove that the advantage of a $q$-IND-CPA-D attacker against $(\delta, r)$-exact CKKS is determined by the failure probability of those bounds. We conduct a tight average-case and implementation-specific noise analysis of all elementary operations in CKKS, as implemented in the Lattigo library, including the bootstrapping operation. We propose bounds whose failure probability is small enough for the advantage of a $q$-IND-CPA-D attacker against $(\delta,r)$-exact CKKS to become smaller than $2^{-128}$, while the parameter sets needed remain practical. We furthermore present an estimator tool that combines the bounds on basic operations and returns tight noise estimates, even for large circuits. We validate our bounds by showcasing experimental results on different iterative algorithms, homomorphic encoding, decoding and bootstrapping.
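To illustrate the failure-probability logic in this abstract, here is a back-of-the-envelope sketch under the simplifying (hypothetical) assumption that the final noise behaves like a zero-mean Gaussian with known standard deviation: a result is exact in all but the $r$ least significant bits whenever the noise magnitude stays below $2^{r-1}$, so the bound fails with the corresponding Gaussian tail probability.

```python
import math

def failure_probability(sigma: float, r: int) -> float:
    """P(|e| >= 2^(r-1)) for e ~ N(0, sigma^2): the chance the r-LSB bound fails."""
    threshold = 2.0 ** (r - 1)
    return math.erfc(threshold / (sigma * math.sqrt(2.0)))

sigma = 2.0 ** 10   # hypothetical noise standard deviation after evaluating a circuit
for r in (12, 14, 16, 18):
    p = failure_probability(sigma, r)
    if p > 0.0:
        print(f"r = {r:2d}: failure probability ~ 2^{math.log2(p):7.1f}")
    else:
        print(f"r = {r:2d}: failure probability below double-precision range")
```

Sacrificing a few more low-order bits (larger $r$) drives the failure probability, and hence the $q$-IND-CPA-D advantage bound, down dramatically; the paper's contribution is a tight, implementation-specific analysis of the actual noise in Lattigo rather than this idealized Gaussian model.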
Last updated:  2024-05-30
Breaking Indistinguishability with Transfer Learning: A First Look at SPECK32/64 Lightweight Block Ciphers
Jimmy Dani, Kalyan Nakka, and Nitesh Saxena
In this research, we introduce MIND-Crypt, a novel attack framework that uses deep learning (DL) and transfer learning (TL) to challenge the indistinguishability of block ciphers, specifically the SPECK32/64 encryption algorithm in cipher block chaining (CBC) mode, against known-plaintext attacks (KPA). Our methodology includes training a DL model with ciphertexts of two messages encrypted using the same key. The selected messages have the same byte-length and differ by only one bit at the binary level. This DL model employs a residual network architecture. For the TL, we use the trained DL model as a feature extractor, and these features are then used to train a shallow machine learning model, such as XGBoost. This dual strategy aims to distinguish ciphertexts of two encrypted messages, addressing traditional cryptanalysis challenges. Our findings demonstrate that the deep learning model achieves an accuracy of approximately 99% under consistent cryptographic conditions (same key or rounds) with the SPECK32/64 cipher. However, performance degrades to random guessing levels (50%) when tested with ciphertext generated from different keys or different encryption rounds of SPECK32/64. To enhance the results, the DL model requires retraining with different keys or encryption rounds using larger datasets ($10^{7}$ samples). To overcome this limitation, we implement TL, achieving an accuracy of about 53% with just 10,000 samples, which is better than random guessing. Further training with 580,000 samples increases accuracy to nearly 99%, showing a reduction in data requirements of over 94%. This shows that an attacker can utilize machine learning models to break indistinguishability by accessing pairs of plaintexts and their corresponding ciphertexts encrypted with the same key, without directly interacting with the communicating parties.
Last updated:  2024-05-30
On the parallelization of square-root Vélu's formulas
Jorge Chávez-Saab, Odalis Ortega, and Amalia Pizarro-Madariaga
A primary challenge in isogeny-based cryptography lies in the substantial computational cost associated with computing and evaluating prime-degree isogenies. This computation traditionally relied on Vélu's formulas, an approach with time complexity linear in the degree, which was later improved to square-root complexity by Bernstein, De Feo, Leroux, and Smith. The improved square-root Vélu's formulas exhibit a degree of parallelizability that has not been exploited in major implementations. In this study, we introduce a theoretical framework for parallelizing isogeny computations and provide a proof-of-concept implementation in C with OpenMP. While the parallelization effectiveness exhibits diminishing returns with the number of cores, we still obtain strong results when using a small number of cores. Concretely, our implementation shows that for large degrees it is easy to achieve speedup factors of up to $1.74$, $2.54$, and $3.44$ for two, four, and eight cores, respectively.
Last updated:  2024-05-30
Constant-Round Arguments for Batch-Verification and Bounded-Space Computations from One-Way Functions
Noga Amit and Guy N. Rothblum
What are the minimal cryptographic assumptions that suffice for constructing efficient argument systems, and for which tasks? Recently, Amit and Rothblum [STOC 2023] showed that one-way functions suffice for constructing constant-round arguments for bounded-depth computations. In this work we ask: what other tasks have efficient argument systems based only on one-way functions? We show two positive results: First, we construct a new argument system for batch-verification of $k$ $UP$ statements ($NP$ statements with a unique witness) for witness relations that are verifiable in depth $D$. Taking $M$ to be the length of a single witness, the communication complexity is $O(\log k) \cdot (M + k \cdot D \cdot n^{\sigma})$, where $\sigma > 0$ is an arbitrarily small constant. In particular, the communication is quasi-linear in the length of a single witness, so long as ${k < M / (D \cdot n^{\sigma})}$. The number of rounds is constant and the honest prover runs in polynomial time given witnesses for all $k$ inputs' membership in the language. Our second result is a constant-round doubly-efficient argument system for languages in $P$ that are computable by bounded-space Turing machines. For this class of computations, we obtain an exponential improvement in the trade-off between the number of rounds and the (exponent of the) communication complexity, compared to known unconditionally sound protocols [Reingold, Rothblum and Rothblum, STOC 2016].
Last updated:  2024-05-30
Fast, Large Scale Dimensionality Reduction Schemes Based on CKKS
Haonan Yuan, Wenyuan Wu, and Jingwei Chen
The proliferation of artificial intelligence and big data has resulted in a surge in data demand and increased data dimensionality. This escalation has consequently heightened the costs associated with storage and processing. Concurrently, the confidential nature of data collected by various institutions, which cannot be disclosed due to personal privacy concerns, has exacerbated the challenges associated with data analysis and machine learning model training. Therefore, designing a secure and efficient high-dimensional data reduction method that supports multi-party joint participation becomes critical to solving these problems. This paper proposes a novel homomorphic encryption dimensionality reduction scheme (HE-DR) based on CKKS, which modifies the Rank-Revealing (RR) method to make it more applicable to fully homomorphic encryption, thereby achieving fast and secure dimension reduction for high-dimensional data. Compared to traditional homomorphic encryption dimensionality reduction schemes, our approach does not transmit the user’s original data to other participants in any form (ciphertext or plaintext). Moreover, our method is nearly $60$ to $200$ times faster than similar algorithms, and the communication overhead is only $1/3$ of theirs. Finally, we have shown that our proposed scheme can preserve its computational efficiency and accuracy even when dealing with high-dimensional data. As dimensionality escalates, the ratio of ciphertext to plaintext computation time plateaus at approximately 5 times, while the computational error (distance between subspaces) remains around $10^{-11}$.
Last updated:  2024-05-30
How (Not) to Simulate PLONK
Marek Sefranek
PLONK is a zk-SNARK system by Gabizon, Williamson, and Ciobotaru with proofs of constant size (0.5 KB) and sublinear verification time. Its setup is circuit-independent, supporting proofs of arbitrary statements up to a certain size bound. Although deployed in several real-world applications, PLONK's zero-knowledge property had only been argued informally. Consequently, we were able to find and fix a vulnerability in its original specification, leading to an update of PLONK in eprint version 20220629:105924. In this work, we construct a simulator for the patched version of PLONK and prove that it achieves statistical zero knowledge. Furthermore, we give an attack on the previous version of PLONK showing that it does not even satisfy the weaker notion of (statistical) witness indistinguishability.
Last updated:  2024-05-31
More Efficient Approximate $k$-wise Independent Permutations from Random Reversible Circuits via log-Sobolev Inequalities
Lucas Gretta, William He, and Angelos Pelecanos
We prove that the permutation computed by a reversible circuit with $\widetilde{O}(nk\cdot \log(1/\epsilon))$ random $3$-bit gates is $\epsilon$-approximately $k$-wise independent. Our bound improves on currently known bounds in the regime when the approximation error $\epsilon$ is not too small. We obtain our results by analyzing the log-Sobolev constants of appropriate Markov chains rather than their spectral gaps.
Last updated:  2024-05-29
Distributed Asynchronous Remote Key Generation
Mark Manulis and Hugo Nartz
Asynchronous Remote Key Generation (ARKG) is a primitive introduced by Frymann et al. at ACM CCS 2020. It enables a sender to generate a new public key $pk'$ for a receiver, ensuring that only the receiver can, at a later time, compute the corresponding private key $sk'$. These key pairs are indistinguishable from freshly generated ones and can be used in various public-key cryptosystems such as digital signatures and public-key encryption. ARKG has been explored for applications in WebAuthn credential backup and delegation, as well as for enhancing receiver privacy via stealth addresses. In this paper, we introduce distributed ARKG (dARKG), aiming to provide similar security properties in a distributed setting. Here, a sender generates $pk'$ for a group of $n$ receivers and the corresponding $sk'$ can only be computed by any sub-group of size $t\leq n$. This introduces threshold-based access protection for $sk'$, enabling, for instance, a set of proxies to jointly access a WebAuthn account or claim blockchain funds. We construct dARKG using one-round publicly verifiable asymmetric key agreement, called 1PVAKA, a new primitive formalized in this work. Unlike traditional distributed key generation protocols where users interact with one another, 1PVAKA is asynchronous and allows a third party to verify and generate a public key from users' outputs. We discuss 1PVAKA and dARKG instantiations tailored for use with bilinear groups and demonstrate practicality with an implementation and performance analysis for the BLS12-381 curve.
Last updated:  2024-05-29
PathGES: An Efficient and Secure Graph Encryption Scheme for Shortest Path Queries
Francesca Falzon, Esha Ghosh, Kenneth G. Paterson, and Roberto Tamassia
The increasing importance of graph databases and cloud storage services prompts the study of private queries on graphs. We propose PathGES, a graph encryption scheme (GES) for single-pair shortest path queries. PathGES is efficient and mitigates the state-of-the-art attack by Falzon and Paterson (2022) on the GES by Ghosh, Kamara, and Tamassia (2021), while only incurring an additional logarithmic factor in storage overhead. PathGES leverages a novel data structure that minimizes leakage and server computation. We generalize what it means for one leakage function to leak less than another by defining a relation with respect to a family of query sequences and show that our scheme provably leaks less than the GKT scheme when all queries have been issued. We complement our security proof with a cryptanalysis that demonstrates an information-theoretic gap in the size of the query reconstruction space of our scheme as compared to the GKT scheme and provide concrete examples of the gap for several graph families. Our prototype implementation of PathGES is efficient in practice for real-world social network and geographic data sets. In comparison with the GKT scheme, PathGES has on average the same response size and up to 1.5$\times$ faster round-trip query time.
Last updated:  2024-05-29
Finding Dense Submodules with Algebraic Lattice Reduction
Alexander Karenin and Elena Kirshanova
We prove an algebraic analogue of the Pataki-Tural lemma (Pataki-Tural, arXiv:0804.4014, 2008) -- the main tool in analysing the so-called overstretched regime of NTRU. Our result generalizes this lemma from Euclidean lattices to modules over any number field, enabling us to view NTRU as a rank-2 module over a cyclotomic number field with a rank-1 dense submodule generated by the NTRU secret key. For Euclidean lattices, this overstretched regime occurs for large moduli $q$ and makes it possible to detect a dense sublattice in NTRU lattices, leading to faster NTRU key recovery. We formulate an algebraic version of this event, the so-called Dense Submodule Discovery (DSD) event, and heuristically predict under which conditions this event happens. For that, we formulate an algebraic version of the Geometric Series Assumption -- a heuristic tool that describes the behaviour of algebraic lattice reduction algorithms. We verify this assumption by implementing an algebraic LLL -- an analogue of classical LLL lattice reduction that operates at the module level. Our experiments verify the introduced heuristic, enabling us to predict the algebraic DSD event.
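For readers unfamiliar with the heuristic, the classical (Euclidean) Geometric Series Assumption states that after lattice reduction the Gram-Schmidt norms of the basis vectors decay geometrically,
$$\|\mathbf{b}_i^*\| \approx \alpha^{\,i-1}\cdot\|\mathbf{b}_1\|, \qquad 0<\alpha<1,$$
where the constant $\alpha$ depends on the reduction algorithm; the paper formulates an analogue of this statement at the module level and tests it against its algebraic LLL implementation.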
Last updated:  2024-05-29
Formally verifying Kyber Episode V: Machine-checked IND-CCA security and correctness of ML-KEM in EasyCrypt
José Bacelar Almeida, Santiago Arranz Olmos, Manuel Barbosa, Gilles Barthe, François Dupressoir, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Léchenet, Cameron Low, Tiago Oliveira, Hugo Pacheco, Miguel Quaresma, Peter Schwabe, and Pierre-Yves Strub
We present a formally verified proof of the correctness and IND-CCA security of ML-KEM, the Kyber-based Key Encapsulation Mechanism (KEM) undergoing standardization by NIST. The proof is machine-checked in EasyCrypt and it includes: 1) A formalization of the correctness (decryption failure probability) and IND-CPA security of the Kyber base public-key encryption scheme, following Bos et al. at Euro S&P 2018; 2) A formalization of the relevant variant of the Fujisaki-Okamoto transform in the Random Oracle Model (ROM), which follows closely (but not exactly) Hofheinz, Hövelmanns and Kiltz at TCC 2017; 3) A proof that the IND-CCA security of the ML-KEM specification and its correctness as a KEM follows from the previous results; 4) Two formally verified implementations of ML-KEM written in Jasmin that are provably constant-time, functionally equivalent to the ML-KEM specification and, for this reason, inherit the provable security guarantees established in the previous points. The top-level theorems give self-contained concrete bounds for the correctness and security of ML-KEM down to (a variant of) Module-LWE. We discuss how they are built modularly by leveraging various EasyCrypt features.
Last updated:  2024-05-31
Computation Efficient Structure Aware PSI From Incremental Function Secret Sharing
Gayathri Garimella, Benjamin Goff, and Peihan Miao
Structure-Aware Private Set Intersection (sa-PSI), recently introduced by Garimella et al. (Crypto'22), is a PSI variant where Alice's input set $S_A$ has a publicly known structure (for example, interval, ball or union of balls) and Bob's input $S_B$ is an unstructured set of elements. Prior work achieves sa-PSI where the communication cost only scales with the description size of $S_A$ instead of the set cardinality. However, the computation cost remains linear in the cardinality of $S_A$, which could be prohibitively large. In this work, we present a new semi-honest sa-PSI framework where both computation and communication costs only scale with the description size of $S_A$. Our main building block is a new primitive that we introduce called Incremental Boolean Function Secret Sharing (ibFSS), which is a generalization of FSS that additionally allows for evaluation on input prefixes. We formalize definitions and construct a weak ibFSS for a $d$-dimensional ball with $\ell_\infty$ norm, which may be of independent interest. Independently, we improve spatial hashing techniques (from prior work) when $S_A$ is a union of $d$-dimensional balls in $(\{0,1\}^u)^d$, each of diameter $\delta$, from $O(u \cdot d \cdot (\log \delta)^d)$ to $O(\log \delta \cdot d)$ in terms of both computation and communication. Finally, we resolve the following open questions from prior work with communication and computation scaling with the description size of the structured set. - Our PSI framework can handle a union of overlapping structures, while prior work strictly requires a disjoint union. - We have a new construction that enables Bob with unstructured input $S_B$ to learn the intersection. - We extend to a richer class of functionalities like structure-aware PSI Cardinality and PSI-Sum of associated values.
Last updated:  2024-05-29
Two generalizations of almost perfect nonlinearity
Claude Carlet
Almost perfect nonlinear (in brief, APN) functions are (so-called vectorial) functions $F: \mathbb{F}_2^n\to \mathbb{F}_2^n$ playing roles in several domains of information protection, at the intersection of computer science and mathematics. Their definition comes from cryptography and is also related to coding theory. The cryptographic motivation for studying APN functions is that, when they are used as substitution boxes (S-boxes), ensuring nonlinearity in block ciphers, they contribute optimally to the resistance against differential attacks. Their study has been very active since the 1990s and has posed interesting and difficult mathematical questions that are still unanswered. Since the introduction of differential attacks, more recent types of cryptanalyses have been designed, such as integral attacks, but no notion about S-boxes has been identified that would play a similar role with respect to integral attacks. In this paper, we introduce and study two generalizations of almost perfect nonlinearity that directly extend classical characterizations of APN functions and are also related to the integral attack. The two resulting notions are significantly different (and behave differently) from differential uniformity, which is a well-known generalization of APNness; they also behave differently from each other, despite the apparent similarity between their definitions. We study the different ways to define them and, using the example of Kasami functions, how difficult they are to achieve. We prove their satisfiability, their monotonicity, and their invariance under classical equivalence relations, and we characterize them by the Walsh transform. We begin a study of the multiplicative inverse function (used as a substitution box in the Advanced Encryption Standard and other block ciphers) from the viewpoint of these two notions. In particular, we find a simple expression for the sum of the values taken by this function over affine subspaces of $\mathbb{F}_{2^n}$ that are not vector subspaces. This formula shows that, in such a case, the sum never vanishes (which is a remarkable property of the inverse function).
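For readers new to the notion: $F$ is APN exactly when every nonzero input difference $a$ sends $x \mapsto F(x+a)+F(x)$ to each output value at most twice, which can be read off the difference distribution table (DDT). Below is a minimal sketch over $\mathbb{F}_{2^3}$ using the Gold function $x \mapsto x^3$, a standard APN example; this illustrates the classical definition only, not the paper's new generalizations:

```python
def gf8_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^3) with irreducible polynomial x^3 + x + 1 (0b1011)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:       # reduce modulo x^3 + x + 1
            a ^= 0b1011
        b >>= 1
    return r

def differential_uniformity(F):
    """Max DDT entry: number of solutions x of F(x + a) + F(x) = b, over a != 0, b."""
    n = len(F)
    return max(
        sum(1 for x in range(n) if F[x ^ a] ^ F[x] == b)
        for a in range(1, n) for b in range(n)
    )

# Gold function x -> x^3 over GF(2^3): APN because gcd(1, 3) = 1.
F = [gf8_mul(x, gf8_mul(x, x)) for x in range(8)]
print("S-box:", F)                                               # a permutation of {0,...,7}
print("differential uniformity:", differential_uniformity(F))   # 2, i.e. APN
```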
Last updated:  2024-05-29
Batching-Efficient RAM using Updatable Lookup Arguments
Moumita Dutta, Chaya Ganesh, Sikhar Patranabis, Shubh Prakash, and Nitin Singh
RAM (random access memory) is an important primitive in verifiable computation. In this paper, we focus on realizing RAM with an efficient batching property, i.e., proving a batch of $m$ updates on a RAM of size $N$ while incurring a cost that is sublinear in $N$. Classical approaches based on Merkle trees or address-ordered transcripts to model RAM correctness are either concretely inefficient, or incur linear overhead in the size of the RAM. Recent works explore cryptographic accumulators based on unknown-order groups (RSA, class-groups) to model the RAM state. While recent RSA-accumulator-based approaches offer significant improvement over classical methods, they incur linear overhead in the size of the accumulated set to compute witnesses, as well as prohibitive constant overheads. We realize a batching-efficient RAM with superior asymptotic and concrete costs as compared to existing approaches. Towards this: (i) we build on recent constructions of lookup arguments to allow efficient lookups even in the presence of table updates, and (ii) we realize a variant of the sub-vector relation addressed in prior works, which we call committed index lookup. We combine the two building blocks to realize a batching-efficient RAM with sublinear dependence on the size of the RAM. Our construction incurs an amortized proving cost of $\tilde{O}(m\log m + \sqrt{mN})$ for a batch of $m$ updates on a RAM of size $N$. Our results also benefit the recent arguments for the sub-vector relation, by enabling them to remain efficient in the presence of updates to the table. We believe that this is a contribution of independent interest. We implement our solution to evaluate its concrete efficiency. Our experiments show that it offers significant improvement over existing works on batching-efficient accumulators/RAMs, with a substantially reduced resource barrier.
Last updated:  2024-05-31
Almost optimal succinct arguments for Boolean circuit on RAM
Tiancheng Xie and Tianyi Liu
The significance of succinct zero-knowledge proofs has increased considerably in recent times. However, one of the major challenges hindering the prover's efficiency arises when dealing with Boolean circuits. In particular, the conversion of each bit into a finite field element incurs a blow-up of more than 100x in terms of both memory usage and computation time. This work focuses on data-parallel Boolean circuits that contain numerous identical sub-circuits. These circuits are widely used in real-world applications, such as proving a large number of hash preimages in zkEVM and zkBridge. We develop a method for constructing succinct arguments with $2^{-\lambda}$ soundness error and $O(\omega(1)\frac{N}{\log{N}} \log{\log{N}})$ RAM operations, or $O(\frac{N}{\log{N}}\log\log N)$ finite field additions, along with a negligible number of finite field multiplications. Our approach is based on using the GKR protocol to obtain the succinct argument.
Last updated:  2024-05-28
Verifiable Secret Sharing from Symmetric Key Cryptography with Improved Optimistic Complexity
Ignacio Cascudo, Daniele Cozzo, and Emanuele Giunta
In this paper we propose verifiable secret sharing (VSS) schemes secure for any honest majority in the synchronous model, and that only use symmetric-key cryptographic tools, therefore having plausibly post-quantum security. Compared to the state-of-the-art scheme with these features (Atapoor et al., Asiacrypt '23), our main improvement lies in the complexity of the ``optimistic'' scenario where the dealer and all but a small number of receivers behave honestly in the sharing phase: in this case, the running time and download complexity (amount of information read) of each honest verifier is polylogarithmic and the total amount of information broadcast by the dealer is logarithmic; all these complexities were linear in the aforementioned work by Atapoor et al. At the same time, we preserve these complexities with respect to the previous work for the ``pessimistic'' case where the dealer or $O(n)$ receivers cheat actively. The new VSS protocol is of interest in multiparty computations where each party runs one VSS as a dealer, such as distributed key generation protocols. Our main technical handle is a distributed zero-knowledge proof of low degreeness of a polynomial, in the model of Boneh et al. (Crypto '19) where the statement (in this case the evaluations of the witness polynomial) is distributed among several verifiers, each knowing one evaluation. Using folding techniques similar to FRI (Ben-Sasson et al., ICALP '18) we construct such a proof where each verifier receives polylogarithmic information and runs in polylogarithmic time.
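The FRI-style folding mentioned at the end works by splitting a polynomial into even and odd parts, $f(x) = f_e(x^2) + x\,f_o(x^2)$, and combining them with a verifier challenge $\beta$, halving the degree each round. Here is a minimal sketch of one folding step over a toy prime field (generic FRI folding, not the paper's distributed protocol):

```python
P = 97  # small prime field for illustration

def fold(coeffs, beta):
    """One FRI-style folding step: f(x) = f_e(x^2) + x*f_o(x^2)  ->  f_e + beta*f_o."""
    f_even, f_odd = coeffs[0::2], coeffs[1::2]
    f_odd += [0] * (len(f_even) - len(f_odd))           # pad if degree is even
    return [(e + beta * o) % P for e, o in zip(f_even, f_odd)]

def evaluate(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

f = [3, 1, 4, 1, 5, 9, 2, 6]   # degree-7 polynomial
beta = 42                       # verifier challenge
g = fold(f, beta)               # degree halves: 7 -> 3

# Consistency check a verifier performs at a query point x:
# g(x^2) = (f(x) + f(-x))/2 + beta * (f(x) - f(-x))/(2x)
x = 5
inv2, invx = pow(2, -1, P), pow(x, -1, P)
fe = (evaluate(f, x) + evaluate(f, (-x) % P)) * inv2 % P
fo = (evaluate(f, x) - evaluate(f, (-x) % P)) * inv2 % P * invx % P
assert evaluate(g, pow(x, 2, P)) == (fe + beta * fo) % P
print("folded coefficients:", g)
```

Iterating this step drives a degree-$d$ polynomial down to a constant in $O(\log d)$ rounds, which is where the polylogarithmic verifier costs in the abstract come from.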
Last updated:  2024-05-28
Fully Secure MPC and zk-FLIOP Over Rings: New Constructions, Improvements and Extensions
Anders Dalskov, Daniel Escudero, and Ariel Nof
We revisit the question of the overhead to achieve full security (i.e., guaranteed output delivery) in secure multiparty computation (MPC). Recent works have closed the gap between full security and semi-honest security, by introducing protocols where the parties first compute the circuit using a semi-honest protocol and then run a verification step with sublinear communication in the circuit size. However, in these works the number of interaction rounds in the verification step is also sublinear in the circuit's size. Unlike communication, the round complexity of the semi-honest execution typically grows with the circuit's depth and not its size. Hence, for large but shallow circuits, this additional number of rounds incurs a significant overhead. Motivated by this gap, we make the following contributions: 1. We present a new MPC framework to obtain full security, compatible with effectively any ring, that has an additive communication overhead of only $O(\log |C|)$, where $|C|$ is the number of multiplication gates in the circuit, and a constant number of additional rounds beyond the underlying semi-honest protocol. Our framework works with any linear secret sharing scheme and relies on a new way to utilize the machinery of zero-knowledge fully linear interactive oracle proofs (zk-FLIOP) in a black-box way. We present several instantiations of the building blocks of our compiler, from which we derive concretely efficient protocols in different settings. 2. We present extensions of the zk-FLIOP primitive to very general settings. The first one is for proving statements over potentially non-commutative rings, where the only requirement is that the ring has a large enough set where (1) every element in the set commutes with every element in the ring, and (2) the difference between any two distinct elements is invertible. Our second zk-FLIOP extension is for proving statements over Galois rings. For these rings, we present concrete improvements on the current state-of-the-art for the case of constant-round proofs, by making use of Reverse Multiplication Friendly Embeddings (RMFEs).
Last updated:  2024-05-28
The Round Complexity of Proofs in the Bounded Quantum Storage Model
Alex B. Grilo and Philippe Lamontagne
The round complexity of interactive proof systems is a key question of practical and theoretical relevance in complexity theory and cryptography. Moreover, results such as QIP = QIP(3) (STOC'00) show that quantum resources significantly help in such a task. In this work, we initiate the study of round compression of protocols in the bounded quantum storage model (BQSM). In this model, the malicious parties have a bounded quantum memory and cannot store all the qubits that are transmitted in the protocol. Our main results in this setting are the following: 1. There is a non-interactive (statistical) witness indistinguishable proof for any language in NP (and even QMA) in BQSM in the plain model. We notice that in this protocol, only the memory of the verifier is bounded. 2. Any classical proof system can be compressed into a two-message quantum proof system in BQSM. Moreover, if the original proof system is zero-knowledge, the quantum protocol is zero-knowledge too. In this result, we assume that the prover has bounded memory. Finally, we give evidence towards the "tightness" of our results. First, we show that NIZK in the plain model against BQS adversaries is unlikely with standard techniques. Second, we prove that without the BQS model there is no 2-message zero-knowledge quantum interactive proof, even under computational assumptions.
Last updated:  2024-05-28
Provable security against decryption failure attacks from LWE
Christian Majenz and Fabrizio Sisinni
In a recent work, Hövelmanns, Hülsing and Majenz introduced a new security proof for the Fujisaki-Okamoto transform in the quantum-accessible random oracle model (QROM) used in post-quantum key encapsulation mechanisms. While having a smaller security loss due to decryption failures present in many constructions, it requires two new security properties of the underlying public-key encryption scheme (PKE). In this work, we show that one of the properties, Find Failing Plaintexts - Non Generic (FFP-NG) security, is achievable using a relatively efficient LWE-based PKE that does not have perfect correctness. In particular, we show that LWE reduces to breaking FFP-NG security of the PVW scheme, when all LWE errors are discrete Gaussian distributed. The reduction has an arbitrarily small constant multiplicative loss in LWE error size. For the proof, we make use of techniques by Genise, Micciancio, Peikert and Walter to analyze marginal and conditional distributions of sums of discrete Gaussians.
Last updated:  2024-05-28
Fine-Grained Non-Interactive Key Exchange, Revisited
Balthazar Bauer, Geoffroy Couteau, and Elahe Sadeghi
We revisit the construction of multiparty non-interactive key-exchange protocols with fine-grained security, which was recently studied in (Afshar et al., Eurocrypt 2023). Their work introduced a 4-party non-interactive key exchange with quadratic hardness, and proved it secure in Shoup's generic group model. This positive result was complemented with a proof that $n$-party non-interactive key exchange with superquadratic security cannot exist in Maurer's generic group model, for any $n\geq 3$. Because Shoup's model is stronger than Maurer's model, this leaves a gap between the positive and the negative result, and their work left as an open question the goal of closing this gap, and of obtaining fine-grained non-interactive key exchange without relying on idealized models. In this work, we make significant progress on both questions. We obtain two main results: A 4-party non-interactive key exchange protocol with quadratic security gap, assuming the existence of exponentially secure injective pseudorandom generators, and the subexponential hardness of the computational Diffie-Hellman assumption. In addition, our scheme is conceptually simpler, and can be generalized to other settings (with more parties or from other assumptions). Assuming the existence of non-uniformly secure injective pseudorandom generators with exponential hardness, we further show that our protocol is secure in Maurer's model, albeit with a smaller hardness gap (up to $N^{1.6}$), making progress on filling the gap between the positive and the negative result of (Afshar et al., Eurocrypt 2023). Somewhat intriguingly, proving the security of our scheme in Maurer's idealized model turns out to be significantly harder than proving its security in the standard model.
Last updated:  2024-05-28
INDIANA - Verifying (Random) Probing Security through Indistinguishability Analysis
Christof Beierle, Jakob Feldtkeller, Anna Guinet, Tim Güneysu, Gregor Leander, Jan Richter-Brockmann, and Pascal Sasdrich
Despite masking being a prevalent protection against passive side-channel attacks, implementing it securely in hardware remains a manual, challenging, and error-prone process. This paper introduces INDIANA, a comprehensive security verification tool for hardware masking. It provides a hardware verification framework, enabling a complete analysis of simulation-based security in the glitch-extended probing model, with cycle-accurate estimations for leakage probabilities in the random probing model. Notably, INDIANA is the first framework to analyze arbitrary masked circuits in both models, even at the scale of full SPN cipher rounds (e.g., AES), while delivering exact verification results. To attain precise and extensive verifiability, we introduce a partitionable probing distinguisher that enables rapid verification of probe tuples, outperforming state-of-the-art methods based on statistical independence. Most interestingly, our approach inherently facilitates extensions to the random probing model by leveraging Fast Fourier-Hadamard Transformations (FFTs). Benchmark results show that INDIANA competes effectively with leading probing model verification tools, such as maskVerif and VERICA, and that it is the first tool capable of providing cycle-accurate estimations of random probing leakage probabilities for large-scale masked circuits.
Last updated:  2024-05-28
Hamming Weight Proofs of Proximity with One-Sided Error
Gal Arnon, Shany Ben-David, and Eylon Yogev
We provide a broad systematic study of proximity proofs with one-sided error for the Hamming weight problem $\mathsf{Ham}_{\alpha}$ (the language of bit vectors with Hamming weight at least $\alpha$), surpassing previously known results for this problem. We demonstrate the usefulness of the one-sided error property in applications: no malicious party can frame an honest prover as cheating by presenting verifier randomness that leads to a rejection. We show proofs of proximity for $\mathsf{Ham}_{\alpha}$ with one-sided error and sublinear proof length in three models (MA, PCP, IOP), where stronger models allow for smaller query complexity. For $n$-bit input vectors, highlighting input query complexity, our MA has $O(\log n)$ query complexity, the PCP makes $O(\log\log n)$ queries, and the IOP makes a single input query. The prover in all of our protocols runs in expected quasi-linear time. Additionally, we show that any perfectly complete IP of proximity for $\mathsf{Ham}_{\alpha}$ with input query complexity $n^{1-\epsilon}$ has proof length $\Omega(\log n)$. Furthermore, we study PCPs of proximity where the verifier is restricted to making a single input query (SIQ). We show that any SIQ-PCP for $\mathsf{Ham}_{\alpha}$ must have a linear proof length, and complement this by presenting a SIQ-PCP with proof length $n+o(n)$. As an application, we provide new methods that transform PCPs (and IOPs) for arbitrary languages with nonzero completeness error into PCPs (and IOPs) that exhibit perfect completeness. These transformations achieve parameters previously unattained.
Last updated:  2024-05-28
Tight Characterizations for Preprocessing against Cryptographic Salting
Fangqi Dong, Qipeng Liu, and Kewen Wu
Cryptography often considers the strongest yet plausible attacks in the real world. Preprocessing (a.k.a. non-uniform attack) plays an important role in both theory and practice: an efficient online attacker can take advantage of advice prepared by a time-consuming preprocessing stage. Salting is a heuristic strategy to counter preprocessing attacks by feeding a small amount of randomness to the cryptographic primitive. We present general and tight characterizations of preprocessing against cryptographic salting, with upper bounds matching the advantages of the most intuitive attack. Our result quantitatively strengthens the previous work by Coretti, Dodis, Guo, and Steinberger (EUROCRYPT'18). Our proof exploits a novel connection between the non-uniform security of salted games and direct product theorems for memoryless algorithms. For quantum adversaries, we give similar characterizations for property finding games, resolving an open problem of the quantum non-uniform security of salted collision resistant hash by Chung, Guo, Liu, and Qian (FOCS'20). Our proof extends the compressed oracle framework of Zhandry (CRYPTO'19) to prove quantum strong direct product theorems for property finding games in the average-case hardness.
Last updated:  2024-05-28
How (not) to Build Quantum PKE in Minicrypt
Longcheng Li, Qian Li, Xingjian Li, and Qipeng Liu
The seminal work by Impagliazzo and Rudich (STOC'89) demonstrated the impossibility of constructing classical public key encryption (PKE) from one-way functions (OWF) in a black-box manner. However, the question remains: can quantum PKE (QPKE) be constructed from quantumly secure OWF? A recent line of work has shown that it is indeed possible to build QPKE from OWF, but with one caveat --- they rely on quantum public keys, which cannot be authenticated and reused. In this work, we re-examine the possibility of perfectly complete QPKE in the quantum random oracle model (QROM), where OWF exists. Our first main result: QPKE with classical public keys, secret keys and ciphertext does not exist in the QROM, if the key generation only makes classical queries. Therefore, a necessary condition for constructing such QPKE from OWF is to have the key generation classically ``un-simulatable''. Previous discussions (Austrin et al., CRYPTO'22) on the impossibility of QPKE from OWF rely on a seemingly strong conjecture. Our work makes a significant step towards a complete and unconditional quantization of Impagliazzo and Rudich's results. Our second main result extends to QPKE with quantum public keys: QPKE with quantum public keys, classical secret keys and ciphertext does not exist in the QROM, if the key generation only makes classical queries and the quantum public key is either pure or ``efficiently clonable''. The result is tight in view of all existing QPKE constructions, and it further gives evidence on why existing QPKEs lose reusability. To achieve these results, we use a novel argument based on conditional mutual information and quantum Markov chains, following Fawzi and Renner (Communications in Mathematical Physics). We believe the techniques used in this work will find further use in separations in quantum cryptography and complexity.
Last updated:  2024-05-28
Multi-Server Doubly Efficient PIR
Arthur Lazzaretti, Zeyu Liu, Ben Fisch, and Charalampos Papamanthou
Doubly Efficient Private Information Retrieval (DEPIR) pertains to a Private Information Retrieval (PIR) scheme with both near-linear server-side preprocessing time and sublinear query time. Very recently, Lin et al. (STOC '23) devised the first single-server DEPIR scheme from standard assumptions. However, their work left one major question open: can we build a DEPIR scheme in the multi-server setting, without relying on heavy cryptographic machinery? In this work, we propose the first doubly efficient information-theoretic PIR scheme, in the multi-server setting. For a database of size $N$, our scheme allows servers to answer an unbounded number of client queries in $N^{o(1)}$ time each, after a single preprocessing phase which takes $N^{1+o(1)}$ time. Our result is only an $N^{o(1)}$ factor from the lower bound proven by Persiano and Yeo (SODA '22) for this setting. In addition, we devise a second information-theoretic PIR scheme which pushes the state of the art for the setting where the number of servers is more constrained. It achieves equally fast query times as our first scheme above, and a preprocessing time of $N^{2+o(1)}$.
Last updated:  2024-05-27
Post-quantum XML and SAML Single Sign-On
Johannes Müller and Jan Oupický
Extensible Markup Language (XML) is one of the most popular serialization languages. Since many security protocols are built using XML, it also provides cryptographic functionality. A central framework in this area is the Security Assertion Markup Language (SAML). This standard is one of the most widely used options for implementing Single Sign-On (SSO), which allows users to authenticate to different service providers using the credentials from a single identity provider. As with all other security protocols currently in use, the security and privacy of XML-based frameworks such as SAML are threatened by the development of increasingly powerful quantum computers. In fact, future attackers with access to scalable quantum computers will be able to break the currently used cryptographic building blocks and thus undermine the security of SAML SSO to illegally access sensitive private information. Post-quantum cryptography algorithms have been developed to protect against such quantum attackers. While many security protocols have been migrated into the quantum age by using post-quantum cryptography, no such solutions for XML and the security protocols based on it have been developed, let alone tested. We make the following contributions to fill this gap. We have designed post-quantum solutions for the cryptographic building blocks in XML and integrated them into the SAML SSO protocol. We implemented our solutions in the OpenSAML, Apache Santuario, and BouncyCastle libraries and extensively tested their performance for various post-quantum instantiations. As a result, we have created a comprehensive and solid foundation for post-quantum XML and post-quantum SAML SSO migration.
Last updated:  2024-05-27
Multivariate Multi-Polynomial Commitment and its Applications
Xiao Yang, Chengru Zhang, Mark Ryan, and Gao Meng
We introduce and formally define Multivariate Multi-Polynomial (MMP) commitment, a commitment scheme on multiple multivariate polynomials, and illustrate the concept with an efficient construction, which enjoys constant commitment size and logarithmic proof size. We further enhance our MMP scheme to achieve the zero-knowledge property. Additionally, combined with a novel zero-knowledge range proof for Pedersen subvector commitment, we present a Zero-Knowledge Range Proof (ZKRP) for MMP commitment. We present two sample applications. Firstly, our MMP commitment can be used for efficient aggregation of SNARKs based on multivariate polynomial commitments. As a showcase, we apply MMP commitment to HyperPlonk and refer to this variant of HyperPlonk as aHyperPlonk. For $k$ instances, each with circuit size $n$, the communication and verification complexity is reduced from $O(k \cdot \log n)$ to $O(\log k + \log n)$, while the prover complexity remains the same. Secondly, we propose a novel zero-knowledge proof for vehicle GPS traces based on ZKRP for MMP, which allows vehicle owners to prove whether a vehicle has or has not passed through some location during a specific time interval.
Last updated:  2024-05-27
Securing Lightning Channels against Rational Miners
Lukas Aumayr, Zeta Avarikioti, Matteo Maffei, and Subhra Mazumdar
Payment channel networks (e.g., the Lightning Network in Bitcoin) constitute one of the most popular scalability solutions for blockchains. Their safety relies on parties being online to detect fraud attempts on-chain and being able to react in time by publishing certain transactions on-chain. However, a cheating party may bribe miners in order to censor those transactions, resulting in loss of funds for the cheated party: these attacks are known in the literature as timelock bribing attacks. In this work, we present the first channel construction that does not require parties to be online and, at the same time, is resistant to timelock bribing attacks. We start by proving for the first time that Lightning channels are secure against timelock bribing attacks in the presence of rational channel parties under the assumption that these parties constantly monitor the mempool and never deplete the channel in one direction. The latter underscores the importance of keeping a coin reserve in each channel as implemented in the Lightning Network, albeit for different reasons. We show, however, that the security of the Lightning Network against Byzantine channel parties does not carry over to a setting in which miners are rational and accept timelock bribes. Next, we introduce CRAB, the first Lightning-compatible channel construction that provides security against Byzantine channel parties and rational miners. CRAB leverages miners' incentives to safeguard the channel, thereby also forgoing the unrealistic assumption of channel parties constantly monitoring the mempool. Finally, we show how our construction can be refined to eliminate the major assumption behind payment channels, i.e., the need for online participation. To that end, we present Sleepy CRAB, the first provably secure channel construction under rational miners that enables participants to go offline indefinitely. We also provide a proof-of-concept implementation of Sleepy CRAB and evaluate its cost in Bitcoin, thereby demonstrating its practicality.
Last updated:  2024-05-27
KHAN Encryption Algorithm: Leveraging Full Reptend Primes
Ayaz Khan
The Keyed Hashing and Asymmetric Nonce (KHAN) encryption algorithm is a novel cryptographic scheme that utilizes the unique properties of full reptend prime numbers. This paper details the algorithm, its theoretical foundations, and the rigorous proofs of its security properties. By leveraging the characteristics of cyclic sequences derived from full reptend primes, KHAN provides robust encryption with high resistance to cryptanalytic attacks.
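For context on the abstract's central object: a full reptend prime $p$ is one for which the decimal expansion of $1/p$ has the maximal period $p-1$, which holds exactly when $10$ is a primitive root modulo $p$. A minimal sketch of this standard number-theoretic test (independent of the KHAN scheme itself):

```python
def multiplicative_order(a: int, p: int) -> int:
    """Smallest k > 0 with a^k = 1 (mod p), by direct iteration (fine for small p).
    Assumes gcd(a, p) = 1, i.e. p is a prime other than 2 and 5 when a = 10."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

def is_full_reptend(p: int) -> bool:
    """p is a full reptend prime iff 10 is a primitive root mod p,
    i.e. the decimal period of 1/p is p - 1."""
    return multiplicative_order(10, p) == p - 1

primes = [7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 59, 61, 97]
print([p for p in primes if is_full_reptend(p)])
# -> [7, 17, 19, 23, 29, 47, 59, 61, 97]; e.g. 1/7 = 0.(142857), period 6
```

The maximal-period cyclic sequences that such primes generate are the "unique properties" the abstract refers to as the basis of the scheme.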
Last updated:  2024-05-27
Improved Meet-LWE Attack via Ternary Trees
Eunmin Lee, Joohee Lee, and Yuntao Wang
The Learning with Errors (LWE) problem and its variants over structured lattices have been widely exploited in efficient post-quantum cryptosystems. Recently, May suggested the Meet-LWE attack, which represents a significant advancement in the line of work on the Meet-in-the-Middle approach to analyzing LWE with ternary secrets. In this work, we generalize and extend the idea of Meet-LWE by introducing ternary trees, which result in diverse representations of the secrets. More precisely, we split the secrets into three pieces of the same dimension and expand them into a ternary tree to leverage the increased number of representations to improve the overall attack complexity. We carefully analyze and optimize the time and memory costs of our attack algorithm exploiting ternary trees, and compare them to those of the Meet-LWE attack. With asymptotic and non-asymptotic comparisons, we observe that our attack provides improved estimations for all parameter settings, including those of the practical post-quantum schemes, compared to the Meet-LWE attack. We also evaluate the security of the Round 2 candidates of the KpqC competition, which aims to standardize post-quantum public key cryptosystems in the Republic of Korea, and report that the estimated complexities for our attack applied to SMAUG-T are lower than those claimed for some of the recommended parameters.
Last updated:  2024-05-26
Batched Distributed Point Function from Sparse LPN and Homomorphic Secret Sharing
Lucas Piske, Jaspal Singh, and Ni Trieu
A function secret sharing (FSS) scheme ($\mathsf{gen},\mathsf{eval}$) for a class of programs $\mathcal{F}$ allows a dealer to secret share any function $f \in \mathcal{F}$, such that each function share hides the function, and the shares can be used to non-interactively compute additive shares of $f(x)$ for any input $x$. FSS-related applications often require the dealer to generate and share secret sharings for a batch of functions. We initiate the study of batched function secret sharing, where the objective is to secret share a set of functions from the class $\mathcal{F}$ while minimizing the size of the collection of FSS keys. We use standard homomorphic secret sharing (HSS) schemes, a variant of the Learning Parity with Noise assumption, and the Quadratic Residuosity assumption to construct batched FSS schemes for point functions with single-bit and multi-bit output. Our scheme is asymptotically superior to naively batching state-of-the-art FSS schemes for point functions. Concretely, our batch key sizes are smaller by factors of $3\times$ to $80\times$ as the batch size increases from $2^{13}$ to $2^{19}$. Because our protocol relies on public-key operations, it is inefficient in a LAN setting; however, it demonstrates up to a 120-fold improvement in a WAN setting with slow network bandwidth. As a building block in our protocols, we introduce a new HSS ciphertext compression algorithm that can decompress a short compressed ciphertext into low-noise ciphertexts for an array of input message bits. This primitive might be of independent interest for other HSS-related applications.
Last updated:  2024-05-27
Early Stopping Byzantine Agreement in $(1+\epsilon)\cdot f$ Rounds
Fatima Elsheimy, Julian Loss, and Charalampos Papamanthou
In this paper, we present two early stopping Byzantine agreement protocols in the authenticated setting against a corrupt minority $t < n/2$, where $t$ represents the maximum number of malicious parties. Early stopping protocols ensure termination within a number of rounds determined solely by the actual number of malicious nodes $f$ present during execution, irrespective of $t$. Our first protocol is deterministic and ensures early stopping termination in $(d+5) \cdot (\lfloor f/d \rfloor +3)$ rounds, where $d$ is a fixed constant. For example, for all $d\ge 6$, our protocol runs in at most $(1+\epsilon)\cdot f$ rounds (where $0<\epsilon<1$), improving (for large $f$) upon the best previous early stopping deterministic broadcast protocol by Perry and Toueg, which terminates in $\min(2f+4,2t+2)$ rounds. Additionally, our second protocol is randomized, ensuring termination in an expected constant number of rounds and achieving early stopping in $(d+9) \cdot (\lfloor f/d \rfloor +2)$ rounds in the worst case. This marks a significant improvement over a similar result by Goldreich and Petrank, which runs in an expected constant number of rounds but requires $O(t)$ rounds in the worst case, i.e., it does not have the early stopping property.
Last updated:  2024-05-26
A General Framework for Lattice-Based ABE Using Evasive Inner-Product Functional Encryption
Yao-Ching Hsieh, Huijia Lin, and Ji Luo
We present a general framework for constructing attribute-based encryption (ABE) schemes for arbitrary function classes based on lattices from two ingredients, i) a noisy linear secret sharing scheme for the class and ii) a new type of inner-product functional encryption (IPFE) scheme, termed *evasive* IPFE, which we introduce in this work. We propose lattice-based evasive IPFE schemes and establish their security under simple conditions based on variants of the evasive learning with errors (LWE) assumption recently proposed by Wee [EUROCRYPT ’22] and Tsabary [CRYPTO ’22]. Our general framework is modular and conceptually simple, reducing the task of constructing ABE to that of constructing noisy linear secret sharing schemes, a more lightweight primitive. The versatility of our framework is demonstrated by three new ABE schemes based on variants of the evasive LWE assumption. - We obtain two ciphertext-policy ABE schemes for all polynomial-size circuits with a predetermined depth bound. One of these schemes has *succinct* ciphertexts and secret keys, of size polynomial in the depth bound rather than the circuit size. This eliminates the need for tensor LWE, another new assumption, from the previous state-of-the-art construction by Wee [EUROCRYPT ’22]. - We develop ciphertext-policy and key-policy ABE schemes for deterministic finite automata (DFA) and logspace Turing machines ($\mathsf{L}$). They are the first lattice-based public-key ABE schemes supporting uniform models of computation. Previous lattice-based schemes for uniform computation were limited to the secret-key setting or offered only weaker security against bounded collusion. Lastly, the new primitive of evasive IPFE serves as the lattice-based counterpart of pairing-based IPFE, enabling the application of techniques developed in pairing-based ABE constructions to lattice-based constructions. We believe it is of independent interest and may find other applications.
Last updated:  2024-05-26
Rate-1 Arithmetic Garbling from Homomorphic Secret-Sharing
Pierre Meyer, Claudio Orlandi, Lawrence Roy, and Peter Scholl
We present a new approach to garbling arithmetic circuits using techniques from homomorphic secret sharing, obtaining constructions with high rate that support free addition gates. In particular, we build upon non-interactive protocols for computing distributed discrete logarithms in groups with an easy discrete-log subgroup, further demonstrating the versatility of tools from homomorphic secret sharing. Relying on distributed discrete log for the Damgård-Jurik cryptosystem (Roy and Singh, Crypto `21), whose security follows from the decisional composite residuosity assumption (DCR), we get the following main results: [**Two ciphertexts per multiplication, from IND-CPA security of Damgård-Jurik.**] Assuming the Damgård-Jurik cryptosystem is semantically secure (which follows from DCR), there is a garbling scheme for circuits with $B$-bounded integer arithmetic using only two ciphertexts per multiplication. The total bit-size of the resulting garbled circuit is: $$(n + 2s_\times+2D_\times)\cdot (\zeta + 1) \cdot \log N$$ where $n$ is the number of inputs, $s_\times$ is the number of multiplications, $D_\times$ is the multiplicative depth of the circuit, $N$ is an RSA modulus and $N^{\zeta-1}$ is a rough bound on the magnitude of wire values in the computation. [**One ciphertext per multiplication, from KDM security of Damgård-Jurik.**] Assuming the Damgård-Jurik encryption scheme remains secure given encryption of the key and its inverse, the construction achieves rate-$1$. The total bit-size of the resulting garbled circuit is: $$(n + s_\times + 1) \cdot (\zeta + 1) \cdot \log N$$ where the parameters are as above, except $N^{\zeta-2}$ is the magnitude bound. As a side result, we show that our scheme based on IND-CPA security achieves rate $3/5$ for levelled circuits.
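To make the quoted sizes concrete, here is a small Python calculator for the two bit-size formulas above; all parameter values in the example are invented for illustration:

```python
# Evaluate the two garbled-circuit size formulas from the abstract.
# n: inputs, s_mul: multiplications, d_mul: multiplicative depth,
# zeta: expansion parameter, log_n_bits: bit-length of the RSA modulus N.

def size_ind_cpa(n, s_mul, d_mul, zeta, log_n_bits):
    # (n + 2*s_x + 2*D_x) * (zeta + 1) * log N  -- two ciphertexts per mult
    return (n + 2 * s_mul + 2 * d_mul) * (zeta + 1) * log_n_bits

def size_kdm(n, s_mul, zeta, log_n_bits):
    # (n + s_x + 1) * (zeta + 1) * log N  -- one ciphertext per mult (rate-1)
    return (n + s_mul + 1) * (zeta + 1) * log_n_bits

if __name__ == "__main__":
    log_n = 3072               # bits of an RSA modulus N (illustrative)
    n, s_mul, d_mul, zeta = 10, 100, 8, 4
    print(size_ind_cpa(n, s_mul, d_mul, zeta, log_n))  # IND-CPA variant
    print(size_kdm(n, s_mul, zeta, log_n))             # KDM variant
```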
Last updated:  2024-05-27
A new stand-alone MAC construct called SMAC
Dachao Wang, Alexander Maximov, Patrik Ekdahl, and Thomas Johansson
In this paper, we present a new efficient stand-alone MAC construct based on the FSM part of the SNOW family of stream ciphers, which in turn uses the AES round function. It offers a combination of very high speed in software and hardware with a truncatable tag. Two concrete versions of SMAC are proposed with different security levels, although other use cases are also possible. For example, SMAC can be combined with an external ciphering engine in AEAD mode. Every design choice is justified and supported by the results of our analysis and simulations. A novelty of the proposal is that it meets future performance requirements while still not being directly vulnerable to attacks using repeated nonces when the tag size is short, as other very fast MACs (those based on polynomial hashing) are. This can be an important aspect in practical applications.
Last updated:  2024-05-26
The Brave New World of Global Generic Groups and UC-Secure Zero-Overhead SNARKs
Jan Bobolz, Pooya Farshim, Markulf Kohlweiss, and Akira Takahashi
The universal composability (UC) model provides strong security guarantees for protocols used in arbitrary contexts. While these guarantees are highly desirable, in practice, schemes with a standalone proof of security, such as the Groth16 proof system, are preferred. This is because UC security typically comes with undesirable overhead, sometimes making UC-secure schemes significantly less efficient than their standalone counterparts. We establish the UC security of Groth16 without any significant overhead. In the spirit of global random oracles, we design a global (restricted) observable generic group functionality that models a natural notion of observability: computations that trace back to group elements derived from generators of other sessions are observable. This notion turns out to be surprisingly subtle to formalize. We provide a general framework for proving protocols secure in the presence of global generic groups, which we then apply to Groth16.
Last updated:  2024-05-26
DVA: Dangerous Variations of ALTEQ
Arnaud Sipasseuth
In this paper, we present three types of variations of the ALTEQ cryptosystem, a recent submission to NIST's additional call for signatures. We name these Dangerous Variations of ALTEQ (DVA), as there is always a certain danger in stepping out of usual constructions, although we attempt to maintain heuristic security. First, we present DVA-GG (Graph Generalization), which can be seen as a more abstract point of view on the operations done in ALTEQ and encourages more research on the algebraic variants. In particular, we show this approach can lead to a patch countering Beullens' recent seed-collision attack on ALTEQ that depends only on the primitive, and we showcase some fancy usages of the primitive for experimental protocols. Second, we present DVA-PC (Precomputations), which is ``likely'' as secure as ALTEQ in the random oracle model and allows us to drastically reduce the intermediate memory requirements within both the signature and verification process through an easily parallelizable extra operation. In particular, this facilitates precomputation variants with online phases that depend only on the complexity of basic matrix operations. We can then choose between either a tiny offline memory per signature, or one of the fastest online signing speeds in post-quantum cryptography. Third, we present DVA-DM (Distinct Matrices), cryptanalytic targets that deviate from ALTEQ's original algebraic structure. These structures can serve for plain computational acceleration or simply to compress data sizes, and they provide good options to motivate the study of specialized cryptanalysis for ALTEQ: if they are safe, then ALTEQ gains safe variants; otherwise, we gain further understanding of the underlying problems. In particular, the ideas apply beyond ALTEQ and hopefully extend to MEDS, LESS, and group-action-based cryptography.
Last updated:  2024-05-26
Zero-knowledge IOPs Approaching Witness Length
Noga Ron-Zewi and Mor Weiss
Interactive Oracle Proofs (IOPs) allow a probabilistic verifier interacting with a prover to verify the validity of an NP statement while reading only few bits from the prover messages. IOPs generalize standard Probabilistically-Checkable Proofs (PCPs) to the interactive setting, and in the few years since their introduction have already exhibited major improvements in main parameters of interest (such as the proof length and prover and verifier running times), which in turn led to significant improvements in constructions of succinct arguments. Zero-Knowledge (ZK) IOPs additionally guarantee that the view of any query-bounded (possibly malicious) verifier can be efficiently simulated. ZK-IOPs are the main building block of succinct ZK arguments which use the underlying cryptographic object (e.g., a collision-resistant hash function) as a black box. In this work, we construct the first ZK-IOPs approaching the witness length for a natural NP problem. More specifically, we design constant-query and constant-round IOPs for 3SAT in which the total communication is $(1+\gamma)m$, where $m$ is the number of variables and $\gamma>0$ is an arbitrarily small constant, and ZK holds against verifiers querying $m^\beta$ bits from the prover's messages, for a constant $\beta>0$. This gives a ZK variant of a recent result of Ron-Zewi and Rothblum (FOCS `20), who construct (non-ZK) IOPs approaching the witness length for a large class of NP languages. Previous constructions of ZK-IOPs incurred an (unspecified) large constant multiplicative overhead in the proof length, even when restricting to ZK against the honest verifier. We obtain our ZK-IOPs by improving the two main building blocks underlying most ZK-IOP constructions, namely ZK codes and ZK-IOPs for sumcheck. More specifically, we give the first ZK-IOPs for sumcheck that achieve both sublinear communication for sumchecking a general tensor code, and a ZK guarantee. We also show a strong ZK preservation property for tensors of ZK codes, which extends a recent result of Bootle, Chiesa, and Liu (EC `22). Given the central role of these objects in designing ZK-IOPs, these results might be of independent interest.
Last updated:  2024-05-26
Faster verifications and smaller signatures: Trade-offs for ALTEQ using rejections
Arnaud Sipasseuth
In this paper, we introduce a new probability function parameter in the instantiations of the Goldreich-Micali-Wigderson protocol with Fiat-Shamir and unbalanced challenges used in ALTEQ, a recent NIST PQC candidate in the call for additional signatures. Set at 100%, this probability does not change the scheme; below 100%, it modifies the public challenge generation process by injecting potential rejections into otherwise completely valid inputs. From a theoretical point of view, this does not improve the asymptotic hardness of the scheme, negatively affects the efficiency of the signer, and might itself seem trivial. However, from a practical point of view, implementation-wise and performance-wise, this triviality allows an extra degree of freedom in optimizing parameters, as the heuristic security level against forgers is also increased: previously valid combinations can now be deemed invalid. This allows us to make trade-offs that reduce the computational load on verifiers, accelerating verification and marginally reducing the signature size, at the cost of making signing slower and unlikely to be constant-time. In particular, this extra degree of freedom allows implementation choices that enable smoother and faster executions of the aforementioned protocols, especially in the context of parallelization using vectorized instructions. We also demonstrate the usefulness of our proposal to ALTEQ for other options, when slowing down the signing process is not an issue: significantly smaller signatures but longer verification, or lower public key sizes. The ideas presented apply to any primitive, and can be used beyond ALTEQ.
Last updated:  2024-05-24
Succinct Homomorphic Secret Sharing
Damiano Abram, Lawrence Roy, and Peter Scholl
This work introduces homomorphic secret sharing (HSS) with succinct share size. In HSS, private inputs are shared between parties, who can then homomorphically evaluate a function on their shares, obtaining a share of the function output. In succinct HSS, a portion of the inputs can be distributed using shares whose size is sublinear in the number of such inputs. The parties can then locally evaluate a function $f$ on the shares, with the restriction that $f$ must be linear in the succinctly shared inputs. We construct succinct, two-party HSS for branching programs, based on either the decisional composite residuosity assumption, a DDH-like assumption in class groups, or learning with errors with a superpolynomial modulus-to-noise ratio. We then give several applications of succinct HSS, which were only previously known using fully homomorphic encryption, or stronger tools: - Succinct vector oblivious linear evaluation (VOLE): Two parties can obtain secret shares of a long, arbitrary vector $\vec x$, multiplied by a scalar $\Delta$, with communication sublinear in the size of the vector. - Batch, multi-party distributed point functions: A protocol for distributing a batch of secret, random point functions among $N$ parties, for any polynomial $N$, with communication sublinear in the number of DPFs. - Sublinear MPC for any number of parties: Two new constructions of MPC with sublinear communication complexity, with $N$ parties for any polynomial $N$: (1) For general layered Boolean circuits of size $s$, with communication $O(N s/\log\log s)$, and (2) For layered, sufficiently wide Boolean circuits, with communication $O(N s/\log s)$.
Last updated:  2024-05-28
How to Redact the Bitcoin Backbone Protocol
Mehmet Sabir Kiraz, Enrique Larraia, and Owen Vaughan
We explain how to extend the Bitcoin backbone model of Garay et al. (Eurocrypt, 2015) to accommodate redactable blockchains. Our extension captures fluid blockchain-based databases (with mutability requirements) and compliance with existing legislation, such as the GDPR right to be forgotten, or the need to erase offending data from nodes’ databases that would otherwise provoke legal shutdowns. Our redactable backbone protocol retains the essential properties of blockchains. Leveraging zero-knowledge proofs, old data can be erased without requiring trusted third parties or heuristics about past chain validation. Our solution can be implemented on Bitcoin immediately without hard-forks, and it is scalable. It allows the redaction of data from UTXOs or unconfirmed transactions that have not yet flooded the network, while guaranteeing invariance of the Bitcoin state. Thus, offending data does not need to persist in the system, not even temporarily.
Last updated:  2024-05-24
Relations among new CCA security notions for approximate FHE
Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, and Renaud Sirdey
In a recent Eurocrypt'24 paper, Manulis and Nguyen have proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered, as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve on their work by defining and investigating a variant of their security notion suitable for the more general case where approximate FHE schemes are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD than by CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security-notions landscape is much simpler for exact FHE than when approximate schemes are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Last updated:  2024-05-24
Traceable Secret Sharing Based on the Chinese Remainder Theorem
Charlotte Hoffmann
Traceable threshold secret sharing schemes, introduced by Goyal, Song and Srinivasan (CRYPTO'21), allow one to provably trace leaked shares to the parties that leaked them. The authors give the first definition and construction of traceable secret sharing schemes. However, the size of the shares in their construction is quadratic in the size of the secret. Boneh, Partap and Rotem (CRYPTO'24) recently proposed a new definition of traceable secret sharing and the first practical constructions. In their definition, one considers a reconstruction box $R$ that contains $f$ leaked shares and, on input $t-f$ additional shares, outputs the secret $s$. A scheme is traceable if one can find out the leaked shares inside the box $R$ by only getting black-box access to $R$. Boneh, Partap and Rotem give constructions from Shamir's secret sharing and Blakley's secret sharing. The constructions are efficient, as the size of the secret shares is only twice the size of the secret. In this work we present the first traceable secret sharing scheme based on the Chinese remainder theorem. This was stated as an open problem by Boneh, Partap and Rotem, as it gives rise to traceable secret sharing with weighted threshold access structures. The scheme is based on Mignotte's secret sharing and increases the size of the shares of the standard Mignotte secret sharing scheme by a factor of $2$.
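For readers unfamiliar with the underlying scheme, the following is a minimal Python sketch of standard (non-traceable) Mignotte secret sharing, on which the paper builds; the moduli and threshold are toy values chosen by hand:

```python
# Mignotte (t, n) secret sharing over pairwise-coprime moduli m_1 < ... < m_n.
# The secret must lie strictly between the product of the t-1 largest moduli
# and the product of the t smallest; shares are s mod m_i, and any t shares
# recover s by the Chinese remainder theorem.

from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

def share(secret, moduli):
    return [secret % m for m in moduli]

if __name__ == "__main__":
    moduli = [101, 103, 107, 109, 113]      # pairwise coprime, increasing
    t = 3
    alpha = prod(moduli[:t])                # product of t smallest
    beta = prod(moduli[-(t - 1):])          # product of t-1 largest
    secret = (alpha + beta) // 2            # any secret in (beta, alpha) works
    shares = share(secret, moduli)
    assert crt(shares[:t], moduli[:t]) == secret   # any t shares reconstruct
```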
Last updated:  2024-05-24
The Perils of Limited Key Reuse: Adaptive and Parallel Mismatch Attacks with Post-processing Against Kyber
Qian Guo, Erik Mårtensson, and Adrian Åström
In this paper, we study the robustness of Kyber, the Learning With Errors (LWE)-based Key Encapsulation Mechanism (KEM) chosen for standardization by NIST, against key mismatch attacks. We demonstrate that Kyber's security levels can be compromised with a few mismatch queries by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in CPA-secure Kyber. We further propose an adaptive method to enhance parallel mismatch attacks, initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method proves its efficacy by reducing query complexity by 14.6% for Kyber512 and 7.5% for Kyber768/Kyber1024. Furthermore, this approach has the potential to substantially improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks against the CCA-secure version of Kyber KEM.
Last updated:  2024-05-24
Reducing Overdefined Systems of Polynomial Equations Derived from Small Scale Variants of the AES via Data Mining Methods
Jana Berušková, Martin Jureček, and Olha Jurečková
This paper deals with reducing the secret key computation time of small-scale variants of the AES cipher using algebraic cryptanalysis, accelerated by data mining methods. This work is based on the known-plaintext attack and aims to speed up the calculation of the secret key by processing the polynomial equations extracted from plaintext-ciphertext pairs. Specifically, we propose to transform the overdefined system of polynomial equations over GF(2) into a new system so that the computation of the Gröbner basis using the F4 algorithm takes less time than for the original system. The main idea is to group similar polynomials into clusters, and for each cluster, sum the two most similar polynomials, resulting in simpler polynomials. We compare different data mining techniques for finding similar polynomials, such as clustering and locality-sensitive hashing (LSH). Experimental results show that using the LSH technique, we obtain a system of equations for which the Gröbner basis can be computed fastest among the methods considered in this work. Experimental results also show that the time to calculate the Gröbner basis for the transformed system of equations is significantly reduced compared to the case when the Gröbner basis was calculated from the original non-transformed system. This paper demonstrates that reducing an overdefined system of equations reduces the computation time for finding a secret key.
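The pairing step can be made concrete with a toy Python sketch: polynomials over GF(2) are encoded as sets of monomials, the most similar pair is located (plain Jaccard similarity here, standing in for the paper's LSH machinery), and the two are summed, which over GF(2) is just the symmetric difference of the monomial sets:

```python
# Toy version of the "sum the two most similar polynomials" step.
# A polynomial over GF(2) is a set of monomials; a monomial is a frozenset
# of variable indices (frozenset({0, 1}) stands for x0*x1).

from itertools import combinations

def jaccard(p, q):
    return len(p & q) / len(p | q)

def pair_and_sum(polys):
    i, j = max(combinations(range(len(polys)), 2),
               key=lambda ij: jaccard(polys[ij[0]], polys[ij[1]]))
    summed = polys[i] ^ polys[j]   # symmetric difference = sum over GF(2)
    return [p for k, p in enumerate(polys) if k not in (i, j)] + [summed]

if __name__ == "__main__":
    polys = [
        {frozenset({0, 1}), frozenset({2}), frozenset({0})},
        {frozenset({0, 1}), frozenset({2}), frozenset({1})},
        {frozenset({3}), frozenset({1, 3})},
    ]
    # the first two polynomials are most similar; their sum is x0 + x1
    print(pair_and_sum(polys))
```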
Last updated:  2024-05-24
Arma: Byzantine Fault Tolerant Consensus with Horizontal Scalability
Yacov Manevich, Hagar Meir, Kaoutar Elkhiyaoui, Yoav Tock, and May Buzaglo
Arma is a Byzantine Fault Tolerant (BFT) consensus system designed to achieve horizontal scalability across all hardware resources: network bandwidth, CPU, and disk I/O. As opposed to preceding BFT protocols, Arma separates the dissemination and validation of client transactions from the consensus process, restricting the latter to totally ordering only metadata of batches of transactions. This separation enables each party to distribute compute and storage resources for transaction validation, dissemination and disk I/O among multiple machines, resulting in horizontal scalability. Additionally, Arma ensures censorship resistance by imposing a maximum time limit on the inclusion of client transactions. We built and evaluated two Arma prototypes. The first is an independent system handling over 200,000 transactions per second; the second is integrated into Hyperledger Fabric, speeding up its consensus by an order of magnitude.
Last updated:  2024-05-27
Optimal Consensus in the Presence of Overlapping Faults and Total Omission
Julian Loss, Kecheng Shi, and Gilad Stern
Understanding the fault tolerance of Byzantine Agreement protocols is an important question in distributed computing. While the setting of Byzantine faults has been thoroughly explored in the literature, the (arguably more realistic) omission fault setting is far less studied. In this paper, we revisit the recent work of Loss and Stern who gave the first protocol in the mixed fault model tolerating $t$ Byzantine faults, $s$ send faults, and $r$ receive faults, when $2t+r+s<n$ and omission faults do not overlap. We observe that their protocol makes no guarantees when omission faults can overlap, i.e., when parties can simultaneously have send and receive faults. We give the first protocol that overcomes this limitation and tolerates the same number of potentially overlapping faults. We then study, for the first time, the total omission setting where all parties can become omission faulty. This setting is motivated by real-world scenarios where every party may experience connectivity issues from time to time, yet agreement should still hold for the parties who manage to output values. We show the first agreement protocol in this setting with parameters $s<n$ and $s+r=n$. On the other hand, we prove that there is no consensus protocol for the total omission setting which tolerates even a single overlapping omission fault, i.e., where $s+r=n+1$ and $s>2$, or a broadcast protocol for $s+r=n$ and $s>1$ even without overlapping faults.
Last updated:  2024-05-24
Resettable Statistical Zero-Knowledge for NP
Susumu Kiyoshima
Resettable statistical zero-knowledge [Garg--Ostrovsky--Visconti--Wadia, TCC 2012] is a strong privacy notion that guarantees statistical zero-knowledge even when the prover uses the same randomness in multiple proofs. In this paper, we show an equivalence between resettable statistical zero-knowledge arguments for $NP$ and witness encryption schemes for $NP$. - Positive result: For any $NP$ language $L$, a resettable statistical zero-knowledge argument for $L$ can be constructed from a witness encryption scheme for $L$ under the assumption of the existence of one-way functions. - Negative result: Even the existence of resettable statistical witness-indistinguishable arguments for $NP$ implies the existence of witness encryption schemes for $NP$ under the assumption of the existence of one-way functions. The positive result is obtained by naturally extending existing techniques (and is likely to be already well-known among experts). The negative result is our main technical contribution. To explore workarounds for the negative result, we also consider resettable security in a model where the honest party's randomness is only reused with fixed inputs. We show that resettable statistically hiding commitment schemes are impossible even in this model.
Last updated:  2024-05-24
DiTRU: A Resurrection of NTRU over Dihedral Group
Ali Raya, Vikas Kumar, and Sugata Gangopadhyay
NTRU-like cryptosystems are among the most studied lattice-based post-quantum candidates. While most NTRU proposals have been introduced over a commutative ring of quotient polynomials, other rings can be used. Noncommutative algebra was endorsed long ago as a direction for building new variants of NTRU. The first attempt to construct a noncommutative variant was due to Hoffstein and Silverman, motivated by increased resistance to lattice attacks. Their scheme was built over the group ring of a dihedral group. However, its design differed from standard NTRU and was soon found vulnerable to algebraic attacks. In this work, we revive the group ring NTRU over the dihedral group as an instance of the GR-NTRU framework. Unlike many proposals of noncommutative variants in the literature, our work focuses on putting the scheme into practice. We address all the aspects needed to make our scheme implementable by proposing an efficient inversion algorithm over the new setting of the noncommutative ring, describing the decryption failure model, and analyzing the lattice associated with our instantiation. Finally, we discuss the best-known attacks against our scheme and provide an implementation targeting 128-bit, 192-bit, and 256-bit levels of security as proof of its practicality.
Last updated:  2024-05-24
Analysis on Sliced Garbling via Algebraic Approach
Taechan Kim
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy~(Crypto~2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans~(Eurocrypt~2015). Recently, Ashur, Hazay, and Satish~(eprint 2024/389) proposed a scheme that requires $4/3\kappa + O(1)$ bits for garbling AND gates. Precisely, they extended the idea of \emph{slicing} introduced by Rosulek and Roy to garble 3-input gates of the form $g(u,v,w) := u(v+w)$. By setting $w = 0$, it can be used to garble AND gates with improved communication cost. However, in this paper, we observe that the scheme proposed by Ashur, Hazay, and Satish leaks information on the permute bits, thereby allowing the evaluator to reveal information on the private inputs. To be precise, we show that in their garbling scheme, the evaluator can compute the bits $\alpha$ and $\beta + \gamma$, where $\alpha$, $\beta$, and $\gamma$ are the private permute bits of the input labels $A$, $B$, and $C$, respectively.
Last updated:  2024-05-24
Can We Beat Three Halves Lower Bound?: (Im)Possibility of Reducing Communication Cost for Garbled Circuits
Chunghun Baek and Taechan Kim
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015). Whether their construction is optimal in a more inclusive model than the linear garbling model still remains open. This paper begins by providing a comprehensive model for a large class of practical garbling schemes and proves the lower bound for the size of the garbled AND gates in our model. We show that garbled AND gates require at least $1.5\kappa$ bits in our new model with the free-XOR setting. It is remarkable to see that the construction by Rosulek and Roy is already optimal despite the fact that our model possibly captures any potential extension of their construction.
Last updated:  2024-05-26
On Maximum Size Simultaneous Linear Approximations in Ascon and Keccak and Related Translation and Differential Properties
Nicolas T. Courtois, Frédéric Amiel, and Alexandre Bonnard de Fonvillars
In this paper we study the S-box known as Chi or $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many other designs. Ciphers of this type are typically analyzed in recent research in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question, then, is when different spaces are mapped to each other by translation by a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017. We introduce new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between simultaneous linear properties and prominent differential properties. We exhibit a new family of maximum-size, optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that properties of this type are stable under arbitrary input-side translations, and which holds for all quadratic permutations. All of this is placed in the context of previous work on the classification of 5-bit quadratic permutations.
Last updated:  2024-05-23
Algebraic Structure of the Iterates of $\chi$
Björn Kriepke and Gohar Kyureghyan
We consider the map $\chi:\mathbb{F}_2^n\to\mathbb{F}_2^n$ for $n$ odd given by $y=\chi(x)$ with $y_i=x_i+x_{i+2}(1+x_{i+1})$, where the indices are computed modulo $n$. We suggest a generalization of the map $\chi$ which we call generalized $\chi$-maps. We show that these maps form an Abelian group which is isomorphic to the group of units in $\mathbb{F}_2[X]/(X^{(n+1)/2})$. Using this isomorphism we easily obtain closed-form expressions for iterates of $\chi$ and explain their properties.
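The definition translates directly into code; here is a minimal Python sketch, including a brute-force check that $\chi$ is a permutation for $n=5$:

```python
# chi over F_2^n for odd n: y_i = x_i + x_{i+2} * (1 + x_{i+1}), indices mod n.

from itertools import product

def chi(x):
    n = len(x)
    return [x[i] ^ (x[(i + 2) % n] & (1 ^ x[(i + 1) % n])) for i in range(n)]

def iterate(f, x, k):
    for _ in range(k):
        x = f(x)
    return x

if __name__ == "__main__":
    n = 5
    # chi is a permutation for odd n: all 2^n inputs map to distinct outputs
    images = {tuple(chi(list(x))) for x in product([0, 1], repeat=n)}
    assert len(images) == 2 ** n
    print(iterate(chi, [1, 0, 1, 1, 0], 3))  # the third iterate of chi
```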
Last updated:  2024-05-23
A Note on Zero-Knowledge for NP and One-Way Functions
Yanyi Liu, Noam Mazor, and Rafael Pass
We present a simple alternative exposition of the recent result of Hirahara and Nanashima (STOC’24) showing that one-way functions exist if (1) every language in NP has a zero-knowledge proof/argument and (2) ZKA contains non-trivial languages. Our presentation does not rely on meta-complexity, and we hope it may be useful for didactic purposes.
Last updated:  2024-05-23
Symmetric Signcryption and E2EE Group Messaging in Keybase
Joseph Jaeger, Akshaya Kumar, and Igors Stepanovs
We introduce a new cryptographic primitive called symmetric signcryption, which differs from traditional signcryption because the sender and recipient share a secret key. We prove that a natural composition of symmetric encryption and signatures achieves strong notions of security against attackers that can learn and control many keys. We then identify that the core encryption algorithm of the Keybase encrypted messaging protocol can be modeled as a symmetric signcryption scheme. We prove the security of this algorithm, though our proof requires assuming non-standard, brittle security properties of the underlying primitives.
Last updated:  2024-05-23
Incompressible Functional Encryption
Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree, and Aman Verma
Incompressible encryption (Dziembowski, Crypto'06; Guan, Wichs, Zhandry, Eurocrypt'22) protects against attackers that learn the entire decryption key but cannot store the full ciphertext. In incompressible encryption, the attacker must try to compress a ciphertext within a pre-specified memory bound $S$ before receiving the secret key. In this work, we generalize the notion of incompressibility to functional encryption. In incompressible functional encryption, the adversary can corrupt non-distinguishing keys at any point, but receives the distinguishing keys only after compressing the ciphertext to within $S$ bits. An important efficiency measure for incompressible encryption is the ciphertext rate (i.e., $\mathsf{rate} = \frac{|m|}{|\mathsf{ct}|}$). We give many new results for incompressible functional encryption: 1. Incompressible attribute-based encryption for circuits from standard assumptions, with $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys, 2. Incompressible functional encryption for circuits from (non-incompressible) functional encryption, with (a) $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys, (b) $\mathsf{ct}$-rate-$1$ and large secret keys. Our results achieve optimal efficiency, as incompressible attribute-based/functional encryption with $\mathsf{ct}$-rate-$1$ as well as short secret keys has strong implausibility barriers. Moreover, our assumptions are minimal, as incompressible attribute-based/functional encryption is strictly stronger than its non-incompressible counterpart.
Last updated:  2024-05-25
Nonadaptive One-Way to Hiding Implies Adaptive Quantum Reprogramming
Joseph Jaeger
An important proof technique in the random oracle model involves reprogramming it on hard-to-predict inputs and arguing that an attacker cannot detect that this occurred. In the quantum setting, a particularly challenging version of this considers adaptive reprogramming, wherein the points to be reprogrammed (or the output values they should be programmed to) depend on choices made by the adversary. Frameworks for analyzing adaptive reprogramming were given, e.g., by Unruh (CRYPTO 2014), Grilo-Hövelmanns-Hülsing-Majenz (ASIACRYPT 2021), and Pan-Zeng (PKC 2024). We show, counterintuitively, that these adaptive results follow directly from the non-adaptive one-way to hiding theorem of Ambainis-Hamburg-Unruh (CRYPTO 2019). These implications contradict beliefs (whether stated explicitly or implicitly) that some properties of the adaptive frameworks cannot be provided by the Ambainis-Hamburg-Unruh result.
Last updated:  2024-05-23
Weak Consistency mode in Key Transparency: OPTIKS
Esha Ghosh and Melissa Chase
The need for third-party auditors in privacy-preserving Key Transparency (KT) systems presents a deployment challenge. In this short note, we take a simple privacy-preserving KT system that provides strong security guarantees in the presence of an honest auditor (OPTIKS) and show how to add an auditor-free mode to it. The auditor-free mode offers slightly weaker security. We formalize this security property and prove that our proposed protocol satisfies our security definition.
Last updated:  2024-05-22
New Limits of Provable Security and Applications to ElGamal Encryption
Sven Schäge
We provide new results showing that ElGamal encryption cannot be proven CCA1-secure – a long-standing open problem in cryptography. Our result follows from a very broad, meta-reduction-based impossibility result on random self-reducible relations with efficiently re-randomizable witnesses. The techniques that we develop allow us, for the first time, to provide impossibility results for very weak security notions where the challenger outputs fresh challenge statements at the end of the security game. This can be used to finally tackle encryption-type definitions that have remained elusive in the past. We show that our results have broad applicability by casting several known cryptographic setups as instances of random self-reducible and re-randomizable relations. These setups include general semi-homomorphic PKE and the large class of certified homomorphic one-way bijections. As a result, we also obtain new impossibility results for the IND-CCA1 security of the PKEs of Paillier and Damgård–Jurik, and many one-more inversion assumptions like the one-more DLOG or the one-more RSA assumption.
Last updated:  2024-05-24
Detecting Rogue Decryption in (Threshold) Encryption via Self-Incriminating Proofs
James Hsin-yu Chiang, Bernardo David, Tore Kasper Frederiksen, Arup Mondal, and Esra Yeniaras
Keeping decrypting parties accountable in public key encryption is notoriously hard since the secret key owner can decrypt any arbitrary ciphertext. Threshold encryption aims to solve this issue by distributing the power to decrypt among a set of parties, who must interact via a decryption protocol. However, such parties can employ cryptographic tools such as Multiparty Computation (MPC) to decrypt arbitrary ciphertexts without being detected. We introduce the notion of (threshold) encryption with Self-Incriminating Proofs, where parties must produce a self-incriminating proof of decryption when decrypting every ciphertext. In the standard public key encryption case, the adversary could destroy these proofs, so we strengthen our notion to guarantee that the proofs are published when decryption succeeds. This creates a decryption audit trail, which is useful in scenarios where decryption power is held by a single trusted party (e.g., a Trusted Execution Environment) who must be kept accountable. In the threshold case, we ensure that at least one of the parties who execute the decryption protocol will learn a self-incriminating proof, even if they employ advanced tools such as MPC. The fact that a party learns the proof and may leak it at any moment functions as a deterrent for parties who do not wish to be identified as malicious decryptors (e.g., a commercial operator of a service based on threshold encryption). We investigate the (im)possibility and applications of our notions while providing matching constructions under appropriate assumptions. In the threshold case, we build on recent results on Individual Cryptography (CRYPTO 2023).
Last updated:  2024-05-22
Hide-and-Seek and the Non-Resignability of the BUFF Transform
Jelle Don, Serge Fehr, Yu-Hsuan Huang, Jyun-Jie Liao, and Patrick Struck
The BUFF transform, due to Cremers et al. (S&P'21), is a generic transformation for digital signature schemes, with the purpose of obtaining additional security guarantees beyond unforgeability: exclusive ownership, message-bound signatures, and non-resignability. Non-resignability (which essentially challenges an adversary to re-sign an unknown message for which it only obtains the signature) turned out to be a delicate matter, as recently Don et al. (CRYPTO'24) showed that the initial definition is essentially unachievable; in particular, it is not achieved by the BUFF transform. This led to the introduction of new, weakened versions of non-resignability, which are (potentially) achievable. In particular, it was shown that a salted variant of the BUFF transform does achieve some weakened version of non-resignability. However, the salting requires additional randomness and leads to slightly larger signatures. Whether the original BUFF transform also achieves some meaningful notion of non-resignability remained a natural open question. In this work, we answer this question in the affirmative. We show that the BUFF transform satisfies the (almost) strongest notions of non-resignability one can hope for, facing the known impossibility results. Our results cover both the statistical and the computational case, and both the classical and the quantum setting. At the core of our analysis lies a new security game for random oracles that we call Hide-and-Seek. While seemingly innocent at first glance, it turns out to be surprisingly challenging to rigorously analyze.
Last updated:  2024-05-22
Stickel's Key Agreement Algebraic Variation
Daniel Nager
In this document we present a further development of the non-commutative algebra based key agreement due to E. Stickel, and a way to deal with the algebraic break due to V. Shpilrain.
Last updated:  2024-05-25
Minimize the Randomness in Rasta-Like Designs: How Far Can We Go?
Lorenzo Grassi, Fukang Liu, Christian Rechberger, Fabian Schmid, Roman Walch, and Qingju Wang
The Rasta design strategy allows building low-round ciphers due to its efficient prevention of statistical attacks and algebraic attacks by randomizing the cipher, which makes it especially suitable for hybrid homomorphic encryption (HHE), also known as transciphering. Such randomization is obtained by pseudorandomly sampling new invertible matrices for each round of each new cipher evaluation. However, naively sampling a random invertible matrix for each round significantly impacts the plain evaluation runtime, though it does not impact the homomorphic evaluation cost. To address this issue, Dasta was proposed at ToSC 2020 to reduce the cost of generating the random matrices. In this work, we address this problem from a different perspective: How far can the randomness in Rasta-like designs be reduced in order to minimize the plain evaluation runtime without sacrificing security? To answer this question, we carefully studied the main threats to Rasta-like ciphers and the role of random matrices in ensuring security. We apply our results to the recently proposed cipher $\text{PASTA}$, proposing a modified version called $\text{PASTA}_\text{v2}$ instantiated with one initial random matrix and fixed linear layers - obtained by combining two MDS matrices with the Kronecker product - for the other rounds. Compared with $\text{PASTA}$, the state-of-the-art cipher for BGV- and BFV-style HHE, our evaluation shows that $\text{PASTA}_\text{v2}$ is up to 100% faster in plain evaluation while having the same homomorphic runtime in the SEAL homomorphic encryption library, and up to 30% faster evaluation time in HElib.
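The fixed-layer construction via the Kronecker product can be illustrated with a short numpy sketch; the $2 \times 2$ matrices below are arbitrary invertible examples, not the MDS matrices used in the paper:

```python
# Building a larger fixed linear layer as the Kronecker product of two
# small invertible matrices, in the spirit of the abstract's description.

import numpy as np

A = np.array([[1, 2], [3, 5]])   # det = -1, invertible mod any prime
B = np.array([[2, 3], [1, 1]])   # det = -1, invertible mod any prime

L = np.kron(A, B)                # a 4x4 layer from two 2x2 components
print(L)

# For 2x2 blocks, det(kron(A, B)) = det(A)^2 * det(B)^2, so the Kronecker
# product of invertible matrices stays invertible.
```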
Last updated:  2024-05-22
Physical Ring Signature
Xavier Bultel
Ring signatures allow members of a group (called "ring") to sign a message anonymously within the group, which is chosen ad hoc at the time of signing (the members do not need to have interacted before). In this paper, we propose a physical version of ring signatures. Our signature is based on one-out-of-many signatures, a method used in many real cryptographic ring signatures. It consists of boxes containing coins locked with padlocks that can only be opened by a particular group member. To sign a message, a group member shakes the boxes of the other members of the group so that the coins are in a random state ("heads" or "tails", corresponding to bits $0$ and $1$), and opens their box to arrange the coins so that the exclusive "or" of the coins corresponds to the bits of the message they wish to sign. We present a prototype that can be used with coins, or with dice for messages encoded in larger (non-binary) alphabets. We suggest that this system can be used to explain ring signatures to the general public in a fun way. Finally, we propose a semi-formal analysis of the security of our signature based on real cryptographic security proofs.
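The signing mechanic translates into a few lines of Python; this toy simulation (our own illustration, not the authors' prototype) randomizes everyone else's coins and fixes the signer's coins so the column-wise XOR spells out the message:

```python
# Toy simulation of the coin-based ring signature mechanic.

import secrets
from functools import reduce

def sign(message_bits, ring_size, signer):
    # shake everyone's box: coins land in random states
    boxes = [[secrets.randbelow(2) for _ in message_bits]
             for _ in range(ring_size)]
    # the signer opens their own box and fixes each coin so that the
    # XOR over all boxes equals the corresponding message bit
    for j, m in enumerate(message_bits):
        others = 0
        for i in range(ring_size):
            if i != signer:
                others ^= boxes[i][j]
        boxes[signer][j] = others ^ m
    return boxes

def verify(message_bits, boxes):
    xor = [reduce(lambda a, b: a ^ b, col) for col in zip(*boxes)]
    return xor == list(message_bits)

if __name__ == "__main__":
    msg = [1, 0, 1, 1]
    boxes = sign(msg, ring_size=4, signer=2)
    assert verify(msg, boxes)   # verification reveals nothing about who signed
```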
Last updated:  2024-05-22
FairSec: Fair and Maliciously Secure Circuit-PSI via SPDZ-Compatible Oblivious PRF
Yaxi Yang, Xiaojian Liang, Xiangfu Song, Linting Huang, Hongyu Ren, Changyu Dong, and Jianying Zhou
Private Set Intersection (PSI) allows two parties to compute the intersection of their input sets without revealing more information than the computation results. PSI and its variants have numerous applications in practice. Circuit-PSI is a well-known variant that aims to compute an arbitrary functionality $f$ on items in the intersection set. However, existing circuit-PSI protocols only provide security against \emph{semi-honest} adversaries. One straightforward solution is to extend a pure garbled-circuit-based PSI (NDSS'12) to a maliciously secure circuit-PSI, but this results in non-concrete complexity. Another is to convert state-of-the-art semi-honest circuit-PSI protocols (EUROCRYPT'21; PoPETS'22) to be secure in the malicious setting. However, this runs into \emph{the consistency issue}, since the parties cannot guarantee that the inputs to the functionality $f$ remain unchanged from the previous step. This paper addresses the aforementioned issue by introducing FairSec, the first maliciously secure circuit-PSI protocol. The central building block of FairSec, called Distributed Dual-key Oblivious PRF (DDOPRF), provides an oblivious evaluation of secret-shared inputs with dual keys. Additionally, we ensure the compatibility of DDOPRF with SPDZ, enhancing the versatility of our circuit-PSI protocol. Notably, our construction allows us to guarantee fairness within circuit-PSI effortlessly. Importantly, FairSec also achieves linear computation and communication complexities.
Last updated:  2024-05-22
A Fault-Resistant NTT by Polynomial Evaluation and Interpolation
Sven Bauer, Fabrizio De Santis, Kristjane Koleci, and Anita Aghaie
In computer arithmetic, the Number Theoretic Transform (NTT) plays a significant role in the efficient implementation of cyclic and nega-cyclic convolutions, with applications to multiplying large integers and large-degree polynomials. Multiplying polynomials is a common operation in lattice-based cryptography. Hence, the NTT is a core component of several lattice-based cryptographic algorithms. Two well-known examples are the key encapsulation mechanism Kyber and the digital signature algorithm Dilithium. In this work, we introduce a novel and efficient method for safeguarding the NTT against fault attacks. This new countermeasure is based on polynomial evaluation and interpolation. We prove its error detection capability, calculate the required additional computational effort, and show how to concretely use it to secure the NTT in Kyber and Dilithium against fault injection attacks. Finally, we provide concrete implementation results of the proposed technique on a resource-constrained ARM Cortex-M4 microcontroller; e.g., the technique exhibits a 72% relative overhead when applied to Dilithium.
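As a toy illustration of evaluation-based fault detection (far simpler than the paper's countermeasure), note that the NTT outputs the evaluations $a(\omega^0), \dots, a(\omega^{n-1})$, so its first entry must equal $a(1) = \sum_j a_j \bmod q$, a value that can be recomputed independently of the transform:

```python
# A naive length-256 NTT over Kyber's field, with a simple consistency
# check: out[0] must equal the sum of the coefficients mod q. This only
# sketches the evaluation idea; it is not the paper's actual scheme.

import random

Q = 3329   # Kyber's modulus
W = 17     # a primitive 256-th root of unity mod 3329 (used in Kyber)

def ntt_naive(a):
    n = len(a)
    return [sum(a[j] * pow(W, i * j, Q) for j in range(n)) % Q
            for i in range(n)]

def ntt_checked(a):
    out = ntt_naive(a)
    if out[0] != sum(a) % Q:   # independent evaluation at X = 1
        raise RuntimeError("fault detected in NTT")
    return out

if __name__ == "__main__":
    a = [random.randrange(Q) for _ in range(256)]
    ntt_checked(a)   # passes when no fault was injected
```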
Last updated:  2024-05-22
A new attack against search-LWE using Diophantine approximations
Robin Frot and Daniel Zentai
In this paper, we present a new attack against search-LWE instances with a small secret key. The method consists of lifting the public key to $\mathbb Z$ and finding a good Diophantine approximation of the public key divided by the modulus $a$. This is done using lattice reduction algorithms. The lattice considered and the approximation quality needed are similar to those in known decision-LWE attacks for small keys. However, we do not require an in-depth analysis of the reduction algorithm (any reduction algorithm giving small enough vectors suffices for us), and our method solves the search problem directly, which is harder than the decision problem.
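The attack itself relies on lattice reduction, but the classical one-dimensional route to good Diophantine approximations is the continued-fraction algorithm; the sketch below is only meant to make 'good Diophantine approximation' concrete and is not part of the paper's method:

```python
# Continued-fraction convergents p/q of a rational a/m; each satisfies
# |a/m - p/q| < 1/q^2, the textbook notion of a good approximation.

from fractions import Fraction

def convergents(a, m):
    p0, q0, p1, q1 = 0, 1, 1, 0
    while m:
        d, a, m = a // m, m, a % m
        p0, p1 = p1, d * p1 + p0
        q0, q1 = q1, d * q1 + q0
        yield p1, q1

if __name__ == "__main__":
    a, m = 12345, 99991
    for p, q in convergents(a, m):
        err = abs(Fraction(a, m) - Fraction(p, q))
        assert err < Fraction(1, q * q)
        print(p, q, float(err))
```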
Last updated:  2024-05-22
Modelling Ciphers with Overdefined Systems of Quadratic Equations: Application to Friday, Vision, RAIN and Biscuit
Fukang Liu, Mohammad Mahzoun, and Willi Meier
It is well-known that a system of equations becomes easier to solve when it is overdefined. In this work, we study how to overdefine the system of equations describing the arithmetic-oriented (AO) ciphers Friday, Vision, and RAIN, as well as a special system of quadratic equations over $\mathbb F_{2^{\ell}}$ used in the post-quantum signature scheme Biscuit. Our method is inspired by Courtois-Pieprzyk's and Murphy-Robshaw's methods to model AES with overdefined systems of quadratic equations over $\mathbb F_2$ and $\mathbb F_{2^8}$, respectively. However, our method is more refined and much simpler than Murphy-Robshaw's method, since it can take full advantage of the low-degree $\mathbb F_2$-linearized affine polynomials used in Friday and Vision, and the overdefined system of equations over $\mathbb F_{2^{\ell}}$ can be described in a clean way with our method. For RAIN, we instead consider quadratic Boolean equations rather than equations over large finite fields $\mathbb F_{2^{\ell}}$. Specifically, we demonstrate that the special structure of RAIN allows us to set up many more linearly independent quadratic Boolean equations than those obtained only with Courtois-Pieprzyk's method. Moreover, we further demonstrate that the underlying key-recovery problem in Biscuit (NIST PQC Round 1 Additional Signatures) can also be described by solving a much overdefined system of quadratic equations over $\mathbb F_{2^{\ell}}$. On the downside, the constructed systems of quadratic equations for these ciphers cannot be viewed as semi-regular, which makes it challenging to upper-bound the complexity of the Gröbner basis attack. However, this new modelling method can significantly improve the lower bound on the complexity of Gröbner basis attacks on these ciphers, i.e., we view the complexity of solving a random system of quadratic equations of the same scale as the lower bound. How to better estimate the upper and lower bounds of Gröbner basis attacks on these ciphers based on our modelling method is left as an open problem.
Last updated:  2024-05-22
SmartBean: Transparent, Concretely Efficient, Polynomial Commitment Scheme with Logarithmic Verification and Communication Costs that Runs on Any Group
Frank Y.C. Lu
We introduce a new, concretely efficient, transparent polynomial commitment scheme with logarithmic verification time and communication cost that can run on any group. Existing group-based polynomial commitment schemes must use less efficient groups, such as class groups of unknown order or pairing-based groups, to achieve transparency (no trusted setup), making them expensive to adopt in practice. We offer the first group-based polynomial commitment scheme that runs on any group, does not rely on expensive pairing operations or require class groups of unknown order to achieve transparency, and still provides logarithmic verifier time and communication cost. The prover's work is dominated by $4n$ $\mathbb{G}$ multi-exponentiations, the verifier's work is dominated by $4 \log n$ $\mathbb{G}$ exponentiations, and the communication cost is $4 \log n$ $\mathbb{G}$ elements. Since our protocol can run on fast groups such as Curve25519, we can easily accelerate the multi-exponentiations with Pippenger's algorithm. The concrete performance of our work shows a significant improvement over the current state of the art in almost every aspect.
Last updated:  2024-05-22
Universal Blockchain Assets
Owen Vaughan
We present a novel protocol for issuing and transferring tokens across blockchains without the need for a trusted third party or a cross-chain bridge. In our scheme, the blockchain is used for double-spend protection only, while the authorisation of token transfers is performed off-chain. Due to the universality of our approach, it works in almost all blockchain settings. It can be implemented immediately on UTXO blockchains such as Bitcoin without modification, and on account-based blockchains such as Ethereum by introducing a smart contract that mimics the properties of a UTXO. We provide a proof-of-concept implementation of an NFT that is issued on Bitcoin, transferred to Ethereum, and then transferred back to Bitcoin. Our new approach means that users no longer need to be locked into one blockchain when issuing and transferring tokens.
Last updated:  2024-05-24
Differential Cryptanalysis on Quantum Computers
Kyungbae Jang, Yujin Oh, and Hwajeong Seo
As quantum computing progresses, extensive research has been conducted to find quantum advantages in the field of cryptography. Combining quantum algorithms with classical cryptanalytic methods, such as differential cryptanalysis and linear cryptanalysis, has the potential to reduce complexity. In this paper, we present a quantum differential finding circuit for differential cryptanalysis. In our quantum circuit, both the plaintext and the input difference are in a superposition state. While our method cannot achieve a direct speedup with quantum computing, it offers a different perspective by relying on quantum probability in a superposition state. For the quantum simulation, given the limited number of qubits, we simulate our quantum circuit by implementing the Toy-ASCON quantum circuit.
Last updated:  2024-05-28
Relating Code Equivalence to Other Isomorphism Problems
Huck Bennett and Kaung Myat Htay Win
We study the complexity of the Code Equivalence Problem on linear error-correcting codes by relating its variants to isomorphism problems on other discrete structures---graphs, lattices, and matroids. Our main results are a fine-grained reduction from the Graph Isomorphism Problem to the Linear Code Equivalence Problem over any field $\mathbb{F}$, and a reduction from the Linear Code Equivalence Problem over any field $\mathbb{F}_p$ of prime, polynomially bounded order $p$ to the Lattice Isomorphism Problem. Both of these reductions are simple and natural. We also give reductions between variants of the Code Equivalence Problem, and study the relationship between isomorphism problems on codes and linear matroids.
Last updated:  2024-05-21
Doubly-Efficient Batch Verification in Statistical Zero-Knowledge
Or Keret, Ron D. Rothblum, and Prashant Nalini Vasudevan
A sequence of recent works, concluding with Mu et al. (Eurocrypt 2024), has shown that every problem $\Pi$ admitting a non-interactive statistical zero-knowledge proof (NISZK) has an efficient zero-knowledge batch verification protocol. Namely, an NISZK protocol for proving that $x_1,\dots,x_k \in \Pi$ with communication that only scales poly-logarithmically with $k$. A caveat of this line of work is that the prover runs in exponential time, whereas for NP problems it is natural to hope to obtain a doubly-efficient proof - that is, a prover that runs in polynomial time given the $k$ NP witnesses. In this work we show that every problem in $NISZK \cap UP$ has a doubly-efficient interactive statistical zero-knowledge proof with communication $poly(n,\log(k))$ and $poly(\log(k),\log(n))$ rounds. The prover runs in time $poly(n,k)$ given access to the $k$ UP witnesses. Here $n$ denotes the length of each individual input, and UP is the subclass of NP relations in which YES instances have unique witnesses. This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.
Last updated:  2024-05-21
Information-theoretic Multi-server Private Information Retrieval with Client Preprocessing
Jaspal Singh, Yu Wei, and Vassilis Zikas
A private information retrieval (PIR) protocol allows a client to fetch any entry from single or multiple servers who hold a public database (of size $n$) while ensuring no server learns any information about the client's query. Initial works on PIR focused on reducing the communication complexity of PIR schemes. However, standard PIR protocols are often impractical in applications involving large databases, due to their inherent large server-side computation complexity, which is at least linear in the database size. Hence, a line of research has focused on alternative PIR models that can achieve improved server complexity. The model of private information retrieval with client preprocessing has received a lot of interest, beginning with the work of Corrigan-Gibbs and Kogan (Eurocrypt 2020). In this model, the client interacts with two servers in an offline phase and stores a local state, which it uses in the online phase to perform PIR queries. Constructions in this model achieve online client/server computation and bandwidth that are sublinear in the database size, at the cost of a one-time expensive offline phase. To date, all known constructions in this model are based on symmetric-key primitives or on stronger public-key assumptions like Decisional Diffie-Hellman (DDH) and Learning with Errors (LWE). This work initiates the study of unconditional PIR with client preprocessing - where we avoid using any cryptographic assumptions. We present a new PIR protocol for $2t$ servers (where $t \in [2,\log_2 n/2]$) with threshold 1, where client and server online computation is $\widetilde{O}(\sqrt{n})$ (the $\widetilde{O}(\cdot)$ notation hides $\textsf{poly}\log$ factors) - matching the computation costs of other works based on cryptographic assumptions. The client storage and online communication complexity are $\widetilde{O}(n^{0.5+1/2t})$ and $\widetilde{O}(n^{1/2})$, respectively. Compared to previous works, our PIR-with-client-preprocessing protocol also has a very concretely efficient client/server online computation phase, dominated by xor operations rather than cryptographic operations that are orders of magnitude slower. As a building block for our construction, we introduce a new information-theoretic primitive called a \textit{privately multi-puncturable random set}, which might be of independent interest. This new primitive can be viewed as a generalization of a privately puncturable pseudo-random set, which is the key cryptographic building block used in previous works on PIR with client preprocessing.
Last updated:  2024-05-21
Elliptic Curve Cryptography for the masses: Simple and fast finite field arithmetic
Michael Scott
Shaped prime moduli are often considered for use in elliptic curve and isogeny-based cryptography to allow for faster modular reduction. Here we focus on the most common choices of shaped primes that have been suggested: pseudo-Mersenne, generalized Mersenne and Montgomery-friendly primes. We consider how best to exploit these shapes for maximum efficiency, and provide an open-source tool to automatically generate, test and time working high-level-language finite-field code. Next we consider the advantage to be gained from implementations written in assembly language that exploit special instructions, SIMD hardware if present, and the particularities of the algorithm being implemented.
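As an illustration of the pseudo-Mersenne case, the following sketch reduces modulo $p = 2^{255} - 19$; it is a hand-written example of the general idea, not output of the paper's code-generation tool:

```python
# A sketch of pseudo-Mersenne reduction for p = 2^255 - 19 (illustration only).
# For p = 2^K - C, any x = hi*2^K + lo satisfies x = hi*C + lo (mod p), so the
# expensive division is replaced by a shift, a mask and a small multiplication.
import random

K, C = 255, 19
P = (1 << K) - C
MASK = (1 << K) - 1

def reduce_pseudo_mersenne(x):
    """Reduce 0 <= x < p^2 modulo p = 2^K - C without division."""
    for _ in range(2):             # two folds bring x below 2p
        hi, lo = x >> K, x & MASK
        x = hi * C + lo
    return x - P if x >= P else x  # one conditional subtraction finishes

x = random.randrange(P * P)
assert reduce_pseudo_mersenne(x) == x % P
```

After the first fold the value is below $20 \cdot 2^{255}$, and after the second it is below $2p$, which is why a single conditional subtraction suffices.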
Last updated:  2024-05-21
Ideal-to-isogeny algorithm using 2-dimensional isogenies and its application to SQIsign
Hiroshi Onuki and Kohei Nakagawa
The Deuring correspondence is a correspondence between supersingular elliptic curves and quaternion orders. Under this correspondence, an isogeny between elliptic curves corresponds to a quaternion ideal. This correspondence plays an important role in isogeny-based cryptography, and several algorithms to compute an isogeny corresponding to a quaternion ideal (ideal-to-isogeny algorithms) have been proposed. In particular, SQIsign is a signature scheme based on the Deuring correspondence that uses an ideal-to-isogeny algorithm. In this paper, we propose a novel ideal-to-isogeny algorithm using isogenies of dimension $2$. Our algorithm is based on Kani's reducibility theorem, which gives a connection between isogenies of dimensions $1$ and $2$. By taking the characteristic $p$ of the base field to be of the form $2^fg - 1$ for a small odd integer $g$, our algorithm requires only $2$-isogenies and $(2, 2)$-isogenies for its operations over $\mathbb{F}_{p^2}$. We apply our algorithm to SQIsign and compare the efficiency of the new algorithm with the existing one. Our analysis shows that key generation and signing with our algorithm are at least twice as fast as with the existing algorithm at NIST security level 1. This advantage becomes more significant at higher security levels. In addition, our algorithm also improves the efficiency of verification in SQIsign.
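For readers who want to experiment, the sketch below searches for primes of the required shape $p = 2^f g - 1$ with small odd $g$; the parameters are illustrative, not those proposed in the paper, and the snippet assumes the sympy package for primality testing:

```python
# Illustrative search for primes of the shape p = 2^f * g - 1 with small odd g.
# Example parameters only, not the ones proposed in the paper; assumes sympy.
from sympy import isprime

def shaped_primes(f, g_max):
    """Return (g, p) pairs with p = 2^f * g - 1 prime, g odd, 3 <= g < g_max."""
    return [(g, (1 << f) * g - 1)
            for g in range(3, g_max, 2)
            if isprime((1 << f) * g - 1)]

for g, p in shaped_primes(f=64, g_max=100):
    print(f"p = 2^64 * {g} - 1 is a {p.bit_length()}-bit prime")
```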
Last updated:  2024-05-25
Measure-Rewind-Extract: Tighter Proofs of One-Way to Hiding and CCA Security in the Quantum Random Oracle Model
Jiangxia Ge, Heming Liao, and Rui Xue
The One-Way to Hiding (O2H) theorem, first given by Unruh (J ACM 2015) and then restated by Ambainis et al. (CRYPTO 2019), is a crucial technique for solving the reprogramming problem in the quantum random oracle model (QROM). It provides an upper bound $d\cdot\sqrt{\epsilon}$ for the distinguisher's advantage, where $d$ is the query depth and $\epsilon$ denotes the advantage of a one-wayness attacker. Later, in order to obtain a tighter upper bound, Kuchta et al. (EUROCRYPT 2020) proposed the Measure-Rewind-Measure (MRM) technique and proved the Measure-Rewind-Measure O2H (MRM-O2H) theorem, which provides the upper bound $d\cdot\epsilon$. They also posed an open question: can their MRM technique be combined with Ambainis et al.'s semi-classical oracle technique (CRYPTO 2019) or Zhandry's compressed oracle technique (CRYPTO 2019) to prove a new O2H theorem with an upper bound even tighter than $d\cdot\epsilon$? In this paper, we give an affirmative answer to this question. We propose a new technique named Measure-Rewind-Extract (MRE) by combining the MRM technique with the semi-classical oracle technique. Using the MRE technique, we prove the Measure-Rewind-Extract O2H (MRE-O2H) theorem, which provides the upper bound $\sqrt{d}\cdot\epsilon$. As an important application of our MRE-O2H theorem, for the $FO^{\cancel{\bot}}$, $FO_m^{\cancel{\bot}}$, $FO^{\bot}$ and $FO_m^\bot$ proposed by Hofheinz et al. (TCC 2017), i.e., the key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto transformation, we prove the following results in the QROM: Their IND-CCA security can be reduced to the IND-CPA security of the underlying public key encryption (PKE) scheme without the square-root advantage loss. In particular, compared with the IND-CCA proof of $FO^{\cancel{\bot}}$ given by Kuchta et al. (EUROCRYPT 2020), ours removes the injectivity assumption and has a tighter security bound. Under the assumption that the underlying PKE scheme is unique randomness recoverable, we prove for the first time that their IND-CCA security can be reduced to the OW-CPA security of the underlying PKE scheme without the square-root advantage loss.
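For query depth $d \geq 1$ and advantage $0 < \epsilon \leq 1$, the three bounds compare as follows (a worked comparison for orientation, not a claim taken from the paper):

```latex
\[
  \underbrace{\sqrt{d}\cdot\epsilon}_{\text{MRE-O2H}}
  \;\le\;
  \underbrace{d\cdot\epsilon}_{\text{MRM-O2H}}
  \;\le\;
  \underbrace{d\cdot\sqrt{\epsilon}}_{\text{original O2H}}.
\]
% Example: d = 2^{20} and eps = 2^{-40} give sqrt(d)*eps = 2^{-30} and
% d*eps = 2^{-20}, while d*sqrt(eps) = 1 is vacuous.
```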
Last updated:  2024-05-21
Instance-Hiding Interactive Proofs
Changrui Mu and Prashant Nalini Vasudevan
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89]. We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in AM/poly and coAM/poly. 2. If an average-case hard language has an IHIP, then One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIPs are closed under composition with any efficiently computable function. We further study a stronger version of IHIP (which we call Strong IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Strong IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Strong IHIP, then One-Way Functions exist.
Last updated:  2024-05-20
Spec-o-Scope: Cache Probing at Cache Speed
Gal Horowitz, Eyal Ronen, and Yuval Yarom
Over the last two decades, microarchitectural side channels have been the focus of a large body of research: developing new attack techniques, exploiting them against various classes of targets, and designing mitigations. One line of work focuses on increasing the speed of the attacks, achieving higher levels of temporal resolution that can allow attackers to learn finer-grained information. The most recent addition to this line of work is Prime+Scope [CCS '21], which only requires a single access to the L1 cache to confirm the absence of victim activity in a cache set. While significantly faster than prior attacks, Prime+Scope is still an order of magnitude slower than a cache access. In this work, we set out to close this gap. We draw on techniques from research into microarchitectural weird gates, software constructs that exploit transient execution to perform arbitrary computation on cache state. We design the Spec-o-Scope gate, a new weird gate that performs 10 cache probes in quick succession, which forms the basis for our eponymous attack. Our Spec-o-Scope attack achieves an order-of-magnitude improvement in temporal resolution compared to the previous state of the art, Prime+Scope, reducing the measurement time from ~70 cycles to only 5 --- just one cycle more than an L1 cache access. We experimentally verify that our attack can detect timing differences at a 5-cycle resolution. Finally, using our Spec-o-Scope attack, we are able to show the first microarchitectural side-channel attack on an unmodified AES S-box-based implementation, which uses generic CPU features and does not require manipulation of the operating system's scheduler.
Last updated:  2024-05-20
Byzantine Reliable Broadcast with One Trusted Monotonic Counter
Yackolley Amoussou-Guenou, Lionel Beltrando, Maurice Herlihy, and Maria Potop-Butucaru
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator even if some processes (perhaps including the initiator) are Byzantine. In asynchronous settings, it has been known since the seminal work of Bracha [Bracha87] that Byzantine reliable broadcast can be implemented deterministically if $n \geq 3t+1$, where $t$ is an upper bound on the number of Byzantine processes. Here, we study Byzantine reliable broadcast when processes are equipped with trusted execution environments (TEEs), special software or hardware designed to prevent equivocation. Our contribution is twofold. First, we show that, contrary to common belief, Bracha's algorithm still needs $n \geq 3t+1$ even when each process is equipped with a TEE. Second, we present a novel algorithm that uses a single TEE (at the initiator) and implements Byzantine reliable asynchronous broadcast with $n \geq 2t+1$.
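The following minimal sketch (not the paper's protocol; the class names are ours, and HMAC with a shared key merely stands in for TEE attestation) illustrates why a single signed monotonic counter at the initiator rules out equivocation:

```python
# Why a trusted monotonic counter prevents equivocation: every broadcast
# message is bound to a fresh counter value, so two different messages can
# never carry the same sequence number. Illustrative sketch only.
import hmac, hashlib

class TrustedCounter:
    """Stand-in for the initiator's TEE: a signed, strictly monotonic counter."""
    def __init__(self, key):
        self._key, self._ctr = key, 0

    def bind(self, message):
        self._ctr += 1                      # monotonic: never reused or rolled back
        tag = hmac.new(self._key, f"{self._ctr}|{message}".encode(),
                       hashlib.sha256).hexdigest()
        return self._ctr, message, tag

def verify(key, ctr, message, tag):
    expected = hmac.new(key, f"{ctr}|{message}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"tee-attestation-key"
tee = TrustedCounter(key)
certified = tee.bind("deliver: tx-42")
assert verify(key, *certified)
# A Byzantine initiator cannot get a *different* message certified under the
# same counter value: the TEE signs exactly one message per counter step.
```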
Last updated:  2024-05-20
SQIPrime: A dimension 2 variant of SQISignHD with non-smooth challenge isogenies
Max Duparc and Tako Boris Fouotsa
We introduce SQIPrime, a post-quantum digital signature scheme based on the Deuring correspondence and Kani's Lemma. Compared to its predecessors SQISign and especially SQISignHD, SQIPrime further expands the use of high-dimensional isogenies, already used for verification in SQISignHD, to both key generation and commitment. In doing so, it no longer relies on smooth-degree isogenies (of dimension 1). SQIPrime operates with a prime of the form $p = 2^\alpha f-1$, as opposed to SQISignHD, which uses SIDH primes. The most intriguing novelty in SQIPrime is the use of a non-smooth-degree isogeny as the challenge isogeny. In the SQISign family of identification schemes, the challenge isogeny is computed by the verifier, who is not well-equipped to compute an isogeny of large non-smooth degree. To overcome this obstacle, the verifier samples the kernel of the challenge isogeny, and the task of computing this isogeny is delegated to the prover. The response is modified in such a way that the verifier can check that the challenge isogeny was correctly computed by the prover, on top of verifying the usual response in the SQISign family. We describe two variants of SQIPrime: SQIPrime4D, which uses dimension 4 isogenies to represent the response isogeny, and SQIPrime2D, which solely uses dimension 2 isogenies to represent the response isogeny and hence is more efficient than SQIPrime4D and SQISignHD.
Last updated:  2024-05-20
Reducing the Share Size of Weighted Threshold Secret Sharing Schemes via Chow Parameters Approximation
Oriol Farràs and Miquel Guiot
A secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets. In a weighted threshold access structure, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of the weights is at least the threshold value. For these access structures, the share size of the best known secret sharing schemes is either linear in the weights or quasipolynomial in the number of parties, which in general leads to long shares. In certain settings, a way to circumvent this efficiency problem is to approximate the access structure by another one that admits more efficient schemes. This work is dedicated to the open problem posed by this strategy: finding secret sharing schemes with a good tradeoff between the efficiency and the accuracy of the approximation. We present a method to approximate weighted threshold access structures by others that admit schemes with small shares. This method is based on the techniques for the approximation of the Chow parameters developed by De et al. [Journal of the ACM, 2014]. Our method provides secret sharing schemes with share size $n^{1+o(1)}$, where $n$ is the number of parties, and whose access structure is close to the original one. Namely, in this approximation the condition of being authorized or not is preserved for almost all subsets of parties. In addition, applying the recent results on computational secret sharing schemes by Applebaum et al. [STOC, 2023], we show that there exist computational secret sharing schemes whose security is based on the RSA assumption and whose share size is polylogarithmic in the number of parties.
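A toy experiment (not the paper's Chow-parameter machinery) conveys the approximation idea: coarsely rounding the weights preserves the authorized/unauthorized status of the vast majority of random subsets, while shrinking the weights dramatically:

```python
# Toy illustration of approximating a weighted threshold access structure by
# one with much smaller weights. This is NOT the paper's construction; it only
# shows that coarse rounding rarely flips the authorized/unauthorized status.
import random

def authorized(weights, threshold, subset):
    return sum(weights[i] for i in subset) >= threshold

n = 64
weights = [random.randrange(1, 2**20) for _ in range(n)]
threshold = sum(weights) // 2

scale = max(weights) / 2**6                 # keep only ~6 bits per weight
small = [round(w / scale) for w in weights]
small_thr = round(threshold / scale)

trials, agree = 10_000, 0
for _ in range(trials):
    s = [i for i in range(n) if random.random() < 0.5]
    agree += authorized(weights, threshold, s) == authorized(small, small_thr, s)
print(f"agreement on random subsets: {agree / trials:.1%}")
```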
Last updated:  2024-05-20
SQIsign2D-East: A New Signature Scheme Using 2-dimensional Isogenies
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new schemes have been proposed that use high-dimensional isogenies. Among them, the signature scheme SQIsignHD has an even shorter signature size than SQIsign. However, it requires 4-dimensional isogeny computations for signature verification. In this paper, we propose a new signature scheme, SQIsign2D-East, which requires only 2-dimensional isogeny computations for verification, thus reducing the computational cost of verification. First, we generalize an algorithm called RandIsogImg, which computes a random isogeny of non-smooth degree. Then, using this generalized RandIsogImg, we construct the new signature scheme SQIsign2D-East.
Last updated:  2024-05-20
Sublinear-Round Broadcast without Trusted Setup
Andreea B. Alexandru, Julian Loss, Charalampos Papamanthou, Giorgos Tsimos, and Benedikt Wagner
Byzantine broadcast is one of the fundamental problems in distributed computing. Many of its practical applications, from multiparty computation to consensus mechanisms for blockchains, require increasingly weaker trust assumptions, as well as scalability for an ever-growing number of users $n$. This rules out existing solutions, which either run in a number of rounds linear in $n$ or rely on trusted setup. In this paper, we propose the first sublinear-round and trustless Byzantine broadcast protocol for the dishonest-majority setting. Unlike previous sublinear-round protocols, our protocol assumes neither the existence of a trusted dealer who honestly issues keys and correlated random strings to the parties nor random oracles. Instead, we present a solution whose setup is limited to an unstructured uniform reference string and a plain public key infrastructure (a.k.a. bulletin-board PKI). Our broadcast protocol builds on top of a moderated gradecast protocol which parties can use to reach weak agreement on shared random strings. Using these strings, we can then run, in an unbiased fashion, a committee-based Byzantine protocol similar to that of Chan et al. (PKC 2020), which terminates in a sublinear number of rounds. To this end, we propose a novel construction for committee election which relies neither on random oracles nor on a trusted setup, and uses NIZKs and time-lock puzzles. Our protocol is resilient against an adaptive adversary who corrupts any constant fraction of parties.
Last updated:  2024-05-23
Time-Based Cryptography From Weaker Assumptions: Randomness Beacons, Delay Functions and More
Damiano Abram, Lawrence Roy, and Mark Simkin
The assumption that certain computations inherently require some sequential time has established itself as a powerful tool for cryptography. It allows for security and liveness guarantees in distributed protocols that are impossible to achieve with classical hardness assumptions. Unfortunately, many constructions from the realm of time-based cryptography are based on new and poorly understood hardness assumptions, which tend not to stand the test of time (cf. Leurent et al. 2023, Peikert & Tang 2023). In this work, we make progress on several fronts. We formally define the concept of a delay function and present a construction thereof from minimal assumptions. We show that these functions, in combination with classical cryptographic objects that satisfy certain efficiency criteria, would allow for constructing delay encryption, which is otherwise only known to exist based on a new hardness assumption about isogenies. We formally define randomness beacons as they are used in the context of blockchains, and we show that (linearly homomorphic) time-lock puzzles allow for efficiently constructing them. Our work puts time-based cryptography on a firmer theoretical footing, provides new constructions from simpler assumptions, and opens new avenues for constructing delay encryption.
Last updated:  2024-05-20
The Ouroboros of ZK: Why Verifying the Verifier Unlocks Longer-Term ZK Innovation
Denis Firsov and Benjamin Livshits
Verifying the verifier in the context of zero-knowledge proofs is an essential part of ensuring the long-term integrity of the zero-knowledge ecosystem. This is vital both for zero-knowledge rollups and for other industrial applications of ZK. In addition to further minimizing the required trust and reducing the trusted computing base (TCB), having a verified verifier opens the door to decentralized proof generation by potentially untrusted parties. We outline a research program and justify the need for more work at the intersection of ZK and formal verification research.
Last updated:  2024-05-30
Bootstrapping Bits with CKKS
Youngjin Bae, Jung Hee Cheon, Jaehyung Kim, and Damien Stehlé
The Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption scheme is designed to efficiently perform computations on real numbers in an encrypted state. Recently, Drucker et al. [J. Cryptol.] proposed an efficient strategy for using CKKS in a black-box manner to perform computations on binary data. In this work, we introduce several CKKS bootstrapping algorithms designed specifically for ciphertexts encoding binary data. Crucially, the new CKKS bootstrapping algorithms make it possible to bootstrap ciphertexts containing the binary data in the most significant bits. First, this allows one to decrease the moduli used in bootstrapping, saving a larger share of the modulus budget for non-bootstrapping operations. In particular, we obtain full-slot bootstrapping in ring degree $2^{14}$ for the first time. Second, the ciphertext format is compatible with the one used in the DM/CGGI fully homomorphic encryption schemes. Interestingly, we may combine our CKKS bootstrapping algorithms for bits with the fast ring packing technique from Bae et al. [CRYPTO'23]. This leads to a new bootstrapping algorithm for DM/CGGI that outperforms the state-of-the-art approaches when the number of bootstraps to be performed simultaneously is in the low hundreds.
Last updated:  2024-05-20
Breaking Verifiable Delay Functions in the Random Oracle Model
Ziyi Guan, Artur Riazanov, and Weiqiang Yuan
A verifiable delay function (VDF) is a cryptographic primitive that takes a long time to compute, but produces a unique output that is efficiently and publicly verifiable. Mahmoody, Smith and Wu (ICALP 2020) prove that VDFs satisfying both perfect completeness and adaptive perfect uniqueness do not exist in the random oracle model. Moreover, Ephraim, Freitag, Komargodski, and Pass (EUROCRYPT 2020) construct a VDF with perfect completeness and computational uniqueness, a much weaker guarantee compared to perfect uniqueness, in the random oracle model under the repeated squaring assumption. In this work, we close the gap between existing constructions and known lower bounds by showing that VDFs with imperfect completeness and non-adaptive computational uniqueness cannot be constructed in the pure random oracle model (without additional computational assumptions).
Last updated:  2024-05-19
Information-Theoretic Multi-Server PIR with Global Preprocessing
Ashrujit Ghoshal, Baitian Li, Yaohua Ma, Chenxin Dai, and Elaine Shi
We propose a new unified framework for constructing multi-server, information-theoretic Private Information Retrieval (PIR) schemes that leverage global preprocessing to achieve sublinear computation per query. Despite a couple of earlier attempts, our understanding of PIR schemes in the global preprocessing model remains limited, and so far we only know a few sparse points in the broad design space. With our new unified framework, we can generalize the results of Beimel, Ishai, and Malkin to broader parameter regimes, thus enabling a tradeoff between bandwidth and computation. Specifically, for any constant $S > 1$, we can get an $S$-server scheme whose bandwidth consumption is as small as $n^{1/(S+1) + \epsilon}$ while achieving computation in the $n^\delta$ regime for some constant $\delta \in (0, 1)$. Moreover, we can get a scheme with polylogarithmic bandwidth and computation, requiring only a polylogarithmic number of servers.
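Reading off the stated bound for small server counts (a direct instantiation of the formula above, not additional results from the paper):

```latex
\[
  S = 2:\ \text{bandwidth } n^{1/3+\epsilon}, \qquad
  S = 3:\ \text{bandwidth } n^{1/4+\epsilon},
\]
% in each case with per-query computation n^{\delta} for some constant
% \delta \in (0, 1).
```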
Last updated:  2024-05-19
Decentralized Multi-Client Functional Encryption with Strong Security
Ky Nguyen, David Pointcheval, and Robert Schädlich
Decentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. They can independently encrypt the multiple plaintext-inputs to be given for evaluation to the function embedded in the functional decryption key, defined by multiple parameter-inputs, and they keep control over these functions, as they all have to contribute to the generation of the functional decryption keys. Tags can be used in the ciphertexts and the keys to specify which inputs can be combined together. Like any encryption scheme, DMCFE provides privacy of the plaintexts. But the functions associated with the functional decryption keys might be sensitive too (e.g., a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process. In this paper, we provide new proof techniques to analyze a new concrete construction of function-hiding DMCFE for inner products, with strong security guarantees in the random oracle model: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys, with unbounded repetitions of the same message tags in the ciphertext-queries and a fixed polynomially-large number of repetitions of the same key tags in the key-queries, while allowing static corruption of the secret encryption keys. Previous constructions were proven secure in the selective setting only.
Last updated:  2024-05-19
On SIS-problem-based random Feistel ciphers and its statistical evaluation of resistance against differential cryptanalysis
Yu Morishima and Masahiro Kaminaga
Provable security based on a robust mathematical framework is the gold standard for security evaluation in cryptography. Several provably secure cryptosystems have been studied for public-key cryptography. However, provably secure symmetric-key cryptography has received little attention. Although provably secure symmetric-key cryptosystems based on the hardness of factorization and discrete logarithm problems are known, they are not only slower than conventional block ciphers but can also be broken by quantum computers. Our study aims to tackle this latter problem by proposing a new provably secure Feistel cipher using collision-resistant hash functions based on the Short Integer Solution (SIS) problem. Even if cipher primitives are resistant to quantum algorithms, it is crucial to determine whether the cipher is resilient to differential cryptanalysis, a fundamental and powerful attack against symmetric-key cryptosystems. In this paper, we demonstrate that the proposed cipher family is secure against differential cryptanalysis by deriving an upper bound on the maximum differential probability. In addition, we demonstrate the potential success of differential cryptanalysis for short block sizes and statistically evaluate the average resistance of cipher instances based on differential characteristic probabilities. This method approximates the S-box output using a folded two-dimensional normal distribution and employs a generalized extreme value distribution; it is first introduced in this paper and serves as the basis for studying the differential characteristics of lattice matrices and the number of secure rounds. This study is foundational research on differential cryptanalysis against block ciphers using a lattice matrix based on SIS.
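To fix ideas, here is a minimal Feistel-network sketch with a hash-based round function; SHA-256 merely stands in for the paper's SIS-based collision-resistant hash, and the block and round parameters are illustrative rather than the proposed ones:

```python
# A minimal Feistel network with a hash-based round function (sketch only).
# SHA-256 is a stand-in for the SIS-based collision-resistant hash; the
# half-block size and round count are illustrative.
import hashlib

BLOCK_HALF = 16  # bytes per half-block

def round_fn(half, round_key):
    return hashlib.sha256(round_key + half).digest()[:BLOCK_HALF]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block, round_keys):
    left, right = block[:BLOCK_HALF], block[BLOCK_HALF:]
    for rk in round_keys:                # (L, R) -> (R, L xor F(R, rk))
        left, right = right, xor(left, round_fn(right, rk))
    return left + right

def feistel_decrypt(block, round_keys):
    left, right = block[:BLOCK_HALF], block[BLOCK_HALF:]
    for rk in reversed(round_keys):      # invert one round at a time
        left, right = xor(right, round_fn(left, rk)), left
    return left + right

keys = [bytes([i]) * 16 for i in range(8)]      # 8 toy round keys
msg = b"thirty-two bytes of plaintext!!!"       # exactly 32 bytes
assert feistel_decrypt(feistel_encrypt(msg, keys), keys) == msg
```

Note that decryption needs only the forward direction of the round function, which is why a one-way (hash-based) round function still yields an invertible cipher.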
Last updated:  2024-05-18
Extractable Witness Encryption for Signed Vector Digests from Pairings and Trust-Scalable One-Time Programs
Sora Suegami
Witness encryption (WE) allows a ciphertext to be encrypted under an NP problem such that anyone holding a valid witness for that problem can decrypt it (flexible decryptors), without interaction with others (non-interaction). However, existing schemes are either impractical or achieve only some of these WE features. We propose a novel WE scheme that 1) is based on bilinear maps such as pairings, 2) achieves the property of flexible decryptors, and 3) still requires the decryptor's communication with a trusted signer, who only performs a fixed amount of computation and communication at regular intervals, regardless of the number of ciphertexts. It provides extractable security and can be extended to a threshold multiple-signers setting, avoiding reliance on a single signer. As a significant application of our WE scheme, we build a novel one-time program (OTP) scheme in which the signers' computational and communication costs remain constant, independent of the number of OTPs to be evaluated simultaneously. This feature ensures scalable OTP evaluations without risking decreased signer participation or compromised decentralization due to increased operational costs for the signers.
Last updated:  2024-05-18
Lattice-based Broadcast Authenticated Searchable Encryption for Cloud Storage
Yibo Cao, Shiyuan Xu, Xiu-Bo Chen, Gang Xu, and Siu-Ming Yiu
The extensive use of cloud storage has created an urgent need to search and share data. Public key authenticated encryption with keyword search (PAEKS) allows retrieval from encrypted data while resisting insider keyword guessing attacks (IKGAs). Most PAEKS schemes only support the single-receiver model, which greatly limits their applicability. To address this concern, there has been research on broadcast authenticated encryption with keyword search (BAEKS) to achieve multi-receiver ciphertext search. But to the best of our knowledge, existing BAEKS schemes are susceptible to quantum computing attacks. In this paper, we propose lattice-based BAEKS, the first post-quantum broadcast authenticated encryption with keyword search, providing robust quantum safety in the multi-receiver model. Specifically, we leverage several lattice sampling algorithms and the rejection sampling technique to construct our BAEKS scheme. Furthermore, we incorporate the minimal cover set technique and a lattice basis extension algorithm to construct an enhanced version, namely FS-BAEKS. Moreover, we give a rigorous security analysis of our scheme. Ultimately, the BAEKS and Test algorithms in our scheme are up to approximately 12x and 402x faster, respectively, than prior art when the number of receivers is six, which is practical for cloud storage systems.
Last updated:  2024-05-17
SQIsign2D-West: The Fast, the Small, and the Safer
Andrea Basso, Luca De Feo, Pierrick Dartois, Antonin Leroux, Luciano Maino, Giacomo Pope, Damien Robert, and Benjamin Wesolowski
We introduce SQIsign2D-West, a variant of SQIsign using two-dimensional isogeny representations. SQIsignHD was the first variant of SQIsign to use higher-dimensional isogeny representations. Its eight-dimensional variant is geared towards provable security but is deemed impractical. Its four-dimensional variant is geared towards efficiency and has significantly faster signing times than SQIsign, but slower verification owing to the complexity of the four-dimensional representation. Its authors commented on the apparent difficulty of getting any improvement over SQIsign by using two-dimensional representations. In this work, we introduce new algorithmic tools that make two-dimensional representations a viable alternative. These lead to a signature scheme with sizes comparable to SQIsignHD, slightly slower signing than SQIsignHD but still much faster than SQIsign, and the fastest verification of any known variant of SQIsign. We achieve this without compromising on the security proof: the assumptions behind SQIsign2D-West are similar to those of the eight-dimensional variant of SQIsignHD. Additionally, like SQIsignHD, SQIsign2D-West favourably scales to high levels of security. Concretely, for NIST level I we achieve signing times of 80 ms and verifying times of 4.5 ms, using optimised arithmetic based on intrinsics available to the Ice Lake architecture. For NIST level V, we achieve 470 ms for signing and 31 ms for verifying.
Last updated:  2024-05-17
Enhancing Watermarked Language Models to Identify Users
Aloni Cohen, Alexander Hoover, and Gabe Schoenbach
A zero-bit watermarked language model produces text that is indistinguishable from that of the underlying model, but which can be detected as machine-generated using a secret key. Unfortunately, merely detecting AI-generated spam, say, as watermarked may not prevent future abuses. If we could additionally trace the text to a spammer's API token or account, we could then cut off their access or pursue legal action. We introduce multi-user watermarks, which allow tracing model-generated text to individual users or to groups of colluding users. We construct multi-user watermarking schemes from undetectable zero-bit watermarking schemes. Importantly, our schemes provide both zero-bit and multi-user assurances at the same time: detecting shorter snippets just as well as the original scheme, and tracing longer excerpts to individuals. Along the way, we give a generic construction of a watermarking scheme that embeds long messages into generated text. Ours are the first black-box reductions between watermarking schemes for language models. A major challenge for black-box reductions is the lack of a unified abstraction for robustness — that marked text is detectable even after edits. Existing works give incomparable robustness guarantees, based on bespoke requirements on the language model's outputs and the users' edits. We introduce a new abstraction to overcome this challenge, called AEB-robustness. AEB-robustness provides that the watermark is detectable whenever the edited text "approximates enough blocks" of model-generated output. Specifying the robustness condition amounts to defining approximates, enough, and blocks. Using our new abstraction, we relate the robustness properties of our message-embedding and multi-user schemes to that of the underlying zero-bit scheme, in a black-box way. Whereas prior works only guarantee robustness for a single text generated in response to a single prompt, our schemes are robust against adaptive prompting, a stronger and more natural adversarial model.