Hard math problems, complexity and assumptions

Hard problems and complexity

Hidden subgroup problem

The hidden subgroup problem (HSP) is a topic of research in mathematics and theoretical computer science. The framework captures problems like factoring, discrete logarithm, graph isomorphism, and the shortest vector problem. This makes it especially important in the theory of quantum computing, because Shor's quantum algorithm for factoring is essentially equivalent to solving the hidden subgroup problem for finite Abelian groups, while the other problems correspond to hidden subgroup problems over groups that are not Abelian.

Hidden Field Equations problem

HFE is based on polynomials over finite fields F_q of different sizes, used to disguise the relationship between the private key and the public key. HFE is a family consisting of basic HFE and combinatorial versions of HFE. Because it uses private affine transformations to hide the extension field and the private polynomials, the HFE family of cryptosystems rests on the hardness of finding solutions to a system of multivariate quadratic equations (the so-called MQ problem).
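
To make the MQ problem concrete, here is a minimal sketch in Python. The sizes and the dense random system are hypothetical toy choices; real HFE instances hide structured polynomials over far larger fields.

```python
# Toy MQ instance over GF(2): m random quadratic polynomials in n variables.
# Sizes are hypothetical; real HFE uses much larger parameters.
import random

n, m = 6, 6

def random_quadratic(n):
    # Coefficients for the terms x_i*x_j (i <= j), x_i, and a constant.
    quad = {(i, j): random.randint(0, 1) for i in range(n) for j in range(i, n)}
    lin = [random.randint(0, 1) for _ in range(n)]
    return quad, lin, random.randint(0, 1)

def evaluate(poly, x):
    quad, lin, const = poly
    v = const
    for (i, j), c in quad.items():
        v ^= c & x[i] & x[j]          # addition is XOR, multiplication is AND
    for i, c in enumerate(lin):
        v ^= c & x[i]
    return v

system = [random_quadratic(n) for _ in range(m)]
secret = [random.randint(0, 1) for _ in range(n)]   # a hidden solution
target = [evaluate(p, secret) for p in system]      # public: system and target

def brute_force(system, target, n):
    # The MQ problem: find x with evaluate(p, x) == t for every equation.
    # Exhaustive search works only because n is tiny here.
    for k in range(2 ** n):
        x = [(k >> i) & 1 for i in range(n)]
        if all(evaluate(p, x) == t for p, t in zip(system, target)):
            return x

print(brute_force(system, target, n), secret)
```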

Hidden Field Equations have been used to construct digital signature schemes such as Quartz and SFLASH.

Factorisation

Basics of factoring

RSA assumption

The basic RSA assumption states that, given a modulus N of unknown factorization, a public exponent e, and a ciphertext C = M^e mod N, it is infeasible to recover the message M.

Strong RSA assumption

The Strong-RSA assumption was introduced by Barić and Pfitzmann (1997).

Descriptions

The strong RSA assumption states that the RSA problem is intractable even when the solver is allowed to choose the public exponent e (for e ≥ 3). More specifically, given a modulus N of unknown factorization and a ciphertext C, it is infeasible to find any pair (M, e) such that C ≡ M^e (mod N).

The strong RSA assumption was first used for constructing signature schemes provably secure against existential forgery without resorting to the random oracle model. The strong RSA assumption is mentioned in Boneh and Boyen, 'Short Signatures Without Random Oracles and the SDH Assumption in Bilinear Groups' (2008).

Here it is phrased as: 'It states that, given an RSA modulus N and s ∈ Z_N, it is difficult to construct a non-trivial pair (c, s^(1/c)) where c ∈ Z. Roughly speaking, what makes Strong RSA useful for constructing secure signature schemes is the following property: given a problem instance (N, s) it is possible to construct a new instance (N, s') with some number q of known solutions (c_i, (s')^(1/c_i)), such that learning any additional solution (c, (s')^(1/c)) makes it possible to solve the original problem instance. This property provides a way to prove security against an adaptive chosen message attack.'
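
A small sketch of why the factorization of N must be unknown: anyone who knows p and q can take c-th roots mod N, and thus produce strong-RSA pairs (c, s^(1/c)) at will. The toy primes below are hypothetical.

```python
# With p and q known, c-th roots mod N are easy whenever gcd(c, phi) == 1,
# so producing strong-RSA pairs (c, s^(1/c)) is trivial.
from math import gcd

p, q = 1009, 1013             # hypothetical toy primes; real moduli are ~2048 bits
N, phi = p * q, (p - 1) * (q - 1)
s = 123456 % N

c = 65537                     # any exponent with gcd(c, phi) == 1 works
assert gcd(c, phi) == 1
d = pow(c, -1, phi)           # c^(-1) mod phi(N)  (Python 3.8+)
root = pow(s, d, N)           # root = s^(1/c) mod N

assert pow(root, c, N) == s   # (s^(1/c))^c == s: a valid strong-RSA pair (c, root)
print(c, root)
```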

Finding the order of a group element

Kaye, Laflamme and Mosca (An Introduction to Quantum Computing, 2007) state the following. Miller showed in 1975/76 how the integer factorisation problem reduces probabilistically to the problem of finding the order of a group element. This means that if we have an efficient order-finding algorithm, we obtain an efficient factoring algorithm (which uses the order-finding algorithm as a subroutine). See J.C.P. Miller, 'On factorisation, with a suggested new approach', Mathematics of Computation 29(129):155-172, 1975.
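
Below is a sketch of the classical part of this reduction. The brute-force order() is only a stand-in for an efficient order-finding subroutine (such as Shor's quantum algorithm); the helper names and toy modulus are assumptions of this sketch.

```python
# Factoring via order finding: if a has even order r mod N and
# a^(r/2) != -1 (mod N), then gcd(a^(r/2) - 1, N) is a nontrivial factor.
from math import gcd

def order(a, N):
    # Smallest r > 0 with a^r == 1 (mod N). Brute force: toy N only.
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order(N, a):
    if gcd(a, N) != 1:
        return gcd(a, N)              # lucky: a shares a factor with N
    r = order(a, N)
    if r % 2 != 0:
        return None                   # odd order: try another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                   # a^(r/2) == -1 mod N: no information
    return gcd(y - 1, N)              # nontrivial factor (y + 1 also works)

N = 15
for a in range(2, N):
    f = factor_via_order(N, a)
    if f and 1 < f < N:
        print(a, f)                   # e.g. a = 2 has order 4; gcd(2^2 - 1, 15) = 3
        break
```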

Discrete Logarithm Problem (DLP)

In any group G, powers b^k can be defined for all integers k, and the discrete logarithm log_b(a) is an integer k such that b^k = a.

DLP applied to signatures

Often the multiplicative group of GF(p), with p a prime, is used. Arithmetic is performed mod p in such a group. This is sometimes referred to as the 'regular' or 'basic' DLP.
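
For a feel of the problem, here is a sketch of the generic baby-step giant-step method, which solves this basic DLP in roughly sqrt(p) steps and memory; the parameters below are hypothetical toy values.

```python
# Baby-step giant-step: solves g^k = a (mod p) in about sqrt(p) operations.
# It illustrates why p must be large enough that even sqrt(p) work is
# out of reach for an attacker.
from math import isqrt

def bsgs(g, a, p):
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    x = a
    for i in range(m):                           # giant steps: a * g^(-im)
        if x in baby:
            return i * m + baby[x]
        x = (x * giant) % p
    return None

p, g = 101, 2           # toy parameters; 2 is a generator mod 101
k = 53
a = pow(g, k, p)
print(bsgs(g, a, p))    # recovers an exponent k' with g^k' == a (mod p)
```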

There are many ways to base an electronic signature scheme on the DLP. These include ElGamal, Schnorr and DSA signatures.

DLP in Elliptic Curve groups

Koblitz and Miller presumed that computing logarithms in the elliptic curve group is intractable. Menezes, Okamoto and Vanstone ('Reducing elliptic curve logarithms to logarithms in a finite field') showed how to reduce the EC logarithm problem in a curve over F_q to the discrete logarithm problem in an extension field F_(q^k), using the Weil pairing. Since the index-calculus method allows computing logarithms in a finite field in sub-exponential time, this reduction is useful for computing EC logarithms provided k is small, which was the case for some curves recommended in 1991.

This raises the question of how one computes logarithms in finite fields... and how doable that is.

DL equivalence proofs

Chaum and Pedersen showed in 1992 that it is possible to prove that log_X(Y) = log_P(Q).
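
Here is a sketch of such an equality proof in Python. Note that the non-interactive Fiat-Shamir hashing below is an assumption of this sketch (the 1992 Chaum-Pedersen protocol itself is interactive), and the toy group parameters are hypothetical.

```python
# Chaum-Pedersen-style proof that log_X(Y) == log_P(Q), i.e. Y = X^s and
# Q = P^s for the same secret s. Toy group: Z_101^*, exponents mod 100.
import hashlib

p = 101                      # toy prime; exponents are reduced mod p - 1
qord = p - 1
X, P = 2, 3                  # two bases (both of order 100 mod 101)
s = 53                       # the common secret exponent
Y, Q = pow(X, s, p), pow(P, s, p)

# Prover: commit with a nonce, derive the challenge by hashing, respond.
r = 17                       # fresh random nonce in a real run
A, B = pow(X, r, p), pow(P, r, p)
c = int.from_bytes(hashlib.sha256(f"{A},{B},{Y},{Q}".encode()).digest(),
                   "big") % qord
z = (r + c * s) % qord

# Verifier: both checks pass only if the same s links (X, Y) and (P, Q),
# since X^z = X^(r + c*s) = A * Y^c, and likewise for P.
assert pow(X, z, p) == (A * pow(Y, c, p)) % p
assert pow(P, z, p) == (B * pow(Q, c, p)) % p
print("proof verifies")
```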

Diffie-Hellman

The Diffie–Hellman problem is related to the DLP. Recall that the DLP is: given g and g^x, compute x.

The DH problem is stated informally as follows: given an element g and the values of g^x and g^y, what is the value of g^(xy)?

This is used among others in the DH key exchange protocol. Participants agree on a group G, a generator g and a prime p for modular calculations. Participant A selects a as secret exponent, and B selects b. They publicly exchange g^a and g^b. An eavesdropper can obtain these values, but cannot determine a or b. The eavesdropper can easily obtain g^(a+b) by multiplication, but not g^(ab). A and B each raise what they received from the other to the power of their own secret exponent, which in both cases yields the same g^(ab). This can be used as a shared secret.

Remember: 3^2 = 9, 3^3 = 27, and 3^2 * 3^3 = 243 = 3^(2+3), not 3^(2*3).
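
A toy run of the exchange described above, with hypothetical parameters (real deployments use groups of roughly 2048 bits, or elliptic curves):

```python
# Toy Diffie-Hellman exchange; all numbers are illustrative only.
p, g = 101, 2               # public parameters agreed by both participants

a = 53                      # A's secret exponent
b = 29                      # B's secret exponent

ga = pow(g, a, p)           # A sends g^a
gb = pow(g, b, p)           # B sends g^b

# Each side raises the received value to its own secret exponent:
shared_a = pow(gb, a, p)    # (g^b)^a = g^(ab)
shared_b = pow(ga, b, p)    # (g^a)^b = g^(ab)
assert shared_a == shared_b

# The eavesdropper sees g^a and g^b; multiplying them gives g^(a+b),
# which is not the shared secret g^(ab):
assert (ga * gb) % p == pow(g, a + b, p)
assert (ga * gb) % p != shared_a
print(shared_a)
```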

Decisional Diffie-Hellman (DDH)

Further

Computational Diffie-Hellman (CDH)

Static Diffie-Hellman

Bilinear Diffie-Hellman, Decision Bilinear Diffie-Hellman and DLIN

The Bilinear Diffie-Hellman problem was apparently introduced by Boneh and Franklin in 'IBE from the Weil pairing' (2001). It is stated as: given g, g^x, g^y, g^z, compute e(g,g)^(xyz).

The Decisional Bilinear Diffie-Hellman problem is stated as: given g, g^x, g^y, g^z and e(g,g)^w, determine whether w = xyz.

The Decisional Bilinear Diffie-Hellman (BDH) problem is used by Brent Waters in 'Efficient Identity-Based Encryption Without Random Oracles' (2005).

Decision Linear assumption

The Decision Linear (DLIN) assumption is a computational hardness assumption used in ECC. It is useful in settings where the decisional DH assumption does not hold (as is often the case in pairing-based cryptography). The DLIN assumption was introduced by Boneh, Boyen, and Shacham in 'Short Group Signatures' (2004).

Informally, the DLIN assumption states that given (u, v, h, u^x, v^y) with u, v, h random group elements and x, y random exponents, it is hard to distinguish h^(x+y) from an independent random group element n.

Hash Diffie-Hellman

External and Gap Diffie-Hellman

q-Strong Diffie-Hellman

Introduced in Boneh and Boyen, 'Short Signatures Without Random Oracles' (2004). It is analogous to the Strong RSA assumption.

For some parameter q, the SDH assumption in a group G of prime order p states that the following problem is intractable: given g, g^x, g^(x^2), ..., g^(x^q) ∈ G as input, output a pair (c, g^(1/(x+c))) where c ∈ Z_p.

General Diffie-Hellman

Verheul, in 'Practical and backwards friendly privacy revocation in German e-ID, Idemix and U-Prove', states that, although this is not known, it seems very unlikely that the hardness of the DDH problem depends on the generator of the group. To this end we say that one can solve the general DDH problem with respect to the group G if one can solve it for all generators of the group.

Knowledge (extractability) assumptions

Used in SNARKs. See e.g. Bitansky et al.'s 'The Hunting of the SNARK'.

Knowledge (or extractability) assumptions capture the belief that certain computational tasks can be achieved efficiently only by going through specific intermediate stages and thereby obtaining, along the way, some specific intermediate values. Such an assumption asserts that, for any efficient algorithm that achieves the task, there exists a knowledge extractor algorithm that efficiently recovers the said intermediate values.

Extractable functions are functions where any adversary that outputs a point in the range of the function is guaranteed to “know” a corresponding preimage. Here, knowledge is captured by the existence of an efficient extractor that recovers the preimage from the internal state of the adversary. Extractability of functions was defined by Canetti and Dakdouk (ICALP '08) in the context of perfectly one-way functions. It can be regarded as an abstraction from specific knowledge assumptions, such as the Knowledge of Exponent assumption (Hada and Tanaka, Crypto 1998).

Lattices

Hard lattice problems, such as finding short vectors, are e.g. the basis of the lattice-based public-key encryption scheme NTRU.

Learning with errors (LWE)

LWE is based on the idea of representing secret information as a set of equations with errors: it hides the value of a secret by introducing noise to it. It refers to the computational problem of inferring a linear n-ary function f over a finite ring from given samples y_i = f(x_i), some of which may be erroneous.
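
A sketch of what LWE samples look like, with hypothetical toy parameters:

```python
# Each LWE sample is (a_i, b_i) with b_i = <a_i, s> + e_i (mod q) for a
# secret vector s and small noise e_i. Without the noise, s would fall to
# Gaussian elimination; the noise is what makes the problem hard.
# Toy parameters only; realistic n is in the hundreds.
import random

n, q = 4, 97
s = [random.randrange(q) for _ in range(n)]        # the secret

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                      # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

for a, b in [lwe_sample() for _ in range(8)]:
    print(a, b)

# Search-LWE: recover s from such samples.
# Decision-LWE: distinguish them from pairs (a, b) with b uniform in Z_q.
```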

The LWE problem was introduced by Oded Regev in 2005, in 'On lattices, learning with errors, random linear codes, and cryptography', for which he won the 2018 Gödel Prize. It is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems, such as the ring learning with errors key exchange by Peikert.