Suppose that we are given a system of modular equations

$$\begin{cases} a_{11} x + a_{12} y \equiv b_1 \pmod{n} \\ a_{21} x + a_{22} y \equiv b_2 \pmod{n} \end{cases}$$

Trivially, this can be solved for the unknowns $x$ and $y$ using a simple Gaussian-elimination step, i.e., writing the equations as

$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \equiv \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} \pmod{n}.$$

This is perfectly fine as long as the equations share a common modulus $n$, but what about when the equations share unknowns but are defined under different moduli? Let us take a real-world scenario from the realm of cryptography.
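As a concrete illustration of the easy case, the elimination step can be carried out in plain Python for a prime modulus, where every division becomes a modular inversion. The helper name `solve_2x2_mod` and the sample coefficients are ours:

```python
# Solve { a11*x + a12*y = b1, a21*x + a22*y = b2 } (mod p) for prime p
# by one Gaussian-elimination step followed by back-substitution.
def solve_2x2_mod(a11, a12, b1, a21, a22, b2, p):
    # Eliminate x from the second row: row2 <- row2 - (a21 / a11) * row1.
    f = a21 * pow(a11, -1, p) % p
    a22e = (a22 - f * a12) % p
    b2e = (b2 - f * b1) % p
    # Back-substitute: first y, then x.
    y = b2e * pow(a22e, -1, p) % p
    x = (b1 - a12 * y) * pow(a11, -1, p) % p
    return x, y

x, y = solve_2x2_mod(3, 5, 7, 2, 9, 4, 101)
assert (3 * x + 5 * y) % 101 == 7
assert (2 * x + 9 * y) % 101 == 4
```

Note that this assumes the relevant pivots are invertible modulo $p$, which is guaranteed to fail gracefully (a `ValueError`) when they are not.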

**Example: DSA using linear congruential generator (LCG)**

The DSA (Digital Signature Algorithm) has two functions $\mathsf{Sign}$ and $\mathsf{Verify}$. The private key is $x$ and the public key is $y = g^x \bmod p$, where $p, q$ are primes such that $q \mid p - 1$ and $g$ has order $q$ modulo $p$. To sign a message $m$ (invoking $\mathsf{Sign}$), the following steps are performed:

- Let $H$ be a hash function and $m$ the message to be signed.
- Generate a (random) ephemeral value $k$ where $0 < k < q$.
- Compute $r = (g^k \bmod p) \bmod q$. If $r = 0$, go to step 2.
- Compute $s = k^{-1}(H(m) + xr) \bmod q$. If $s = 0$, go to step 2.
- The signature is $(r, s)$.

To verify a signature $(r, s)$ (invoking $\mathsf{Verify}$), the below steps are performed:

- If $0 < r < q$ or $0 < s < q$ is not satisfied, reject the signature.
- Compute $w = s^{-1} \bmod q$.
- Compute $u_1 = H(m) \cdot w \bmod q$.
- Compute $u_2 = r \cdot w \bmod q$.
- Compute $v = (g^{u_1} y^{u_2} \bmod p) \bmod q$.
- If $v = r$, accept. Otherwise, reject.
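The two procedures can be sketched in plain Python. This is a toy illustration with tiny, insecure parameters of our own choosing ($p = 23$, $q = 11$, $g = 2$), and we reduce the SHA-256 digest modulo $q$ rather than taking leftmost bits as the standard prescribes:

```python
from hashlib import sha256
from random import randrange

# Toy parameters (NOT secure): q = 11 divides p - 1 = 22, and g = 2 has
# order q modulo p. x is the private key, y = g^x mod p the public key.
p, q, g = 23, 11, 2
x = 7
y = pow(g, x, p)

def H(m):
    # Toy hash: SHA-256 reduced modulo q.
    return int.from_bytes(sha256(m).digest(), 'big') % q

def sign(m):
    while True:
        k = randrange(1, q)                      # random ephemeral value
        r = pow(g, k, p) % q
        if r == 0:
            continue                             # "go to step 2"
        s = pow(k, -1, q) * (H(m) + x * r) % q
        if s == 0:
            continue                             # "go to step 2"
        return r, s

def verify(m, r, s):
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = H(m) * w % q
    u2 = r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r

r, s = sign(b'hello')
assert verify(b'hello', r, s)
assert not verify(b'hello', 0, s)   # out-of-range r is rejected
```

The correctness follows from $g^{u_1} y^{u_2} = g^{s^{-1}(H(m) + xr)} = g^k \pmod p$, so the verifier recomputes $r$.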

In the case of using an *LCG* as pseudorandom-number generator for the values $k$, two consecutively generated values (we assume one signature was generated right after the other) will be correlated as $k_2 = a k_1 + b \bmod M$ for some public parameters $a, b, M$. Assuming that $M \neq q$, we obtain the equations

$$\begin{cases} s_1 k_1 \equiv H(m_1) + x r_1 \pmod{q} \\ s_2 k_2 \equiv H(m_2) + x r_2 \pmod{q} \\ k_2 \equiv a k_1 + b \pmod{M} \end{cases}$$

from the fourth step of $\mathsf{Sign}$.
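A quick sanity check of these relations, using toy parameters of our own choosing (nothing here is secure, and the LCG parameters are made up):

```python
from hashlib import sha256

# Toy DSA parameters (NOT secure): q = 11 divides p - 1, g = 2 has order q.
p, q, g = 23, 11, 2
x = 7                      # private key

def H(m):
    return int.from_bytes(sha256(m).digest(), 'big') % q

# LCG with public parameters a, b, M generating consecutive ephemeral values.
a, b, M = 5, 3, 16
k1 = 6
k2 = (a * k1 + b) % M

def sign_with_k(m, k):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (H(m) + x * r) % q
    return r, s

r1, s1 = sign_with_k(b'first', k1)
r2, s2 = sign_with_k(b'second', k2)

# The two signing equations in the unknowns x, k1, k2 (mod q) ...
assert s1 * k1 % q == (H(b'first') + x * r1) % q
assert s2 * k2 % q == (H(b'second') + x * r2) % q
# ... coupled through the LCG relation under a *different* modulus M:
assert k2 == (a * k1 + b) % M
```

An attacker sees $(r_1, s_1)$, $(r_2, s_2)$, the hashes and the LCG parameters, and wants $x$, $k_1$, $k_2$; the mixed moduli $q$ and $M$ are exactly what blocks plain Gaussian elimination.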

**Chinese Remainder Theorem (CRT)**

A well-known theorem in number theory is the Chinese Remainder Theorem (commonly referred to by the acronym CRT), which deals with simple equations over different and pairwise-coprime moduli. For instance,

$$\begin{cases} x \equiv 2 \pmod{3} \\ x \equiv 3 \pmod{5} \\ x \equiv 2 \pmod{7} \end{cases}$$

which has the solution $x = 23$. In solving actual multivariate equations, we will hit a brick wall, as needed operations such as row reduction and modular inversion do not work.
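For pairwise-coprime moduli, the textbook construction does work; a minimal sketch in plain Python (the helper name `crt` is ours):

```python
from math import prod

def crt(residues, moduli):
    """Classic CRT for pairwise-coprime moduli via modular inversion."""
    N = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Ni = N // m                      # product of all other moduli
        total += r * Ni * pow(Ni, -1, m) # term is r mod m, 0 mod the others
    return total % N

assert crt([2, 3, 2], [3, 5, 7]) == 23
assert crt([1, 2], [4, 9]) == 29
```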

The following Sage code takes any pair of moduli, which do not need to be relatively prime, and computes the CRT solution via lattice reduction:

```python
def crt_lll(m, n):
    p = n[0]
    q = n[1]
    a = m[0] % p
    b = m[1] % q
    L = p * q  # can be tweaked
    A = matrix([
        [ 1,  1, 1/L,     0],
        [ p,  0,   0,     0],
        [ 0,  q,   0,     0],
        [ 0,  0, p*q/L,   0],
        [-a, -b,   0,     L]
    ])
    B = A.LLL()
    assert B[-1, 0] == 0 and B[-1, 1] == 0
    return B[-1, 2] * L

print(crt_lll([777 % 234, 777 % 567], [234, 567]))
```

**Thue’s lemma**

Thue’s lemma states that given a modulus $m$ and an integer $a$ with $\gcd(a, m) = 1$, there exist two integers $x, y$ with $0 < |x| < \sqrt{m}$ and $|y| < \sqrt{m}$ such that

$$a x \equiv y \pmod{m}.$$

We can solve this with lattice reduction as follows:

```python
def thues_lemma(a, mod):
    '''
    Find small elements x, y such that a * x = y (mod mod),
    |x| < sqrt(mod) and |y| < sqrt(mod).
    '''
    A = matrix([
        [1,   a],
        [0, mod]
    ])
    A = A.LLL()
    return A[0][0], A[0][1]

# test
p = random_prime(2^512)
q = random_prime(2^512)
n = p * q
a = randint(0, 2^1024)
u, v = thues_lemma(a, n)
assert a * u % n == v % n
```

**Lattice reduction**

A *lattice* is a discrete linear space spanned by a set of basis vectors. For instance, $\mathbf{b}_1$ and $\mathbf{b}_2$ are two basis vectors that span a lattice in the plane. They are not unique in that sense but, evidently, they can be the shortest basis vectors possible (such a basis can be found using the LLL algorithm). If the two column vectors $\mathbf{b}_1$ and $\mathbf{b}_2$ are basis vectors, then the corresponding lattice is denoted $\mathcal{L}(\mathbf{b}_1, \mathbf{b}_2)$. Here, $\mathcal{L}(\mathbf{b}_1, \mathbf{b}_2) = \{c_1 \mathbf{b}_1 + c_2 \mathbf{b}_2 : c_1, c_2 \in \mathbb{Z}\}$.

The problem of finding the *shortest vector* is called SVP. Sloppily formulated: for a given lattice $\mathcal{L}$, find the shortest (in terms of Euclidean norm) non-zero vector. For a reduced basis as above, the answer would simply be the shorter of the two basis vectors. Starting with different basis vectors, the problem will show itself to be trickier.

A related problem is to find the *closest vector*, which is commonly called CVP. In this scenario, we are given a vector $\mathbf{t}$ in the vector space (but it might not be in $\mathcal{L}$) and we want to find the closest vector in $\mathcal{L}$. There is a simple reduction from SVP to CVP by setting $\mathbf{t} = \mathbf{0}$ (and asking for the closest non-zero vector). There is also a reduction in the other direction, which involves extending the basis of $\mathcal{L}$ to also include $\mathbf{t}$. This is called *embedding*.
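For intuition, Babai's rounding technique solves (approximate) CVP by expressing the target in lattice coordinates and rounding each coordinate to the nearest integer. A two-dimensional sketch in plain Python (the function name and example bases are ours):

```python
from fractions import Fraction

def babai_round(b1, b2, t):
    """Approximate-CVP by Babai rounding on the 2-D lattice L(b1, b2):
    solve [b1 b2] * c = t over the rationals (Cramer's rule), round c,
    and map back into the lattice."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    c1 = round(Fraction(t[0] * b2[1] - t[1] * b2[0], det))
    c2 = round(Fraction(b1[0] * t[1] - b1[1] * t[0], det))
    return (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])

# With the trivial basis, rounding is exact:
assert babai_round((1, 0), (0, 1), (13, -7)) == (13, -7)
# (5, 6) is not in L((2,1), (1,3)); the nearby lattice point (5, 5) is:
assert babai_round((2, 1), (1, 3), (5, 6)) == (5, 5)
```

The quality of the approximation depends on how orthogonal the basis is, which is why one typically LLL-reduces the basis first.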

Let us return to the example of DSA and the different-moduli equations. The equations we got from before,

$$\begin{cases} s_1 k_1 \equiv H(m_1) + x r_1 \pmod{q} \\ s_2 k_2 \equiv H(m_2) + x r_2 \pmod{q} \\ k_2 \equiv a k_1 + b \pmod{M} \end{cases}$$

can be formulated as basis vectors. Introducing integers $c_1, c_2, c_3$ for the modular reductions, we have in matrix form

$$\begin{pmatrix} s_1 & 0 & -r_1 & -q & 0 & 0 \\ 0 & s_2 & -r_2 & 0 & -q & 0 \\ a & -1 & 0 & 0 & 0 & -M \end{pmatrix} \begin{pmatrix} k_1 \\ k_2 \\ x \\ c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} H(m_1) \\ H(m_2) \\ -b \end{pmatrix}$$

or (by the embedding technique):

$$\begin{pmatrix} s_1 & 0 & -r_1 & -q & 0 & 0 & -H(m_1) \\ 0 & s_2 & -r_2 & 0 & -q & 0 & -H(m_2) \\ a & -1 & 0 & 0 & 0 & -M & b \end{pmatrix} \begin{pmatrix} k_1 \\ k_2 \\ x \\ c_1 \\ c_2 \\ c_3 \\ 1 \end{pmatrix} = \mathbf{0}.$$

The idea is now to force the values corresponding to the guessed values to be close in the normed space. By adding additional constraints, we may perform the following minimization:

$$\min \left\| (k_1, k_2, x) \right\|$$

where $0 < k_1, k_2 < M$, $0 < x < q$, and $\|\cdot\|$ denotes the Euclidean norm. Finding a closest approximation (preferably using LLL/Babai) yields a solution to the DSA equations using an LCG.

**Example: Knapsack**

The *knapsack problem* (sometimes the *subset-sum problem*) is stated as follows. Given a weight $S$ and items/weights $w_1, w_2, \ldots, w_n$, find the sequence $e_1, e_2, \ldots, e_n \in \{0, 1\}$ such that

$$\sum_{i=1}^{n} e_i w_i = S.$$
It is not too hard to prove that this is NP-complete, but we omit the reduction here.
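For small $n$, the problem can of course be solved by exhaustive search over all $2^n$ sequences; a sketch with made-up weights:

```python
from itertools import product

def knapsack_bruteforce(weights, S):
    """Try every e in {0,1}^n until sum(e_i * w_i) == S; None if no solution."""
    for e in product([0, 1], repeat=len(weights)):
        if sum(ei * wi for ei, wi in zip(e, weights)) == S:
            return e
    return None

w = [7, 11, 19, 35, 73]
assert knapsack_bruteforce(w, 11 + 35) == (0, 1, 0, 1, 0)
assert knapsack_bruteforce([2, 4], 3) is None
```

The NP-completeness statement concerns large $n$, where this exponential search is hopeless; the cryptographic constructions below rely on exactly that.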

In a cryptographic setting, this can be used to encode data in the sequence $e_1, \ldots, e_n$. This is done in the Merkle-Hellman public-key cryptosystem. It is easy to see that encryption actually is the encoding procedure mentioned above. However, the decryption procedure is a bit more involved; in fact, it requires a special class of instances. If the weights can be transformed into a super-increasing sequence, it becomes trivial to retrieve the sequence $e_1, \ldots, e_n$.

Think of it this way. Assume that the weights $w_1, \ldots, w_{n-1}$ sum to something smaller than the largest weight $w_n$. Then, if the target sum $S$ (the ciphertext) is at least $w_n$, we know that $e_n = 1$, since the smaller weights alone sum to less than $w_n \leq S$ (if instead $S < w_n$, then $e_n$ must be $0$, assuming there is a unique solution). We can now remove $w_n$ and $e_n$ from the equation by solving for the remaining weights and a recalculated $S - w_n$. This procedure can be repeated until all weights have been found (or $S = 0$).
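The procedure described above amounts to a greedy scan from the largest weight down; a sketch with a made-up super-increasing sequence:

```python
def decode_superincreasing(weights, S):
    """Recover e for a super-increasing weight sequence: scanning from the
    largest weight down, w_i is in the subset iff the remaining target is
    at least w_i (all smaller weights together sum to less than w_i)."""
    e = [0] * len(weights)
    for i in reversed(range(len(weights))):
        if S >= weights[i]:
            e[i] = 1
            S -= weights[i]
    assert S == 0, "no solution"
    return e

w = [2, 3, 7, 15, 31]   # each weight exceeds the sum of all previous ones
assert decode_superincreasing(w, 3 + 15 + 31) == [0, 1, 0, 1, 1]
```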

Merkle-Hellman provided a way to transform the super-increasing sequence into a seemingly hard one. We omit the details here, but the procedure can be found all over the internet.

It turns out we can use lattice reduction for this problem. We create a basis matrix in the following way

$$\mathbf{A} = \begin{pmatrix} w_1 & 1 & 0 & \cdots & 0 \\ w_2 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \\ w_n & 0 & 0 & \cdots & 1 \\ -S & 0 & 0 & \cdots & 0 \end{pmatrix}$$

and perform lattice reduction on it. Since the LLL/BKZ algorithm achieves a set of short basis vectors, it will try to find something that makes the entries of the left-most column small. What this actually means is that it will find a sum of the rows that achieves a solution to the knapsack problem: a vector of the form $(0, e_1, \ldots, e_n)$.

Of course, the solution must only contain values in $\{0, 1\}$. Depending on the instance, this may or may not be the case. So, how do we penalize the algorithm for choosing values outside the allowed set?

**A new approach**\*

Let us create a new basis matrix $\mathbf{A}$ in the following manner:

$$\mathbf{A} = \begin{pmatrix} w_1 & 1 & 1 & 0 & \cdots & 0 \\ w_2 & 1 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \ddots & \\ w_n & 1 & 0 & 0 & \cdots & 1 \\ -S & -t & 0 & 0 & \cdots & 0 \end{pmatrix}$$

where $t$ is a guess on the number of non-zero values in the solution.

The algorithm performs the following steps:

- Randomly (or deterministically) pick a guess $t$ on the number of non-zero values in the solution.
- Update the matrix $\mathbf{A}$ and run LLL/BKZ on $\mathbf{A}$.
- If a satisfiable reduced-basis vector is found, return it. Otherwise, go to step 1.

It does not guarantee that an incorrect solution is penalized, but it increases the probability of it (it reduces the set of ‘false’ basis vectors). We omit a formal proof, but think of it this way: assume that $\mathbf{e}'$ is a false solution and a reduced-basis vector of $\mathbf{A}$. In the new formulation, it also has to sum to the number of non-zero weights in the correct solution. Assume all false vectors appear randomly (they do not, but let us assume it!). Then, for a correct guess of $t$, the probability of a false vector having exactly $t$ non-zero entries is $\binom{n}{t} 2^{-n}$. If $t = n/2$, this is $O(1/\sqrt{n})$, which is a noticeable reduction for most values of $n$.

Here is an example of a CTF challenge which could be solved with the above technique.

\* *I do not know if this should be attributed to someone.*

**Example: Ring-LWE**

TBA

How do you set up the variables x’, k1′, k2′ in DSA-LCG? Could you attach example code?

Usually, they are set up as described in, e.g., https://cseweb.ucsd.edu/~mihir/papers/dss-lcg.pdf