0ctf’17 – All crypto tasks

Integrity

Just a simple scheme.
nc 202.120.7.217 8221

The encrypt function in the scheme takes the username as input. It hashes the username with MD5, appends the name to the hash and encrypts the result in CBC mode with a secret key k, i.e., \textsf{Enc}_k(H_\text{MD5}(\text{username}) \|\text{username}). The returned secret is \text{IV} \| C_1 \| C_2 \ldots. Notably, the first block C_1 contains the hash, but encrypted.

Recall how the decryption is defined:

P_i = \textsf{Dec}_k(C_i) \oplus C_{i-1}, \quad C_0 = \text{IV}

So, what would happen if we input u = H_\text{MD5}(\texttt{admin}) \| \texttt{admin} as username? Then, the server encrypts H_\text{MD5}(u) \| u, but we want only the encryption of u. As mentioned before, the hash fits perfectly into a single block. So, by removing the \text{IV}, C_1 becomes the new \text{IV} (and an \text{IV} has no visible effect on the plaintext anymore!). What remains is \textsf{Enc}_k(H_\text{MD5}(\texttt{admin}) \| \texttt{admin}), which is all we need.
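In code, the attack is just one query and one slice. A minimal sketch, assuming a hypothetical encrypt(username) oracle that returns the raw bytes \text{IV} \| C_1 \| C_2 \ldots (the oracle name and byte-level details are illustrative, not taken from the challenge):

```python
import hashlib

BLOCK = 16  # cipher block size; an MD5 digest fills exactly one block

def forge_token(encrypt):
    """Given an oracle encrypt(username) -> IV || C1 || C2 || ... (bytes),
    forge a token whose decryption starts with MD5('admin') || 'admin'."""
    # Ask for a username that already contains the target plaintext:
    u = hashlib.md5(b'admin').digest() + b'admin'
    token = encrypt(u)  # encrypts MD5(u) || u under CBC
    # Strip the IV: C1 (the encrypted MD5(u) block) becomes the new IV,
    # and the remaining blocks decrypt to MD5('admin') || 'admin'.
    return token[BLOCK:]
```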

The flag is flag{Easy_br0ken_scheme_cann0t_keep_y0ur_integrity}.

OTP1

I swear that the safest cryptosystem is used to encrypt the secret!

We start off analyzing the code. Looking at the process(m, k) function, we note that it actually operates in \mathbb F_2[x]/P(x), with the mapping that, for instance, the integer 11 is \texttt{1011} in binary, which corresponds to the polynomial x^3 + x + 1. The code computes (m \oplus k)^2 in GF(2^{256}).

The keygen function repeatedly calls key = process(key, seed). The first value of key is random, but the subsequent ones are not: seed remains the same between calls. Define K to be the key and S the seed. Note that all elements are in GF(2^{256}). The first keystream value Z_0 is unknown.

Z_0 = K
Z_1 = (Z_0 \oplus S)^2 = K^2 \oplus S^2
Z_2 = (Z_1 \oplus S)^2 = Z_1^2 \oplus S^2

So, we can compute the seed and key as S^2 = Z_2 \oplus Z_1^2 and K^2 = Z_1 \oplus S^2 = Z_1 \oplus Z_2 \oplus Z_1^2. The individual square roots exist and are unique, since squaring (the Frobenius map) is a bijection in characteristic 2.

def num2poly(num):
    poly = R(0)
    for i, v in enumerate(bin(num)[2:][::-1]):
        if (int(v)):
            poly += x ** i
    return poly

def poly2num(poly):
    bin = ''.join([str(i) for i in poly.list()])
    return int(bin[::-1], 2)

def gf2num(ele):
    return ele.polynomial().change_ring(ZZ)(2)

from os import urandom

def str2num(s):
    return int(s.encode('hex'), 16)

P = 0x10000000000000000000000000000000000000000000000000000000000000425

fake_secret1 = "I_am_not_a_secret_so_you_know_me"
fake_secret2 = "feeddeadbeefcafefeeddeadbeefcafe"
# secret = str2num(urandom(32))  # this is what the challenge encrypted

R = PolynomialRing(GF(2), 'x')
x = R.gen()
GF2f = GF(2**256, name='a', modulus=num2poly(P))

f = open('ciphertext', 'r')
A = GF2f(num2poly(int(f.readline(), 16)))
B = GF2f(num2poly(int(f.readline(), 16)))
C = GF2f(num2poly(int(f.readline(), 16)))

b = GF2f(num2poly(str2num(fake_secret1)))
c = GF2f(num2poly(str2num(fake_secret2)))

# Retrieve partial key stream using known plaintexts
Y = B + b
Z = C + c

Q = (Z + Y**2)
K = (Y + Q).sqrt()

print 'flag{%s}' % hex(gf2num(A + K)).decode('hex')

This gives the flag flag{t0_B3_r4ndoM_en0Ugh_1s_nec3s5arY}.

OTP2

Well, maybe the previous one is too simple. So I designed the ultimate one to protect the top secret!

There are some key insights:

  • The process1(m, k) function is basically the same as in the previous challenge, except that it computes the multiplication m \otimes k and that the elements are in GF(2^{128}) this time. We omit the multiplication symbol from now on.
  • The process2(m, k) function might look involved, but all it does is compute the matrix multiplication of two 2 \times 2 matrices (with elements in GF(2^{128})), i.e., XY = \begin{pmatrix}x_0 & x_1 \\ x_2 & x_3\end{pmatrix}\begin{pmatrix}y_0 & y_1 \\ y_2 & y_3\end{pmatrix}
  • We start with matrices X = \begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix} and Y = \begin{pmatrix}A & B \\ 0 & 1\end{pmatrix}.
  • Raising Y to a power has a closed-form formula: Y^s = \begin{pmatrix}A^s & B(A^{s-1} \oplus \cdots \oplus A \oplus 1 )\\ 0 & 1\end{pmatrix} =\begin{pmatrix}A^s & B\frac{A^{s} \oplus 1 }{A \oplus 1}\\ 0 & 1\end{pmatrix}.
  • The nextrand(rand) function takes the integer value of N, which we call s, and computes Y^s = \begin{pmatrix}A & B \\ 0 & 1\end{pmatrix}^s via a square-and-multiply-type algorithm. In Python, it would be
    def proc2(key):
        AN = A**gf2num(N)
        return key*AN+(AN+1)/(A+1)*B

Let us look at the nextrand(rand) function a little more. Let R be the random value fed to the function. Once Y^s is computed, it returns

Q = R A^s \oplus  \frac{A^s \oplus 1}{A\oplus 1} B

Define U = \frac{B}{A\oplus 1}. Adding this to the above yields

Q\oplus  U = R A^s \oplus  \frac{A^s \oplus 1 \oplus 1}{A\oplus 1} B = A^s(R \oplus \frac{B}{A\oplus 1}) = A^s(R\oplus U).

So, A^s = \frac{Q\oplus U}{R \oplus U}. Note that given two elements of the keystream, all these quantities are known. Once A^s is determined, we compute the discrete logarithm \log_A \frac{Q\oplus U}{R\oplus U} to find s. And once we have s, we also have N. Then, all secrets have been revealed!

From the plaintext, we can immediately get the key K by XORing the first part of the plaintext with the corresponding part of the ciphertext. This gives K =\texttt{0x2fe7878d67cdbb206a58dc100ad980ef}.


R = PolynomialRing(GF(2), 'x')
x = R.gen()

GF2f = GF(2**128, name='a', modulus=num2poly(0x100000000000000000000000000000087))

A = GF2f(num2poly(0xc6a5777f4dc639d7d1a50d6521e79bfd))
B = GF2f(num2poly(0x2e18716441db24baf79ff92393735345))
G1 = GF2f(num2poly(G[1]))  # G[1] is the second keystream value, recovered from known plaintext
G0 = GF2f(num2poly(0x2fe7878d67cdbb206a58dc100ad980ef))  # the key K recovered above

U = B/(A+1)
Z = (G1+U)/(G0+U)
N = discrete_log(Z, A, GF2f.order() - 1)

We can then run the encryption (the default code) with the parameters N,K fixed to obtain the flag flag{LCG1sN3ver5aFe!!}.

Boston Key Party 2017 – Sponge

In this challenge, we are given a hash function, which essentially splits the input into chunks of ten bytes. Each chunk is padded with six null bytes and XORed into the current state. The state is then encrypted with AES under the all-zero key, and the result becomes the new state. This process is repeated until all blocks have been exhausted.
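As a rough sketch (a reconstruction from the description above, with the block cipher abstracted as a parameter E; in the challenge E is AES-128 under the all-zero key, and the padding details are assumed):

```python
def sponge_hash(msg, E, rate=10, width=16):
    """Sketch of the hash: absorb rate-byte chunks into a width-byte
    state; each chunk is padded with width-rate null bytes, XORed into
    the state, and the state is passed through the permutation E."""
    state = b'\x00' * width
    msg += b'\x00' * ((-len(msg)) % rate)  # assumed null padding
    for i in range(0, len(msg), rate):
        block = msg[i:i + rate] + b'\x00' * (width - rate)
        state = E(bytes(a ^ b for a, b in zip(state, block)))
    return state
```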

A simple case is to consider what happens when we hash ten bytes of null data.

\begin{array}{lcll} \textbf{State}_{i} & & \textbf{State}_{i+1} & \textbf{Input}\\ \texttt{00000000000000000000~000000000000}&\rightarrow&\texttt{66e94bd4ef8a2c3b884c~fa59ca342b2e}&\texttt{00000000000000000000}\\ \texttt{e6e94bd4ef8a2c3b884d~fa59ca342b2e} &\rightarrow& \texttt{cccb674e90ee226bea81~557bff1e7123} & \texttt{80000000000000000001} \end{array}

If we can bring it back to the same state, we have a local collision. Here is an example of such a collision:

\begin{array}{lcll} \textbf{State}_{i} & & \textbf{State}_{i+1} & \textbf{Input}\\ \texttt{00000000000000000000~000000000000}&\rightarrow&\texttt{66e94bd4ef8a2c3b884c~fa59ca342b2e}&\texttt{00000000000000000000}\\ \texttt{210e7f3dc8b624d41038~fa59ca342b2e} &\rightarrow& \texttt{26e1e20ccd3ce8e975a6~33c3d2824408} &\texttt{47e734e9273c08ef9874}\\ \texttt{0f809b9375310b0656cf~33c3d2824408}& \rightarrow & \texttt{d979b8a852674734c855~000000000000} & \texttt{2961799fb80de3ef2369}\\ \texttt{00000000000000000000~000000000000} & \rightarrow &\texttt{66e94bd4ef8a2c3b884c~fa59ca342b2e}&\texttt{d979b8a852674734c855}\\ \texttt{e6e94bd4ef8a2c3b884d~fa59ca342b2e} &\rightarrow& \texttt{cccb674e90ee226bea81~557bff1e7123} & \texttt{80000000000000000001} \end{array}

So, how did we create the above collision? Well, actually, it is not too complicated. First, note that we cannot control the last six bytes. Recall that \textsf{Enc}(\textsf{Enc}(x \oplus a \| \texttt{0x00}^6) \oplus b \| \texttt{0x00}^6) \oplus c \| \texttt{0x00}^6 = y. Let us reorder the equation as follows:

\textsf{Enc}(x \oplus a \| \texttt{0x00}^6)  = \textsf{Dec}(y \oplus c \| \texttt{0x00}^6) \oplus b \| \texttt{0x00}^6

We force the trailing six bytes to null and then decrypt that block for different values of y \oplus c \| \texttt{0x00}^6, which we can do since we control c. Equivalently, we can encrypt different values of x \oplus a \| \texttt{0x00}^6, where the trailing six bytes will be the trailing six bytes of \textsf{Enc}(\texttt{0x00}^{16}).

Utilizing the birthday paradox, we can find a collision in the six bytes in \sim \sqrt{256^6} time and space. This is done by putting about 2^{24} values of the encryption (or decryption) in a table. Then, we generate the decryptions (or encryptions, respectively) and look in the table.


In Python, it could look something like:

import os
from Crypto.Cipher import AES

trailing_bytes_first = AES.new('\x00'*16).encrypt('\x00'*16)[-6:]

data = {}
for i in range(0, 2**24):
    plaintext = os.urandom(10) + trailing_bytes_first
    data[AES.new('\x00'*16).encrypt(plaintext)[-6:]] = plaintext

for i in range(0, 2**24):
    ciphertext = os.urandom(10) + '\x00'*6
    if AES.new('\x00'*16).decrypt(ciphertext)[-6:] in data:
        print [data[AES.new('\x00'*16).decrypt(ciphertext)[-6:]]], [ciphertext]

Two such values are

local_collision_blocks = ['!\x0e\x7f=\xc8\xb6$\xd4\x108\xfaY\xca4+.', '\xd9y\xb8\xa8RgG4\xc8U\x00\x00\x00\x00\x00\x00']

Finally, we generate a collision as follows

def xorstring(a, b):
    return ''.join(chr(ord(x) ^ ord(y)) for x, y in zip(a, b))

GIVEN = 'I love using sponges for crypto'
A = AES.new('\x00'*16).encrypt(local_collision_blocks[0])
B = AES.new('\x00'*16).decrypt(local_collision_blocks[1])
local_collision = '\x00'*10 \
    + xorstring(local_collision_blocks[0][:10], AES.new('\x00'*16).encrypt('\x00'*16)) \
    + xorstring(A, B)[:10] \
    + xorstring(local_collision_blocks[1][:10], GIVEN[:10])

BSides’17 – Delphi

In this challenge, we are given a server which accepts encrypted commands and returns the resulting output. First we define our oracle go(cmd).

import urllib2

def go(cmd):
	return urllib2.urlopen('http://delphi-status-e606c556.ctf.bsidessf.net/execute/' + cmd).read()

This simply returns the status from the server. It is common for this kind of CTF challenge to use some block-cipher variant, such as one of the AES modes.

The first guess I had was that AES-CBC was being used. That would mean that if we try to flip some bit in a block somewhere in the middle of the ciphertext, the first decrypted part would remain intact, whilst the trailing blocks would get scrambled.

Assume that we have four ciphertext blocks C_0, C_1, C_2, C_3 and the decryption is \textsf{dec}_k : C_0\| C_1\| C_2\| C_3 \mapsto P_0\|P_1\|P_2\|P_3. Now, we flip a bit in C_1 so that we get C_1', then we have \textsf{dec}_k : C_0\| C'_1\| C_2\| C_3 \mapsto P_0\|P'_1\|P'_2\|P'_3. (This is not true, thanks to hellman for pointing that out in the comments).

Turns out this is not the case. In fact, the error only propagated one block further and then stopped, i.e., \textsf{dec}_k : C_0\| C'_1\| C_2\| C_3 \mapsto P_0\|P'_1\|P'_2\|P_3. Having a look at the Wikipedia page, I found that this is how AES-CBC (and also CFB) would behave (images from Wikipedia):

[CFB decryption diagram]

[CBC decryption diagram]

Since \textsf{dec}_k(C_2) \oplus C_1 = P_2 in CBC mode, we can inject some data into the decrypted plaintext! Assume that we want P'_2 = Q. Then, we can set C'_1 = C_1 \oplus P_2 \oplus Q, since then \textsf{dec}_k(C_2) \oplus C'_1 = P_2 \oplus P_2 \oplus Q = Q (at the price of scrambling P_1). Embodying the above in Python, we might get something like

def xor(a, b):
    return ''.join(chr(ord(x) ^ ord(y))
              for x, y in zip(a, b))

cmd = '8d40ab447609a876f9226ba5983275d1ad1b46575784725dc65216d1739776fdf8ac97a8d0de4b7dd17ee4a33f85e71d5065a02296783e6644d44208237de9175abed53a8d4dc4b5377ffa268ea1e9af5f1eca7bb9bfd93c799184c3e0546b3ad5e900e5045b729de2301d66c3c69327'
response = ' to test multiple-block patterns' # the block we attack

split_blocks = [cmd[i * 32: i * 32 + 32]
                for i in range(len(cmd) / 32)]

block = 3 # this is somewhat arbitrary

# get command and pad it with blank space
append_cmd = '  some command'
append_cmd = append_cmd + '\x20' * (16 - len(append_cmd))

new_block = xor(split_blocks[block].decode("hex"),
                response).encode('hex')
new_block = xor(new_block.decode("hex"),
                append_cmd).encode('hex')

split_blocks[block] = new_block
cmd = ''.join(split_blocks)
#print cmd
print go(cmd)

We can verify that this works. Running the server, we get

This is a longer string th\x8a\r\xe4\xd9.\n\xde\x86\xb6\xbd*\xde\xf8X\x15I  some command  e-block patterns\n

OK, so the server accepts it. Nice. Can we exploit this? Obviously — yes. We can guess that the server does something like

echo "{input string}";

First, we break off the echo statement. Then we try to cat the flag and comment out the rest. We can do this in one block! Here is how:

append_cmd = '\"; cat f* #'

Then, the server code becomes

echo "{partial + garbage}"; cat f* #{more string}";

The server gives the following response:

This is a longer string th:\xd7\xb1\xe8\xc2Q\xd7\xe8*\x02\xe8\xe8\x9c\xa6\xf71\n
FLAG:a1cf81c5e0872a7e0a4aec2e8e9f74c3\n

Indeed, this is the flag. So, we are done!

Some icons for macOS Sierra.

Today, I drew some replacement icons for some apps I use daily (i.e., Terminal, Wireshark, Hex Fiend and Hopper) since I did not like default ones. I release them under the GPL v.3.0 License, so feel free to use them in your OS 🙂

 

In action (with the dark grey and light grey terminal icons, respectively):

Here are some additional icons:

Covert communication over a noisy channel

Consider a situation in which one party, Alice, wants to communicate with another party, Bob, over a discrete memoryless channel with crossover probability p_1, while ensuring a low level of detection in the presence of a warden Willie, who observes the communication through another discrete memoryless channel with crossover probability p_2. We assume that all parties are equally computationally bounded.


In this situation, we require that p_1 < p_2 for non-zero p_1,p_2. Let \mathbf{u} \in \mathbb{F}_2^l be the sequence of secret bits Alice wants to transmit.

NOTE: Alice and Bob may agree beforehand upon any strategy they desire, but the strategy is known also to Willie.

Communicating with a secret key

If Alice and Bob share a secret key k, they may use it to pick a common encoding, i.e., an error-correcting code. This encoding in turn is used to encode the message, which of course must be able to correct the errors produced by the channel. Assuming that the correction capability is below the error rate of Willie’s channel, he cannot decode. Let N bits be the length of the publicly transmitted sequence. An established result states that if \sqrt N \cdot \log N bits are shared between Alice and Bob, they can transmit \sqrt N secret bits without Willie seeing them (with high probability).

Communicating without a secret key

Now, assume that Alice and Bob do not share a common key. Alice performs the following steps:

  1. Picks a cryptographically secure random vector \mathbf r.
  2. Computes the scalar product \mathbf r \cdot \mathbf u = k.
  3. Sends the bits \mathbf r \| k over the channel.
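The steps above can be sketched as follows (a toy illustration; the channel noise and Bob's LPN decoder are not shown):

```python
import secrets

def alice_block(u):
    """One covert transmission block for the secret bits u (a list of 0/1):
    a fresh random vector r together with the parity k = <r, u> mod 2."""
    r = [secrets.randbelow(2) for _ in u]
    k = sum(ri * ui for ri, ui in zip(r, u)) % 2
    return r + [k]  # the bits r || k that are sent over the channel
```

Bob collects many such noisy blocks and recovers u by solving the resulting LPN instance, e.g. with BKW.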

This reduces to the problem of \textsc{Learning Parity with Noise} (LPN) and can be solved with the BKW algorithm (or similar) if p_1 is sufficiently low. In particular, writing p_1 = \frac{1}{2} - \epsilon_1, if Bob receives

N = 4 \cdot \log 2 \cdot l \cdot \epsilon_1^{-2 (l+1)}

such sequences, or equivalently,

N \cdot l = 4 \cdot \log 2 \cdot l^2 \cdot \epsilon_1^{-2 (l+1)}

bits, he will be able to decode with high probability. Here we have exploited the piling-up lemma, disregarding the fact that some bits in \mathbf{u} are zero and do not contribute. For some probabilities p_1<p_2 and natural number N, the information is hidden from Willie. The information rate is determined as follows: N = \mathcal{O}(l^2\cdot \epsilon_1^{-2(l+1)}), so

\mathcal{O}(\sqrt N)=\mathcal{O}(l \cdot\epsilon_1^{-l-1})\iff\mathcal{O}(\sqrt N\cdot\epsilon_1^{l+1})=\mathcal{O}(l).

This bound can be improved upon by an increase in the number of parities.

Here is a more detailed paper on a similar topic. Another paper is here.

Solving problems with lattice reduction

Suppose that we are given a system of modular equations

\begin{array}{rcl} a_0 \cdot x + b_0\cdot y & = & c_0 \pmod q \\ a_1 \cdot x + b_1\cdot y & = & c_1 \pmod q \\ & \ldots & \\ a_n\cdot x + b_n\cdot y & = & c_n \pmod q \\ \end{array}

Trivially, this can be solved for unknown x and y using a simple Gaussian elimination step, i.e., writing the equations as

\mathbf{A} \begin{pmatrix}x & y\end{pmatrix}^T = \mathbf{c} \iff \begin{pmatrix}x & y\end{pmatrix}^T = \mathbf{A}^{-1} \mathbf{c}.

This is perfectly fine as long as the equations share a common modulus q, but what about when the equations share unknowns but are defined under different moduli? Let us take a real-world scenario from the realm of cryptography.

Example: DSA using linear congruential generator (LCG)

The DSA (Digital Signature Algorithm) has two functions \textsf{Sign} and \textsf{Verify}. To sign a message (invoking \textsf{Sign}), the following steps are performed:

  1. Let H be a hash function and m the message to be signed.
  2. Generate a (random) ephemeral value k where 0 < k < q.
  3. Compute r=\left(g^{k}\bmod\,p\right)\bmod\,q. If r=0, go to step 2.
  4. Compute s=k^{-1}\left(H\left(m\right)+r \cdot x\right)\bmod\,q. If s=0, go to step 2.
  5. The signature is \left(r,s\right).

To verify as signature (invoking \textsf{Verify}), the below steps are performed:

  1. If 0 < r < q and 0 < s < q are not both satisfied, reject the signature.
  2. Compute w = s^{-1} \bmod\,q.
  3. Compute u_1 = H\left(m\right) \cdot w\, \bmod\,q.
  4. Compute u_2 = r \cdot w\, \bmod\,q.
  5. Compute v = \left(g^{u_1}y^{u_2} \bmod\,p\right) \bmod\,q.
  6. If v = r, accept. Otherwise, reject.

In the case of using an LCG as pseudorandom-number generator for the values k, two consecutively generated values (we assume one signature was generated right after the other) will be correlated as a \cdot k_1 + b = k_2 \pmod M for some public parameters a,b,M. Assuming that M \neq q, we obtain the equations

\begin{array}{rclc} s_1 \cdot k_1 - r_1\cdot x & = & H(m_1) &\pmod q \\ s_2 \cdot k_2 - r_2\cdot x & = & H(m_2) &\pmod q \\ -a\cdot k_1 + 1\cdot k_2 & = & c & \pmod M \\ \end{array}

from the fourth step of \textsf{Sign}.

Chinese Remainder Theorem (CRT)

A well-known theorem in number theory is the Chinese Remainder Theorem (commonly referred to with the acronym CRT), which deals with simple equations over different and pairwise-prime moduli. For instance,

\begin{array}{rcl} x & = & 2 \pmod 3 \\x & = & 3 \pmod 5 \\ x & = & 2 \pmod 7 \end{array}

which has the solution x = 23. In solving actual multivariate equations, however, we will hit a brick wall, as needed operations such as row reduction and modular inversion do not work across different moduli.
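As a quick sanity check, here is a minimal CRT solver (plain Python, nothing challenge-specific):

```python
def egcd(a, b):
    # extended Euclid: returns (g, u, v) with a*u + b*v = g
    if b == 0:
        return (a, 1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - (a // b) * v)

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli m_i."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        inv = egcd(Mi, m)[1] % m   # Mi^{-1} mod m
        x += r * Mi * inv
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # prints 23
```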

Lattice reduction

A lattice is a discrete set of points formed by all integer combinations of a set of basis vectors. For instance, \mathbf b_1 and \mathbf b_2 are two basis vectors that span the lattice below. A lattice basis is not unique, but evidently, these are the shortest basis vectors possible (such a basis can be found using the LLL algorithm). If the two column vectors \mathbf b_1 and \mathbf b_2 are basis vectors, then the corresponding lattice is denoted \mathcal{L}(\mathbf B). Here, \mathbf B = (\mathbf b_1 ~ \mathbf b_2).


The problem of finding the shortest vector is called \textsf{SVP}. Sloppily formulated: for a given lattice \mathcal{L}(\mathbf B), find the shortest (in terms of Euclidean norm) non-zero vector. The answer to that question would be \mathbf b_2. Starting with different basis vectors, the problem turns out to be trickier.
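In two dimensions, the shortest basis can be found directly with Lagrange-Gauss reduction, the two-dimensional ancestor of LLL. A minimal sketch over integer vectors:

```python
def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction of a 2D integer lattice basis.
    On return, the first vector is a shortest non-zero lattice vector."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1          # keep b1 the shorter vector
        d = dot(b1, b1)
        m = (2 * dot(b1, b2) + d) // (2 * d)  # nearest-integer projection
        if m == 0:
            return b1, b2
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])
```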

A related problem is to find the closest vector, which is commonly called \textsf{CVP}. In this scenario, we are given a vector \mathbf t in the vector space (which might not be in \mathcal{L}(\mathbf B)) and we want to find the closest vector in \mathcal{L}(\mathbf B). \textsf{SVP} is essentially the special case of \textsf{CVP} with \mathbf t = \mathbf 0 (and the zero vector excluded as an answer). There is also a reduction from \textsf{CVP} to \textsf{SVP}, which involves extending the basis of \mathcal{L}(\mathbf B) to also include \mathbf t. This is called embedding.

Let us return to the example of DSA and different-moduli equations. The equations we got from before

\begin{array}{rclc} s_1 \cdot k_1 - r_1\cdot x & = & H(m_1) &\pmod q \\ s_2 \cdot k_2 - r_2\cdot x & = & H(m_2) &\pmod q \\ -a\cdot k_1 + 1\cdot k_2 & = & c & \pmod M \\ \end{array}

can be formulated as basis vectors. In matrix form, we have

\begin{pmatrix} -r_1 & s_1 & 0 & q & 0 & 0 \\ -r_2 & 0 & s_2 & 0 & q & 0 \\ 0 & -a & 1 & 0 & 0 & M\end{pmatrix} \cdot \mathbf y = \begin{pmatrix}H(m_1) \\ H(m_2) \\ c\end{pmatrix}

or (by embedding technique):

\begin{pmatrix} -r_1 & s_1 & 0 & q & 0 & 0 & H(m_1) \\ -r_2 & 0 & s_2 & 0 & q & 0 & H(m_2)\\ 0 & -a & 1 & 0 & 0 & M & c \end{pmatrix} \cdot \begin{pmatrix}\mathbf y \\ -1 \end{pmatrix}  = \mathbf 0

The idea is now to force the values corresponding to the guessed values x', k_1', k_2' to be close in the normed space. By adding additional constraints we may perform the following minimization:

\min_{x,k_1,k_2}\left\|\begin{pmatrix} -r_1 & s_1 & 0 & q & 0 & 0 & H(m_1) \\ -r_2 & 0 & s_2 & 0 & q & 0 & H(m_2)\\ 0 & -a & 1 & 0 & 0 & M & c \\ 1/\gamma_x & 0 & 0 & 0 & 0 & 0 & x'/\gamma_x \\ 0 & 1/\gamma_{k_1}  & 0 & 0 & 0 & 0 & k_1'/\gamma_{k_1} \\ 0 & 0 & 1/\gamma_{k_2} & 0 & 0 & 0  & k_2'/\gamma_{k_2}\end{pmatrix} \cdot \begin{pmatrix}\mathbf y \\ -1 \end{pmatrix}  \right\|

where \gamma_x = \min(x',q-x'), \gamma_{k_1} = \min(k_1',M-k_1') and \gamma_{k_2} = \min(k_2',M-k_2'). Finding a closest approximation (preferably using LLL/Babai) yields a solution to the DSA equations using LCG.

Example: Knapsack

The knapsack problem (sometimes called the subset-sum problem) is stated as follows. Given a target weight t and n items/weights \{w_1,w_2,\dots,w_n\}, find the sequence x_1, x_2, \dots, x_n \in \{0,1\} such that

\sum_{i=1}^n w_i \cdot x_i = t.

It is not too hard to prove that this is NP-complete, but we omit the reduction here.

In a cryptographic setting, this can be used to encode data in the sequence x_1, x_2, \dots, x_n. This is called the Merkle-Hellman public-key cryptosystem. It is easy to see that encryption actually is the encoding procedure mentioned above. However, the decryption procedure is a bit more involved; in fact, it requires a special class of instances. If the weights can be transformed into a super-increasing sequence, retrieving the sequence x_1, x_2, \dots, x_n becomes trivial.

Think of it this way. Assume that the weights w_1, \dots, w_{n-1} sum to something smaller than the weight w_n (this is what super-increasing means, applied at every index). Then, if the target sum t (the ciphertext) is larger than w_1 + w_2 + \cdots + w_{n-1} (if not, t must be smaller, assuming there is a unique solution), we know that x_n=1. We can now remove x_n and w_n from the equation by solving for the remaining weights and a recalculated t' = t - w_n. This procedure can be repeated until all weights have been found (or t = 0).
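The peeling procedure above takes only a few lines (a sketch with made-up weights):

```python
def decode_superincreasing(weights, t):
    """Greedy decoding of a super-increasing knapsack: peel off the
    largest weight that still fits, from the top down."""
    x = [0] * len(weights)
    for i in reversed(range(len(weights))):
        if t >= weights[i]:
            x[i] = 1
            t -= weights[i]
    return x if t == 0 else None  # None: t is not reachable

# 2+3 < 7, 2+3+7 < 14, 2+3+7+14 < 30, so the sequence is super-increasing
print(decode_superincreasing([2, 3, 7, 14, 30], 39))  # prints [1, 0, 1, 0, 1]
```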

Merkle-Hellman provided a way to transform the super-increasing sequence into a hard-looking one. We omit the details here, but the procedure can be found all over the internet.

It turns out we can use lattice reduction for this problem. We create a basis matrix \mathbf A in the following way

\mathbf A = \begin{pmatrix} 1 & 0 & \cdots & 0 & w_1 \\ 0 & 1 & \cdots & 0 & w_2 \\ \vdots & \vdots & \ddots & 0 & \vdots \\ 0 & 0 & \cdots & 1 & w_n \\0 & 0 & 0 & 0 & -t\end{pmatrix}

and perform lattice reduction on it. Since the LLL/BKZ algorithm finds a short basis, it will try to find combinations of the rows with small entries: in particular, ones where the last entry (the weight column) cancels to zero and the remaining entries are small. What this actually means is that it will find a sum of the rows that achieves a solution to the knapsack problem.

Of course, the solution must only contain values in \{0,1\}. Depending on the instance, this may or may not be the case. So, how do we penalize the algorithm for choosing values outside the allowed set?

A new approach*

Let us create a new basis matrix \mathbf A' in the following manner:

\mathbf A'_i = \begin{pmatrix} 1 & 0 & \cdots & 0 & w_1 & \alpha\\ 0 & 1 & \cdots & 0 & w_2 & \alpha\\ \vdots & \vdots & \ddots & 0 & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & w_n & \alpha\\0 & 0 & 0 & 0 & -t & -\alpha \cdot i\end{pmatrix}

The algorithm performs the following steps:

  1. Randomly (or deterministically) pick a guess i on the number of non-zero values in the solution.
  2. Update the matrix and run LLL/BKZ on \mathbf A'_i.
  3. If a satisfiable reduced-basis vector is found, return it. Otherwise, goto 1.

It does not guarantee that an incorrect solution is penalized, but it increases the probability of it (it reduces the set of 'false' basis vectors). We omit a formal proof, but think of it this way: assume that \mathbf v is a false solution and a reduced-basis vector of \mathbf A. In \mathbf A'_i, it also has to sum to the number of non-zero weights in the correct solution. Assume all false vectors appear randomly (they do not, but let us assume it!). Then, for a correct guess of i, the probability of the vector \mathbf v surviving is {n \choose i} / 2^n. If i = \epsilon \cdot n, this is 2^{n(H(\epsilon)-1)}, which is a noticeable reduction for most values of i.

Here is an example of a CTF challenge which could be solved with the above technique.

* I do not know if this should attributed to someone.

Example: Ring-LWE

TBA