# BSides’17 – Delphi

In this challenge, we are given a server which accepts encrypted commands and returns the resulting output. First we define our oracle go(cmd).

import urllib2

def go(cmd):
    # send the (hex-encoded) ciphertext to the challenge server and
    # return its response; the URL is a placeholder for the actual
    # challenge endpoint
    return urllib2.urlopen('http://<challenge-host>/' + cmd).read()
This simply returns the status from the server. It is common for this kind of CTF challenge to use some block-cipher variant, such as one of the AES modes.

The first guess I had was that AES-CBC was being used. That would mean that if we try to flip some bit in a block somewhere in the middle of the ciphertext, the first decrypted part would remain intact, whilst the trailing blocks would get scrambled.

Assume that we have four ciphertext blocks $C_0, C_1, C_2, C_3$ and the decryption is $\textsf{dec}_k : C_0\| C_1\| C_2\| C_3 \mapsto P_0\|P_1\|P_2\|P_3$. Now, we flip a bit in $C_1$ so that we get $C_1'$, then we have $\textsf{dec}_k : C_0\| C'_1\| C_2\| C_3 \mapsto P_0\|P'_1\|P'_2\|P'_3$. (This is not true, thanks to hellman for pointing that out in the comments).

Turns out this is not the case. In fact, the error propagated only one block further, i.e., $\textsf{dec}_k : C_0\| C'_1\| C_2\| C_3 \mapsto P_0\|P'_1\|P'_2\|P_3$. Having a look at the Wikipedia page, I found that this is how AES-CFB (or CBC) would behave (image from Wikipedia):

Since $\textsf{dec}_k(C_0) \oplus C_1 = P_1$, we can inject some data into the decrypted ciphertext! Assume that we want $P'_1 = Q$. Then, we can set $C'_1 = C_1 \oplus P_1 \oplus Q$, since then $\textsf{dec}_k(C_0) \oplus C'_1 = P_1\oplus P_1 \oplus Q = Q$. Embodying the above in Python, we might get something like

def xor(a, b):
    return ''.join(chr(ord(x) ^ ord(y))
                   for x, y in zip(a, b))

response = ' to test multiple-block patterns' # the block we attack

split_blocks = [cmd[i * 32: i * 32 + 32]
                for i in range(len(cmd) / 32)]

block = 3 # this is somewhat arbitrary

# get command and pad it with blank space
append_cmd = '  some command'
append_cmd = append_cmd + '\x20' * (16 - len(append_cmd))

new_block = xor(split_blocks[block].decode('hex'),
                response).encode('hex')
new_block = xor(new_block.decode('hex'),
                append_cmd).encode('hex')

split_blocks[block] = new_block
cmd = ''.join(split_blocks)
#print cmd
print go(cmd)


We can verify that this works. Running the server, we get

This is a longer string th\x8a\r\xe4\xd9.\n\xde\x86\xb6\xbd*\xde\xf8X\x15I  some command  e-block patterns\n

OK, so the server accepts it. Nice. Can we exploit this? Obviously — yes. We can guess that the server does something like

echo "{input string}";


First, we break off the echo statement. Then we try to cat the flag and comment out the rest. We can do this in one block! Here is how:

append_cmd = '\"; cat f* #'


Then, the server code becomes

echo "{partial + garbage}"; cat f* #{more string}";


The server gives the following response:

This is a longer string th:\xd7\xb1\xe8\xc2Q\xd7\xe8*\x02\xe8\xe8\x9c\xa6\xf71\n
FLAG:a1cf81c5e0872a7e0a4aec2e8e9f74c3\n

Indeed, this is the flag. So, we are done!

# Some icons for macOS Sierra.

Today, I drew some replacement icons for some apps I use daily (i.e., Terminal, Wireshark, Hex Fiend and Hopper) since I did not like the default ones. I release them under the GPL v.3.0 License, so feel free to use them in your OS 🙂

In action (with the dark grey and light grey terminal icons, respectively):

# Covert communication over a noisy channel

Consider a situation in which one party Alice wants to communicate with another party Bob over a discrete memoryless channel with crossover probability $p_1$, while ensuring a low level of detection in the presence of a warden Willie, who observes the communication through another discrete memoryless channel with crossover probability $p_2$. We assume that all parties are equally computationally bounded.

In this situation, we require that $p_1 < p_2$ for non-zero $p_1,p_2$. Let $\mathbf{u} \in \mathbb{F}_2^l$ be a sequence of secret bits she wants to transmit.

NOTE: Alice and Bob may agree in advance upon any strategy they desire, but the strategy is also known to Willie.

Communicating with a secret key

If Alice and Bob share a secret key $k$, they may use it to pick a common encoding, i.e., an error-correcting code. This encoding in turn is used to encode the message, which of course must be able to correct the errors produced by the channel. Assuming that the correction capability is below the error rate of Willie’s channel, he cannot decode. Let $N$ bits be the length of the publicly transmitted sequence. An established result states that if $\sqrt N \cdot \log N$ bits are shared between Alice and Bob, they can transmit $\sqrt N$ secret bits without Willie seeing them (with high probability).

Communicating without a secret key

Now, assume that Alice and Bob do not share a common key. Alice performs the following steps:

1. Picks a cryptographically secure random vector $\mathbf r$.
2. Computes the scalar product $\mathbf r \cdot \mathbf u = k$.
3. Sends the bits $\mathbf r \| k$ over the channel.

This reduces to the problem of $\textsc{Learning Parity with Noise}$ (LPN) and can be solved with the BKW algorithm (or similar) if $p_1$ is sufficiently low. Letting $\epsilon_1$ denote the bias of Bob's channel, if Bob receives

$N = 4 \cdot \log 2 \cdot l \cdot \epsilon_1^{-2 (l+1)}$

such sequences, or equivalently,

$N \cdot l = 4 \cdot \log 2 \cdot l^2 \cdot \epsilon_1^{-2 (l+1)}$

bits, he will be able to decode with high probability. Here we have exploited the piling-up lemma, disregarding the fact that some bits in $\mathbf{u}$ are zero and do not contribute. For some probabilities $p_1$ and natural numbers $N$, the information is hidden from Willie. The information rate is determined as follows: $N = \mathcal{O}(l^2\cdot \epsilon_1^{-2(l+1)})$, so

$\mathcal{O}(\sqrt N)=\mathcal{O}(l \cdot\epsilon_1^{-l-1})\iff\mathcal{O}(\sqrt N\cdot\epsilon_1^{l+1})=\mathcal{O}(l)$.

This bound can be improved upon by an increase in the number of parities.
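
The keyless protocol above is easy to simulate. The sketch below replaces BKW with brute-force maximum-likelihood decoding, which is only feasible for a toy-sized secret; all parameters here are made up for illustration:

```python
import random

def alice_send(u, p, N):
    # Alice: emit N noisy parities (r, <r, u> + e) of the secret u,
    # where e models the BSC with crossover probability p
    samples = []
    for _ in range(N):
        r = [random.randint(0, 1) for _ in range(len(u))]
        k = sum(ri * ui for ri, ui in zip(r, u)) % 2
        e = 1 if random.random() < p else 0
        samples.append((r, k ^ e))
    return samples

def bob_decode(samples, l):
    # Bob: brute-force ML decoding over all 2^l candidate secrets
    # (a stand-in for BKW, feasible only for tiny l)
    best, best_score = None, -1
    for cand in range(2 ** l):
        u = [(cand >> i) & 1 for i in range(l)]
        score = sum(1 for r, k in samples
                    if sum(ri * ui for ri, ui in zip(r, u)) % 2 == k)
        if score > best_score:
            best, best_score = u, score
    return best

random.seed(0)
secret = [1, 0, 1, 1]
samples = alice_send(secret, 0.1, 500)
recovered = bob_decode(samples, len(secret))
```

With 500 samples at $p_1 = 0.1$, the true secret agrees with roughly 90% of the parities while every wrong candidate agrees with about half, so the ML decoder recovers it with overwhelming probability.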

Here is a more detailed paper on a similar topic. Another paper is here.

# Solving problems with lattice reduction

Suppose that we are given a system of modular equations

$\begin{array}{rcl} a_0 \cdot x + b_0\cdot y & = & c_0 \pmod q \\ a_1 \cdot x + b_1\cdot y & = & c_1 \pmod q \\ & \ldots & \\ a_n\cdot x + b_n\cdot y & = & c_n \pmod q \\ \end{array}$

Trivially, this can be solved for unknown $x$ and $y$ using a simple Gaussian elimination step, i.e., writing the equations as

$\mathbf{A} \begin{pmatrix}x & y\end{pmatrix}^T = \mathbf{c} \iff \begin{pmatrix}x & y\end{pmatrix}^T = \mathbf{A}^{-1} \mathbf{c}$.
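
For the $2\times 2$ case, the elimination step can be sketched with Cramer's rule over $\mathbb{Z}_q$ (a minimal sketch; `pow(det, -1, q)` requires Python 3.8+):

```python
def solve_2x2_mod(A, c, q):
    # Cramer's rule over Z_q; requires gcd(det, q) = 1
    (a0, b0), (a1, b1) = A
    det = (a0 * b1 - a1 * b0) % q
    det_inv = pow(det, -1, q)  # modular inverse
    x = (c[0] * b1 - c[1] * b0) * det_inv % q
    y = (a0 * c[1] - a1 * c[0]) * det_inv % q
    return x, y

# 3x + 4y = 43, 2x + 9y = 73 (mod 101) has solution x = 5, y = 7
assert solve_2x2_mod([(3, 4), (2, 9)], [43, 73], 101) == (5, 7)
```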

This is perfectly fine as long as the equations share a common modulus $q$, but what about when the equations share unknowns but are defined under different moduli? Let us take a real-world scenario from the realm of cryptography.

Example: DSA using linear congruential generator (LCG)

The DSA (Digital Signature Algorithm) has two functions $\textsf{Sign}$ and $\textsf{Verify}$. To sign a message (invoking $\textsf{Sign}$), the following steps are performed:

1. Let $H$ be a hash function and $m$ the message to be signed:
2. Generate a (random) ephemeral value $k$ where $0 < k < q$.
3. Compute $r=\left(g^{k}\bmod\,p\right)\bmod\,q$. If $r=0$, go to step 2.
4. Compute $s=k^{-1}\left(H\left(m\right)+r \cdot x\right)\bmod\,q$. If $s=0$, go to step 2.
5. The signature is $\left(r,s\right)$.

To verify a signature (invoking $\textsf{Verify}$), the below steps are performed:

1. If $0 < r < q$ or $0 < s < q$ is not satisfied, reject the signature.
2. Compute $w = s^{-1} \bmod\,q$.
3. Compute $u_1 = H\left(m\right) \cdot w\, \bmod\,q$.
4. Compute $u_2 = r \cdot w\, \bmod\,q$.
5. Compute $v = \left(g^{u_1}y^{u_2} \bmod\,p\right) \bmod\,q$.
6. If $v = r$, accept. Otherwise, reject.

In the case of using an LCG as a pseudorandom-number generator for the values $k$, two consecutively generated values (we assume one signature was generated right after the other) will be correlated as $a \cdot k_1 + b = k_2 \pmod M$ for some public parameters $a,b,M$. Assuming that $M \neq q$, we obtain equations

$\begin{array}{rclc} s_1 \cdot k_1 - r_1\cdot x & = & H(m_1) &\pmod q \\ s_2 \cdot k_2 - r_2\cdot x & = & H(m_2) &\pmod q \\ -a\cdot k_1 + 1\cdot k_2 & = & c & \pmod M \\ \end{array}$

from the fourth step of $\textsf{Sign}$.

Chinese Remainder Theorem (CRT)

A well-known theorem in number theory is the Chinese Remainder Theorem (commonly referred to with the acronym CRT), which deals with simple equations over different and pairwise-prime moduli. For instance,

$\begin{array}{rcl} x & = & 2 \pmod 3 \\x & = & 3 \pmod 5 \\ x & = & 2 \pmod 7 \end{array}$

which has the solution $x = 23$. In solving actual multivariate equations, we will hit a brick wall, as operations such as row reduction and modular inversion do not work across different moduli.
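
For pairwise-coprime moduli, though, the classical constructive proof of the CRT gives a direct algorithm (a minimal sketch):

```python
def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y = g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    # pairwise-coprime moduli assumed
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        _, inv, _ = egcd(Mi % m, m)
        x += r * Mi * inv
    return x % M

# the example above: x = 2 (mod 3), 3 (mod 5), 2 (mod 7)
assert crt([2, 3, 2], [3, 5, 7]) == 23
```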

The following code takes any moduli, which do not need to be relatively prime, and computes the CRT:

def crt_lll(m, n):
    p, q = n[0], n[1]
    a, b = m[0] % p, m[1] % q
    L = p * q # can be tweaked

    A = matrix([
        [ 1,  1,   1/L, 0],
        [ p,  0,     0, 0],
        [ 0,  q,     0, 0],
        [ 0,  0, p*q/L, 0],
        [-a, -b,     0, L]
    ])

    B = A.LLL()

    assert B[-1, 0] == 0 and B[-1, 1] == 0

    return B[-1, 2] * L

print crt_lll(
    [777 % 234, 777 % 567],
    [234, 567]
)


Thue’s lemma

Thue’s lemma states that given a modulus $n$ and an integer $a \in \mathbb{Z}_n$, there exist two integers $|x|,|y| \leq \sqrt{n}$ such that

$ax \equiv y \pmod n$.

We can solve this with lattice reduction as follows:

def thues_lemma(a, mod):
    '''
    Find small elements x, y such that a * x = y (mod mod)
    and |x| < sqrt(mod) and |y| < sqrt(mod).
    '''
    A = matrix([
        [1,   a],
        [0, mod]
    ])
    A = A.LLL()
    return A[0][0], A[0][1]

# test
p = random_prime(2^512)
q = random_prime(2^512)
n = p * q
a = randint(0, 2^1024)
u, v = thues_lemma(a, n)
assert a * u % n == v % n
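
Outside Sage, the two-dimensional case does not even need LLL: Gauss-Lagrange reduction of the basis $\{(1, a), (0, n)\}$ finds the shortest vector directly. A pure-Python sketch (names are illustrative):

```python
def gauss_reduce(u, v):
    # Lagrange's two-dimensional lattice reduction: returns a
    # shortest basis of the lattice spanned by u and v
    norm2 = lambda w: w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        num, den = u[0] * v[0] + u[1] * v[1], norm2(u)
        mu = (2 * num + den) // (2 * den)  # nearest integer to num/den
        v = [v[0] - mu * u[0], v[1] - mu * u[1]]
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

def thue_small(a, n):
    # every lattice vector (x, z) satisfies z = a*x (mod n),
    # so the shortest one gives the small pair of Thue's lemma
    (x, y), _ = gauss_reduce([1, a % n], [0, n])
    return x, y

n = 10**10 + 19
x, y = thue_small(1234567, n)
assert x != 0 and (1234567 * x - y) % n == 0
```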


Lattice reduction

A lattice is a discrete set of points spanned by integer combinations of a set of basis vectors. For instance, $\mathbf b_1$ and $\mathbf b_2$ are two basis vectors that span the space below. They are not unique in that sense but, evidently, they are the shortest basis vectors possible (such a basis can be found using the LLL algorithm). If the two column vectors $\mathbf b_1$ and $\mathbf b_2$ are basis vectors, then the corresponding lattice is denoted $\mathcal{L}(\mathbf B)$. Here, $\mathbf B = (\mathbf b_1 ~ \mathbf b_2)$.

The problem of finding the shortest vector is called $\textsf{SVP}$. Sloppily formulated: for a given lattice $\mathcal{L}(\mathbf B)$, find the shortest (in terms of Euclidean norm) non-zero vector. The answer to that question would be $\mathbf b_2$. Starting from a different basis, the problem turns out to be trickier.

A related problem is to find the closest vector, which is commonly called $\textsf{CVP}$. In this scenario, we are given a target vector $\mathbf t$ in the vector space (it need not be in $\mathcal{L}(\mathbf B)$) and we want to find the closest vector in $\mathcal{L}(\mathbf B)$. Setting $\mathbf t = \mathbf 0$ (and excluding the trivial zero solution) essentially recovers $\textsf{SVP}$. There is also a reduction in the other direction, which involves extending the basis of $\mathcal{L}(\mathbf B)$ to also include $\mathbf t$. This is called embedding.

Let us return to the example of DSA and different-moduli equations. The equations we got from before

$\begin{array}{rclc} s_1 \cdot k_1 - r_1\cdot x & = & H(m_1) &\pmod q \\ s_2 \cdot k_2 - r_2\cdot x & = & H(m_2) &\pmod q \\ -a\cdot k_1 + 1\cdot k_2 & = & c & \pmod M \\ \end{array}$

can be formulated as basis vectors. In matrix form, we have

$\begin{pmatrix} -r_1 & s_1 & 0 & q & 0 & 0 \\ -r_2 & 0 & s_2 & 0 & q & 0 \\ 0 & -a & 1 & 0 & 0 & M\end{pmatrix} \cdot \mathbf y = \begin{pmatrix}H(m_1) \\ H(m_2) \\ c\end{pmatrix}$

or (by embedding technique):

$\begin{pmatrix} -r_1 & s_1 & 0 & q & 0 & 0 & H(m_1) \\ -r_2 & 0 & s_2 & 0 & q & 0 & H(m_2)\\ 0 & -a & 1 & 0 & 0 & M & c \end{pmatrix} \cdot \begin{pmatrix}\mathbf y \\ -1 \end{pmatrix} = \mathbf 0$

The idea is now to force the values corresponding to the guessed values $x', k_1', k_2'$ to be close in the normed space. By adding additional constraints we may perform the following minimization:

$\min_{x,k_1,k_2}\left\|\begin{pmatrix} -r_1 & s_1 & 0 & q & 0 & 0 & H(m_1) \\ -r_2 & 0 & s_2 & 0 & q & 0 & H(m_2)\\ 0 & -a & 1 & 0 & 0 & M & c \\ 1/\gamma_x & 0 & 0 & 0 & 0 & 0 & x'/\gamma_x \\ 0 & 1/\gamma_{k_1} & 0 & 0 & 0 & 0 & k_1'/\gamma_{k_1} \\ 0 & 0 & 1/\gamma_{k_2} & 0 & 0 & 0 & k_2'/\gamma_{k_2}\end{pmatrix} \cdot \begin{pmatrix}\mathbf y \\ -1 \end{pmatrix} \right\|$

where $\gamma_x = \min(x',q-x')$, $\gamma_{k_1} = \min(k_1',M-k_1')$ and $\gamma_{k_2} = \min(k_2',M-k_2')$. Finding a closest approximation (preferably using LLL/Babai) yields a solution to the DSA equations using LCG.

Example: Knapsack

The knapsack problem (sometimes subset-sum problem) is stated as follows. Given a weight $t$ and $n$ items/weights $\{w_1,w_2,\dots,w_n\}$, find the sequence $x_1, x_2, \dots, x_n \in \{0,1\}^n$ such that

$\sum_{i=1}^n w_i \cdot x_i = t.$

It is not too hard to prove that this is NP-complete, but we omit the reduction here.

In a cryptographic setting, this can be used to encode data in the sequence $x_1, x_2, \dots, x_n$. This is the basis of the Merkle-Hellman public-key cryptosystem. It is easy to see that encryption actually is the encoding procedure mentioned above. However, the decryption procedure is a bit more involved; in fact, it requires a special class of instances. If the weights form a super-increasing sequence, retrieving the sequence $x_1, x_2, \dots, x_n$ becomes trivial.

Think of it this way: in a super-increasing sequence, the weights $w_1, w_2, \dots, w_{n-1}$ sum to something smaller than $w_n$. Then, if the target sum $t$ (the ciphertext) is larger than $w_1 + w_2 + \cdots + w_{n-1}$ (if not, $t$ must be smaller, assuming there is a unique solution), we know that $x_n=1$. We can now remove $x_n$ and $w_n$ from the equation by solving for the remaining weights and a recalculated $t' = t - w_n$. This procedure is repeated until all weights have been found (or $t = 0$).
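
The greedy procedure above is a few lines of code (a sketch with made-up weights):

```python
def solve_superincreasing(weights, t):
    # greedy decoding: take each weight from the largest down
    # whenever it still fits under the remaining target
    x = [0] * len(weights)
    for i in reversed(range(len(weights))):
        if t >= weights[i]:
            x[i] = 1
            t -= weights[i]
    assert t == 0, 'no solution'
    return x

w = [2, 3, 7, 14, 30, 57, 120, 251]          # super-increasing
assert solve_superincreasing(w, 2 + 7 + 120 + 251) == [1, 0, 1, 0, 0, 0, 1, 1]
```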

Merkle-Hellman provided a way to transform the super-increasing sequence into a hard one. We omit the details here, but the procedure can be found all over internet.

It turns out we can use lattice reduction for this problem. We create a basis matrix $\mathbf A$ in the following way

$\mathbf A = \begin{pmatrix} 1 & 0 & \cdots & 0 & w_1 \\ 0 & 1 & \cdots & 0 & w_2 \\ \vdots & \vdots & \ddots & 0 & \vdots \\ 0 & 0 & \cdots & 1 & w_n \\0 & 0 & 0 & 0 & -t\end{pmatrix}$

and perform lattice reduction on it. Since the LLL/BKZ algorithm achieves a set of short basis, it will try to find something that makes the entries of the left-most column small. What this actually means is that it will find a sum of the rows that achieves a solution to the knapsack problem.
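
The point of the construction is that the 0/1 solution vector, extended with a $1$ for the last row, combines the rows into a short vector whose last coordinate vanishes. Building the basis and checking this on a hypothetical small instance:

```python
def knapsack_basis(weights, t):
    # (n+1) x (n+1) basis: identity block, weights in the last
    # column, -t in the corner
    n = len(weights)
    rows = [[1 if j == i else 0 for j in range(n)] + [w]
            for i, w in enumerate(weights)]
    rows.append([0] * n + [-t])
    return rows

w = [5, 11, 25, 53]
x = [1, 0, 1, 1]                 # hidden solution, t = 5 + 25 + 53
B = knapsack_basis(w, 83)
coeffs = x + [1]                 # take the last row exactly once
v = [sum(c * B[r][j] for r, c in enumerate(coeffs)) for j in range(5)]
assert v == [1, 0, 1, 1, 0]      # short vector: the solution, last entry 0
```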

Of course, the solution must only contain values in $\{0,1\}$. Depending on the instance, this may or may not be the case. So, how do we penalize the algorithm for choosing values outside the allowed set?

A new approach*

Let us create a new basis matrix $\mathbf A'$ in the following manner:

$\mathbf A'_i = \begin{pmatrix} 1 & 0 & \cdots & 0 & w_1 & \alpha\\ 0 & 1 & \cdots & 0 & w_2 & \alpha\\ \vdots & \vdots & \ddots & 0 & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & w_n & \alpha\\0 & 0 & 0 & 0 & -t & -\alpha \cdot i\end{pmatrix}$

The algorithm performs the following steps:

1. Randomly (or deterministically) pick a guess $i$ on the number of non-zero values in the solution.
2. Update the matrix and run LLL/BKZ on $\mathbf A'_i$.
3. If a satisfiable reduced-basis vector is found, return it. Otherwise, goto 1.

It does not guarantee that an incorrect solution is penalized, but it increases the probability of it (reduces the set of ‘false’ basis vectors). We omit a formal proof, but think of it this way: Assume that $\mathbf v$ is a false solution and a reduced-basis vector of $\mathbf A$. In $\mathbf A'_i$ it also has to sum to the number of non-zero weights in the correct solution. Assume all false vectors appear randomly (they do not, but let us assume it!). Then, for a correct guess of $i$, the probability of the vector $\mathbf v$ remaining is ${n \choose i} / 2^n$. If $i = \epsilon \cdot n$, this is $2^{n(H(\epsilon)-1)}$, which is a noticeable reduction for most values of $i$.

Here is an example of a CTF challenge which could be solved with the above technique.

* I do not know if this should be attributed to someone.

Example: Ring-LWE

TBA

# Computing sum of divisors for numbers up to N

After fiddling with Project Euler, I started to wonder how to compute the sum of divisors in the most efficient way (albeit, most of the problems in PE do not require any super-sophisticated solution). The trivial approach is doing trial division and adding up the divisors sequentially. This may be good enough for small $N$, but as $N$ grows this approach gives an infeasible time complexity.

The next thing one may think of is to generate a list of prime numbers (using a sieve) and, for each number, try all the primes in the list. This is faster than the trivial approach, but still requires $cN \cdot |\mathcal{P}_N| = cN \cdot \pi(N) \sim \mathcal{O}(N^2/\log N)$ operations. Since the sieve itself requires time $\mathcal{O}(N\cdot \log N)$, there is no point in being faster than that. But can we match it? Turns out we can.

First, we need to establish a simple number-theoretical fact: $\sigma(n)$ is the divisor-sum function, which has the properties $\sigma(p^i) = 1 + p + \cdots + p^i$ (and, thus, $\sigma(p^i) = \sigma(p^{i-1}) + p^i$) and $\sigma(p^i \cdot q^j) = \sigma(p^i) \cdot \sigma(q^j)$ for distinct primes $p, q$.

The solution is quite simple: consider the first element in the prime set $\mathcal{P}$, that is, $2$. We split the range into even numbers and for even $i$ compute

$\sigma(n_i) = \sigma(2\cdot m) = \underbrace{\sigma(2)}_{\text{store this in }x_i} \cdot ~\sigma(m)$.

Then for $2^2$, compute $\sigma(2^2) = \sigma(2) + 2^2$, and so forth up to $2^i < N$. This requires $N/2 + N/4 + \cdots$ operations.

$\begin{array}{cccccccccccc} 1 & 2 & 3& 4 &5& 6 &7& 8 &9& 10 &11& 12 \\ & \sigma(2) & & \sigma(2) & & \sigma(2) & & \sigma(2) & & \sigma(2) & & \sigma(2) \\ & & & +& & &&+& & && + \\ & & & 2^2& & && 2^2& & && 2^2 \\ & & & & & &&+& & && \\ & & & & & && 2^3& & && \\ \hline & \sigma(2) & & \sigma(4) & & \sigma(2) & & \sigma(8) & & \sigma(2) & & \sigma(4) \end{array}$

Running it for all powers and primes, we use in total $N \cdot (\sum_{p\in\mathcal{P}} \frac{1}{p} + \sum_{p\in\mathcal{P}} \frac{1}{p^2} + \cdots)$ operations. Now, we have $\sum_{i > 1} \frac{1}{p^i} = \frac{1}{p(p-1)}$, so the complexity is $\mathcal{O}(N \cdot \sum_{p\in\mathcal{P}} \frac{1}{p})$. By Mertens' second theorem, $\sum_{p \leq N} \frac{1}{p} \sim \log\log N < \log N$. Of course, we only generate primes up to $N$, so together with the sieve the final complexity is $\mathcal{O}(N \cdot \log N)$. So we are done 🙂

As always, let us look at a Python representation:

def sieve(n):
    # standard Sieve of Eratosthenes, inlined to keep
    # the snippet self-contained
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i, b in enumerate(is_prime) if b]

def sum_of_divisors(N):
    primes = sieve(N)
    numbers = [1] * (N + 1)  # numbers[i] will hold sigma(i)
    for q in primes:
        tmp = {}
        mult = q
        while mult <= N:
            # accumulate sigma(q^a) for the q-part of each multiple
            for i in range(mult, N + 1, mult):
                if i in tmp: tmp[i] += mult
                else: tmp[i] = 1 + mult
            mult *= q
        for j in tmp: numbers[j] *= tmp[j]
    return numbers  # numbers[i] = sigma(i) for 1 <= i <= N


Below is a plot of the timing of the code. Since the code stores data in memory, some undesired effects come into play for larger values (due to data being shuffled back and forth), which are therefore not present in the graph.

Bounded by the sieving complexity, the algorithm cannot be any faster using the sieve. Can we do better? Note that the algorithm returns a list of sums. If we are really interested in the sum of sums, we may instead compute $\sum_{i}^{N}\sigma(i) = \sum_{i}^{N} i \cdot \lfloor \frac{N}{i} \rfloor$, which is $\mathcal{O}(N)$ and uses basically no memory. This can be written as

def sum_of_sum_of_divisors(N):
    div_sum = 0
    for i in range(1, N):
        div_sum += i * (N / i)
    return div_sum


How does it work? For each $i$, it computes the cardinality of the set of numbers in $[1, N]$ divisible by $i$, which is exactly $\lfloor \frac{N}{i} \rfloor$. Each such number contributes $i$, so the total contribution is $i\lfloor \frac{N}{i} \rfloor$. Doing it for all possible divisors, we end up with the sum of sums of divisors.
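
The identity $\sum_{i \leq N}\sigma(i) = \sum_{i \leq N} i \lfloor N/i \rfloor$ is easy to sanity-check by brute force for small $N$:

```python
def sigma(n):
    # naive divisor sum, for checking only
    return sum(d for d in range(1, n + 1) if n % d == 0)

N = 50
lhs = sum(sigma(i) for i in range(1, N + 1))
rhs = sum(i * (N // i) for i in range(1, N + 1))
assert lhs == rhs
```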

Turns out, we can actually do it in $\mathcal{O}(\sqrt N)$. Here is how:

import math

def square_root_time_sum_of_sum_of_divisors(N):
    global dcached
    if N in dcached: return dcached[N]
    div_sum, i = 0, 1
    q = int(math.sqrt(N))
    while i <= q:
        div_sum += i * (N / i)
        i += 1
    i = 1
    while i <= N / (q + 1):
        m = N / i
        k = N / (i + 1)
        div_sum += i * (m * (m + 1) - k * (k + 1)) / 2
        i += 1
    dcached[N] = div_sum
    return div_sum


Consider the below example. In the left table, we have the sum $\sum_{i=1}^8 i \cdot \lfloor \frac{8}{i} \rfloor$ and in the right table $\sum_{i=1}^8T(\lfloor 8/i\rfloor)$ where $T(n) = \frac12 \cdot n(n+1)$:

$\begin{array}{c | cc cc cc cc | c} i & & & & & & & & & i \cdot \lfloor \frac{8}{i} \rfloor\\ \hline 1 & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & \bf{1} & 8 \cdot 1\\ 2 & \bf{2} & \bf{2} & \bf 2 & \bf 2 & & & & & 2 \cdot 4\\\hline 3 & 3 & 3 & & & & & & & 3 \cdot 2\\ 4 & 4 & 4 & & & & & & & 4 \cdot 2\\ 5 & 5 & & & & & & & & 5 \cdot 1\\ 6 & 6 & & & & & & & & 6 \cdot 1\\ 7 & 7 & & & & & & & & 7 \cdot 1\\ 8 & 8 & & & & & & & & 8 \cdot 1\\ \end{array} \iff \begin{array}{c | cc |cc cc cc | c} i & & & & & & & & & T(\lfloor \frac{8}{i} \rfloor)\\ \hline 1 & 1 & 2 &\bf 3 & \bf 4 & \bf 5 & \bf 6 & \bf 7 & \bf 8 & \frac12 \cdot 9 \cdot 8\\ 2 & 1 & 2 &\bf 3 & \bf 4 & & & & & \frac12 \cdot 5\cdot 4\\ 3 & 1 & 2 & & & & & & & \frac12 \cdot 3\cdot 2\\ 4 & 1 & 2 & & & & & & & \frac12 \cdot 3\cdot 2\\ 5 & 1 & & & & & & & & 1\\ 6 & 1 & & & & & & & & 1\\ 7 & 1 & & & & & & & & 1\\ 8 & 1 & & & & & & & & 1\\ \end{array}$

Clearly, it is the same table/matrix but transposed, so the sums of elements are the same. As we notice in the matrices above, their rows become sparser for increasing $i$. Think about it! What if we were to compute only the bold numbers and add them? Then we need only two function calls each for left and right. The point where we stop computing left and start to compute right can be chosen arbitrarily, but the optimum is $\sqrt{N}$. Then $\lfloor \frac{N}{\sqrt{N}+1}\rfloor + \lfloor \sqrt{N} \rfloor \approx 2\sqrt{N}$.
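
The transposition argument amounts to the identity $\sum_i i\lfloor N/i\rfloor = \sum_i T(\lfloor N/i\rfloor)$ with $T(n) = \frac12 n(n+1)$, which we can check directly:

```python
def T(n):
    # triangular number: 1 + 2 + ... + n
    return n * (n + 1) // 2

for N in range(1, 100):
    assert sum(i * (N // i) for i in range(1, N + 1)) == \
           sum(T(N // i) for i in range(1, N + 1))
```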

Example: Project Euler 23
This is a simple problem, where we are supposed to compute all numbers which cannot be expressed as a sum of two abundant numbers. Using the above code, we can solve the problem in about 100 ms using pypy:

def abundant_numbers(N):
    numbers = sum_of_divisors(N)
    abundants = []
    for i in range(0, N):
        if 2 * i < numbers[i]: abundants.append(i)
    return abundants[1:]

N = 28123

abundants = abundant_numbers(N)
s = [0] * N

for x in abundants:
    for y in abundants:
        ls = x + y
        if ls < N: s[ls] = 1
        else: break

print sum(i for i in range(0, N) if not s[i])


Example: Project Euler 439
This problem is quite nice, because it requires a very well-thought-out approach to be feasibly solvable (it is also among the hardest problems on PE). Here is the question (notation slightly altered):

Let $\sigma(k)$ be the sum of all divisors of $k$. We define the function $S(N) = \sum_{1 \leq i \leq N}\sum_{1 \leq j \leq N} \sigma(i \cdot j)$. For example, $S(3) =\sigma(1) +\sigma(2) +\sigma(3) +\sigma(2) +\sigma(4) +\sigma(6) +\sigma(3) +\sigma(6) +\sigma(9) = 59$

You are given that $S(10^3) = 563576517282$ and $S(10^5) \bmod 10^9= 215766508$. Find $S(10^{11}) \bmod 10^9$.

So, running our algorithm would be fine for lower values, but if we would try to store $(10^{11})^2 \approx 2^{73}$ values in memory, we will run into a wall very fast (this is about $7.5 \cdot 10^{10}$ TBs assuming that each number is a 64-bit integer – note the modular reduction which can be done in every step). Clearly, we need to use the memoryless approach.

Let us first disregard the fact that $\sigma(i \cdot j) =\sigma(i) \cdot\sigma(j)$ only holds when $\gcd(i,j) = 1$. Then $S(N) = (\sum_{i}^{N} i \cdot \lfloor \frac{N}{i} \rfloor)^2 - C(N)$. In the case of $S(3)$, we will have

$\begin{array}{rcccccc} (\sigma(1) +\sigma(2) +\sigma(3))^2 = & \sigma(1)\cdot\sigma(1) &+& \sigma(1)\cdot\sigma(2) &+& \sigma(1)\cdot\sigma(3) & + \\ &\sigma(2)\cdot\sigma(1) &+& \sigma(2)\cdot\sigma(2) &+& \sigma(2)\cdot\sigma(3) & +\\&\sigma(3)\cdot\sigma(1) &+& \sigma(3)\cdot\sigma(2) &+& \sigma(3)\cdot\sigma(3)\\\\ = & \sigma(1\cdot 1) &+& \sigma(1 \cdot 2) &+& \sigma(1 \cdot 3)&+\\&\sigma(2 \cdot 1) &+& \underline{\sigma(2)\cdot\sigma(2)} &+& \sigma(2 \cdot 3)&+\\ &\sigma(3 \cdot 1) &+&\sigma(3 \cdot 2) &+& \underline{\sigma(3)\cdot\sigma(3)} \end{array}$

OK, so what about $C(N)$? We can estimate how many terms in the product have $\gcd(i,j) > 1$. Take $\sigma(p^i)\cdot\sigma(p^j) - \sigma(p^{i+j})$. We have $\sigma(p^i) = (p^{i+1}-1)/(p-1)$, so

$\sigma(p^i)\cdot\sigma(p^j) - \sigma(p^{i+j}) = \frac{(p^{i+1}-1)(p^{j+1}-1)}{(p-1)^2} - \frac{p^{i+j+1}-1}{p-1} = \frac{p\,(p^i-1)(p^j-1)}{(p-1)^2}$

In particular, $\sigma(p)\cdot\sigma(p) - \sigma(p^2) = p$. Let us consider the case $\gcd(i,j) = p$. The (negative) contribution from this set is

$p \cdot \underbrace{\left[ \left(\sum_{i=2}^Ni\left\lfloor \frac{N}{i \cdot p} \right\rfloor\right)^2 - C({N}/{p})\right]}_{=S(N/p)}$.

Running over all possible gcds (all numbers $p$), we get

$S(N) = \left(\sum_{i=1}^{N} i \cdot \lfloor \frac{N}{i} \rfloor\right)^2 - \underbrace{\sum_{p=2}^N p \cdot S(\lfloor N/p \rfloor)}_{C(N)}$

The following code computes the smaller results in a matter of milliseconds, but for $N = 10^{11}$ it takes too long (we are aiming for the sub-minute mark):

cached = {}
dcached = {}
mod = 10**9

def S(N):
    global cached
    if N in cached: return cached[N]
    dv = square_root_time_sum_of_sum_of_divisors(N)
    S_sum, i = 0, 2
    while i <= N:
        # max recursion depth log2(10^11) ~ 33
        S_sum += (i * S(N / i)) % mod
        i += 1
    cached[N] = (dv * dv - S_sum) % mod
    return cached[N]


OK. Further optimization needed… note that

$\begin{array}{rcc} S(3) = & 1 \cdot (\sigma(1/1) +\sigma(2/1) +\sigma(3/1))^2 & - \\ & 2 \cdot (\sigma(1/2) +\sigma(2/2) +\sigma(3/2))^2 &- \\ & 3 \cdot (\sigma(1/3) +\sigma(2/3) +\sigma(3/3))^2 \end{array}$

Using the same approach as before, we can derive that

$S(N) = \sum_{i=1}^N i \cdot \mu(i) \cdot \left(\sum_{j=1}^{N} j \cdot \lfloor \frac{N}{i \cdot j} \rfloor\right)^2$

where $\mu(\cdot)$ is the Möbius function, defined as

$\mu(n) = \left\{\begin{array}{rl} -1 & n \textnormal{ has an odd number of prime factors,}\\ 1 & n \textnormal{ has an even number of prime factors,}\\ 0 & n \textnormal{ has a squared prime factor.} \end{array}\right.$

We can compute this in $\mathcal{O}(N^{3/2})$ time. The Möbius function can be pre-computed in $\mathcal{O}(N \cdot \log N)$ like so:

def mobius(N):
    L = [1] * (N + 1)
    i = 2
    while i <= N:
        # if i is not sq. free, then no multiple is
        if L[i] == 1 or L[i] == -1:
            L[i] = -L[i]
            sq = i * i
            j = i + i
            while j <= N:
                # we use 2 to mark non-primes
                if abs(L[j]) == 2:
                    L[j] = -L[j]
                else:
                    L[j] = -L[j] * 2
                j += i
            j = sq
            while j <= N:
                L[j] = 0
                j += sq
        i += 1
    i = 2
    while i <= N:
        if abs(L[i]) == 2:
            L[i] = L[i] / 2
        i += 1
    return L


# Problem of dice

The initial purpose of this post was to give a proper proof of a problem posted on Twitter by @jamestanton (it is hard to do within the 140-char limit), but the post was later extended to cover some other problems.

Show that a four-sided die and a nine-sided die cannot be used to simulate the probability distribution of the product of outcomes when using two six-sided dice. In the original wording:

Is there a 4-sided die & a 9-sided die that together roll each of the products 1,2,3,…,30,36 w the same prob as two ordinary 6-sided dice?

We make an argument by contradiction, considering only the possible outcomes without taking the actual probabilities into account.

Obviously, to reach the same outcomes as for two normal dice $\{1,2,3,4,5,6\} \times \{1,2,3,4,5,6\}$, we need both dice to contain $1$ (otherwise, we would not be able to reach $1 \cdot 1 = 1$). So, $\{1,*,*,*\} \times \{1,*,*,*,*,*,*,*,*\}$.

Now, consider the prime $5$. It must be on both dice, or we would have $\mathbb{P}(5^2\cdot b)>0, b>1$. So, $\{1,5,*,*\} \times \{1,5,*,*,*,*,*,*,*\}$. Also, since $5$ appears on both dice, neither die can contain a product of the primes $\{2,3,5\}$ and their powers (e.g. $2^2 \cdot 3$) that does not exist on the original dice, because then impossible products could be reached.

Hence, $6$ must be on both dice, giving $\{1,5,6,*\} \times \{1,5,6,*,*,*,*,*,*\}$. There are $6$ sides left on the larger die but we have more even products, so $2$ must also be on each die: $\{1,5,6,2\} \times \{1,5,6,2,*,*,*,*,*\}$. Now, there is no space left for $3$ on the smaller die. This means that $3^2$ must be on the larger die, but then $\mathbb{P}(3^2\cdot 5)>0$, which is a contradiction.

(@FlashDiaz gave a shorter proof)

Project Euler 205

Peter has nine four-sided (pyramidal) dice, each with faces numbered $1, 2, 3, 4$.
Colin has six six-sided (cubic) dice, each with faces numbered $1, 2, 3, 4, 5, 6$.

Peter and Colin roll their dice and compare totals: the highest total wins. The result is a draw if the totals are equal.

What is the probability that Pyramidal Pete beats Cubic Colin? Give your answer rounded to seven decimal places in the form $0.abcdefg$.

The probability functions of the nine four-sided dice and the six six-sided dice are given by the generating functions $\frac{1}{4^9} \cdot (x^1+x^2+x^3+x^4)^9$ and $\frac{1}{6^6} \cdot (y^1+y^2+y^3+y^4+y^5+y^6)^6$, respectively.

Let $X_1,...,X_9$ be i.i.d. random variables taking values in the range $[1,4]$ and let $Y_1,...,Y_6$ be i.i.d. random variables taking values in the range $[1,6]$. We want to determine the probability $\rho = \mathbb{P}(X_1+...+X_9 > Y_1+...+Y_6)$.

The distributions can be computed as

def rec_compute_dist(sides, nbr, side_sum):
    global dist
    if nbr == 1:
        for i in range(1, sides + 1):
            dist[side_sum + i] += 1
    else:
        for i in range(1, sides + 1):
            rec_compute_dist(sides, nbr - 1, side_sum + i)

dist = [0] * 37
rec_compute_dist(4, 9, 0)
dist_49 = dist
dist = [0] * 37
rec_compute_dist(6, 6, 0)
dist_66 = dist


To determine $\rho$, we may express it as

$\begin{array}{rl} \rho = & \sum_{t=6}^{36} \mathbb{P}(X_1+...+X_9 > t| Y_1+...+Y_6 = t)\cdot \mathbb{P}(Y_1+...+Y_6 = t) \\\\ = & \sum_{t=6}^{36} \mathbb{P}(X_1+...+X_9 > t)\cdot \mathbb{P}(Y_1+...+Y_6 = t) \end{array}$.

Computing the sum using the following code,

probability = 0
for i in range(6, 36 + 1):
    for j in range(i + 1, 36 + 1):
        probability += dist_66[i] * dist_49[j]

print 1.0 * probability / (6**6 * 4**9)


we obtain the answer. Great 🙂

Project Euler 240

There are $1111$ ways in which five six-sided dice (sides numbered $1$ to $6$) can be rolled so that the top three sum to $15$. Some examples are:

$\begin{array}{rcl} D_1,D_2,D_3,D_4,D_5 &=& 4,3,6,3,5\\ D_1,D_2,D_3,D_4,D_5 &=& 4,3,3,5,6\\ D_1,D_2,D_3,D_4,D_5 &=& 3,3,3,6,6\\ D_1,D_2,D_3,D_4,D_5 &=& 6,6,3,3,3 \end{array}$

In how many ways can twenty twelve-sided dice (sides numbered $1$ to $12$) be rolled so that the top ten sum to $70$?
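
The 1111 figure in the statement is small enough to verify by exhaustive enumeration over all $6^5$ ordered rolls:

```python
from itertools import product

# count ordered rolls of five d6 whose top three sum to 15
count = sum(1 for roll in product(range(1, 7), repeat=5)
            if sum(sorted(roll)[2:]) == 15)
assert count == 1111
```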

Let us first consider the simpler problem $\left\{ d_1+\cdots+d_{10}=70 \right\}$. If we restrict the remaining ten dice to be less than or equal to the minimum value of the top-ten dice, we can then compute the cardinality. Let $n_i$ denote the number of $i$‘s we got. Then, $n_1 \cdot 1 + n_2 \cdot 2 + \cdots + n_{12} \cdot 12 = 70$, where $n_1 + n_2 + \cdots + n_{12} = 10$ and $n_i \geq 0$.

All histograms of top-ten dice can be computed with

from copy import copy

d = [0] * 12
possible = []

def rec_compute(i, j, sum):
    global d
    if j == 0:
        if sum == 70:
            possible.append(copy(d))
        return
    while i > 0:
        if sum + i <= 70:
            d[i - 1] += 1
            rec_compute(i, j - 1, sum + i)
            d[i - 1] -= 1
        i -= 1

rec_compute(12, 10, 0)


The code exhausts all solutions in 200 ms. Call any solution $H$, for instance H = [0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0]. The remaining dice can take any values in the range $[1, j]$, where $j$ is the left-most non-zero index (starting from $1$). The number of configurations for this particular solution is then given by

$20! \cdot \left((10+H_7)!H_6!H_5!H_4!H_3!H_2!H_1!\right)^{-1}$, where $\sum^7_1 H_i = 10$.

Unfortunately, there is no good analytical way of computing this, so the easiest way is to enumerate all possible $H_i$. Disregarding $H_7$, we compute all permutations of a given histogram in the same way (hence, we can make the search space a lot smaller) and then use the multiplicity to determine the exact number. All in all, the following code gives our answer:

from math import factorial as fact
from collections import Counter

DICE = 20       # total number of dice
REMAINING = 10  # dice not among the top ten
possible_top_dice = possible  # computed by rec_compute above

def configurations(i, j, x, s, l):
    if sum(x) == s:
        # we can stop here as the sum cannot get smaller
        multiplicity = fact(l) / fact(l - len(x)) / \
            reduce(lambda m, n: m * n,
                   [fact(y) for y in Counter(x).values()])
        return fact(DICE) * multiplicity / \
            reduce(lambda m, n: m * n,
                   [fact(y) for y in x])
    if j == 0 or i == 0:
        return 0
    return configurations(i - 1, j, x, s, l) + \
           configurations(i, j - 1, x + [i], s, l)

S = 0
for H in possible_top_dice:
    min_index = next((i for i, x in enumerate(H) if x), None)

    for j in range(0, REMAINING + 1):
        u = reduce(lambda m, n: m * n, [fact(y) for y in H])

        if j < REMAINING:
            q = configurations(REMAINING - j, min_index,
                               [], REMAINING - j, min_index) / u
        else:
            q = fact(DICE) / u
        H[min_index] += 1
        S += q
print S


# BKPCTF16 – HMAC_CRC

We are given the following implementation of an HMAC in which the hash function has been replaced by a CRC function.

CRC_POLY = to_bits(65, (2**64) + 0xeff67c77d13835f7)

def crc(mesg):
    mesg += CONST
    shift = 0
    while shift < len(mesg) - 64:
        if mesg[shift]:
            for i in range(65):
                mesg[shift + i] ^= CRC_POLY[i]
        shift += 1
    return mesg[-64:]

INNER = to_bits(8, 0x36) * 8
OUTER = to_bits(8, 0x5c) * 8

def xor(x, y):
    return [g ^ h for (g, h) in zip(x, y)]

def hmac(h, key, mesg):
    return h(xor(key, OUTER) + h(xor(key, INNER) + mesg))


We are given two messages, PLAIN_1 = "zupe zecret" and PLAIN_2 = "BKPCTF", along with HMAC_1 = hmac(crc, key, bin(PLAIN_1)), where bin() denotes a binary vector representation.
First, we note that the CRC operation is interpretable as a simple modular polynomial expression, since every binary sequence can be viewed as a polynomial in $\mathbb{F}_2[x] / P(x)$. We use the following mapping: $H(x)$ for HMAC_1; $M(x)$ for mesg; $C(x)$ for CONST; $O(x)$ for OUTER; $I(x)$ for INNER; $P(x)$ for CRC_POLY; and $K(x)$ for key. The CRC of a message $M(x)$ is then

$[M(x)x^{64} + C(x)] \mod P(x).$
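This identity can be sanity-checked numerically. The Python 3 sketch below re-implements the challenge's bit-list crc alongside an integer-packed polynomial reduction over $\mathbb{F}_2$; the to_bits/from_bits helpers and the random CONST are re-created here, since the originals are not shown:

```python
import random

random.seed(1)
POLY = (1 << 64) | 0xeff67c77d13835f7  # the 65-bit CRC polynomial

def to_bits(n, val):
    return [(val >> (n - 1 - i)) & 1 for i in range(n)]

def from_bits(bits):
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

CONST_INT = random.getrandbits(64)  # stand-in for the unknown CONST
CONST = to_bits(64, CONST_INT)
CRC_POLY = to_bits(65, POLY)

def crc(mesg):
    mesg = mesg + CONST
    shift = 0
    while shift < len(mesg) - 64:
        if mesg[shift]:
            for i in range(65):
                mesg[shift + i] ^= CRC_POLY[i]
        shift += 1
    return mesg[-64:]

def polymod(a, p):
    # Reduce polynomial a modulo p over GF(2), both packed into ints.
    deg_p = p.bit_length() - 1
    while a.bit_length() - 1 >= deg_p:
        a ^= p << (a.bit_length() - 1 - deg_p)
    return a

m = random.getrandbits(80)
lhs = from_bits(crc(to_bits(80, m)))
rhs = polymod((m << 64) ^ CONST_INT, POLY)
print(lhs == rhs)  # the bit-level CRC matches [M(x)x^64 + C(x)] mod P(x)
```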

Consequently, the HMAC operation can be written as

$H(x) = \left([K(x)+O(x)]x^{64} + \left([K(x)+I(x)]x^n + M(x)\right)x^{64} + C(x)\right)x^{64} + C(x) \mod P(x).$

Secondly, we note that we can move $K(x)$ out of the expression, i.e.,

$H(x) = \underbrace{\left([O(x)]x^{64} + \left([I(x)]x^n + M(x)\right)x^{64} + C(x)\right)x^{64} + C(x)}_{\textnormal{Independent of the key}} + K(x)(x^{128}+x^{128+n}) \mod P(x).$

With some simple algebra, we can now determine the key $K(x)$ by rearranging the above equation

$\left(H(x) + \left([O(x)]x^{64} + \left([I(x)]x^n + M(x)\right)x^{64} + C(x)\right)x^{64} + C(x)\right)(x^{128}+x^{128+n})^{-1} = K(x) \mod P(x).$

Using Sage, we can easily do these computations:

def bits_to_hex(b):
    return hex(from_bits(b)).rstrip("L")

def from_bits(N):
    return int("".join(str(i) for i in N), 2)

P.<x> = PolynomialRing(GF(2))
I = P.ideal([x^64+x^63+x^62+x^61+x^59+x^58+x^57+x^56+x^55+x^54+x^53+x^52+x^50+x^49+x^46+x^45+x^44+x^43+x^42+x^38+x^37+x^36+x^34+x^33+x^32+x^31+x^30+x^28+x^24+x^21+x^20+x^19+x^13+x^12+x^10+x^8+x^7+x^6+x^5+x^4+x^2+x^1+1])
S = P.quotient_ring(I)
In =  x^61+x^60+x^58+x^57+x^53+x^52+x^50+x^49+x^45+x^44+x^42+x^41+x^37+x^36+x^34+x^33+x^29+x^28+x^26+x^25+x^21+x^20+x^18+x^17+x^13+x^12+x^10+x^9+x^5+x^4+x^2+x^1
Out =  x^62+x^60+x^59+x^58+x^54+x^52+x^51+x^50+x^46+x^44+x^43+x^42+x^38+x^36+x^35+x^34+x^30+x^28+x^27+x^26+x^22+x^20+x^19+x^18+x^14+x^12+x^11+x^10+x^6+x^4+x^3+x^2
M =  x^86+x^85+x^84+x^83+x^81+x^78+x^77+x^76+x^74+x^72+x^70+x^69+x^68+x^62+x^61+x^58+x^56+x^53+x^46+x^45+x^44+x^43+x^41+x^38+x^37+x^34+x^32+x^30+x^29+x^25+x^24+x^22+x^21+x^20+x^17+x^14+x^13+x^10+x^8+x^6+x^5+x^4+x^2
Hmac =  x^63+x^61+x^58+x^56+x^54+x^53+x^52+x^51+x^50+x^48+x^46+x^41+x^40+x^39+x^37+x^29+x^28+x^25+x^23+x^22+x^21+x^20+x^19+x^18+x^17+x^15+x^13+x^12+x^9+x^7+x^2+x^1
CONST = x^63+x^61+x^59+x^57+x^56+x^55+x^53+x^51+x^50+x^48+x^47+x^46+x^44+x^43+x^42+x^41+x^39+x^37+x^35+x^34+x^32+x^31+x^29+x^28+x^27+x^26+x^25+x^23+x^22+x^21+x^19+x^18+x^17+x^16+x^12+x^11+x^10+x^8+x^7+x^6+x^5+x^3+x^1
Y = Hmac + ((Out)*x^64 + ((In)*x^88 + M)*x^64 + CONST)*x^64 + CONST
X = (x^128+x^(128+88))
Xinv = X.xgcd(x^64+x^63+x^62+x^61+x^59+x^58+x^57+x^56+x^55+x^54+x^53+x^52+x^50+x^49+x^46+x^45+x^44+x^43+x^42+x^38+x^37+x^36+x^34+x^33+x^32+x^31+x^30+x^28+x^24+x^21+x^20+x^19+x^13+x^12+x^10+x^8+x^7+x^6+x^5+x^4+x^2+x^1+1)[1]
K = S(Xinv*Y)
Kbin = [K[i] for i in range(0,64)]
Kbin.reverse()
print "The key is", bits_to_hex(Kbin)


and obtain the key 0xae5617703acedc88. Substituting the key into the code, we get the flag BKPCTF{0xd2db2b8b9002841f}.

# PlaidCTF 2015 : Strength

This challenge was one of the crypto-related tasks during PlaidCTF 2015. The attacker (you) is given a set of public exponents, sharing a common modulus $N$, and the corresponding ciphertexts of the same message. The task is to recover the plaintext (flag).

$\left\{ \begin{array}{ccc} c_1 & = & m^{e_1} \bmod{N} \\ c_2 & = & m^{e_2} \bmod{N} \\ & \vdots & \\ c_k & = & m^{e_k} \bmod{N}\end{array} \right.$

A fairly trivial observation is as follows: if we succeed in finding a linear combination such that $\sum_{i=1}^k a_ie_i = 1$, $a_i \in \mathbb{Z}$, then it can be used to unravel the message, i.e.,

$\prod_{i=1}^k(m^{e_i})^{a_i} = m^{\sum_{i=1}^k a_ie_i} = m \pmod{N}.$

In fact, it suffices to find two relatively prime exponents $e_i$ and $e_j$. Given these, we can use the extended Euclidean algorithm to find a linear combination equal to $1$. Scanning through the list, we find that two exponents, $\texttt{0x6b8a5ae7}$ and $\texttt{0x4042c3955}$, are relatively prime, i.e.,

$-a_ie_i + a_je_j = \texttt{-0xe860d9d9*0x6b8a5ae7 + 0x184e2990*0x4042c3955} = 1.$
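The coprimality and the Bézout coefficients can be verified with a few lines of Python 3 (a standalone check, using its own egcd):

```python
def egcd(a, b):
    # Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g.
    if a == 0:
        return (b, 0, 1)
    g, y, x = egcd(b % a, a)
    return (g, x - (b // a) * y, y)

e_i, e_j = 0x6b8a5ae7, 0x4042c3955
g, a_i, a_j = egcd(e_i, e_j)
assert g == 1                      # the exponents are indeed coprime
assert a_i * e_i + a_j * e_j == 1  # the Bezout identity holds
print(a_i, a_j)
```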

So,

$(m^{e_i})^{-a_i}\cdot (m^{e_j})^{a_j} = (c_i^{-1})^{a_i}\cdot(c_j)^{a_j} = m.$

This could be realized in code in the following way:


def egcd(a, b):
    if a == 0:
        return (b, 0, 1)
    else:
        g, y, x = egcd(b % a, a)
        return (g, x - (b // a) * y, y)

def modinv(a, m):
    g, x, y = egcd(a, m)
    if g != 1:
        raise Exception('modular inverse does not exist')
    else:
        return x % m

def modexp(a, n, m):
    bits = []
    while n:
        bits.append(n % 2)
        n /= 2
    solution = 1
    bits.reverse()
    for x in bits:
        solution = (solution * solution) % m
        if x:
            solution = (solution * a) % m
    return solution

B = 0x8caeaa7d272f9606fee9222efd1d922143db738b95bd64746b27bc4c0fd979a2c57b4735131a4391a81bf5f0c0c8eea41d4f91bed4d17784b1956fd89882b97c98009051ac3a03964499c864524d3ddc10299c0290e91707b62ce89b118afe558151be39d61de0483def52c6cb546132ecab85143715bc593a2892b1e41b37b9L

exp1 = 0x6b8a5ae7
exp2 = 0x4042c3955

# A (the ciphertext under exp1) and N (the shared modulus)
# are the challenge values, omitted here for brevity.
_, x, y = egcd(exp1, exp2)
print hex((modexp(modinv(A, N), -x, N) * modexp(B, y, N)) % N)


from which we obtain

$\texttt{0x666c61675f537472656e6774685f4c6965735f496e5f446966666572656e636573L},$

or in ascii ‘flag_Strength_Lies_In_Differences’.
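The whole attack fits in a few lines when run on toy parameters; the modulus, exponents, and message below are made up for illustration (Python 3):

```python
def egcd(a, b):
    if a == 0:
        return (b, 0, 1)
    g, y, x = egcd(b % a, a)
    return (g, x - (b // a) * y, y)

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

N = 3233            # toy modulus 61 * 53
e1, e2 = 7, 11      # coprime public exponents
m = 42              # the "flag"
c1, c2 = pow(m, e1, N), pow(m, e2, N)

# Find a1*e1 + a2*e2 = 1 and combine the ciphertexts accordingly;
# negative exponents are handled via modular inverses.
g, a1, a2 = egcd(e1, e2)

def power(c, e):
    return pow(modinv(c, N) if e < 0 else c, abs(e), N)

recovered = power(c1, a1) * power(c2, a2) % N
print(recovered)  # 42
```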

# Understanding the disclosure attack

Techniques for providing anonymity in communication have a long history. In the 1980s, David Chaum published novel techniques based on mix networks (or mixnets), which had several positive ramifications for cryptography as a whole. A mixnet can be described as a box which accepts messages from multiple senders. The messages are shuffled and then propagated through the network, in random order, to the recipients.

An $n$-to-$n$ mixnet.

Assume that a user, Alice, wants to send a message $M$ through a mixnet. Then, the following procedure is performed.

1. Introduce randomness $R_0$ to the message.
2. Encrypt the message (including the randomness $R_0$) using the public key $K$ of the recipient.
3. Include the address of the recipient, denoted $A$ and some additional randomness $R_1$.
4. Encrypt what is obtained in the previous step with the public key $K_m$ of the mixnet.

The message that Alice sends is then $E_{K_m}(R_1,E_K(R_0,M),A)$. The following is then performed by the mixnet:

1. The mixnet decrypts using its private key.
2. The mixnet removes randomness $R_1$.
3. The mixnet extracts the address of the recipient, i.e., $A$, and sends $E_K(R_0,M)$ to that address.

The purpose of the randomness $R_0$ is to avoid direct guessing attacks on the encryption used between sender and recipient – for instance, if the mixnet is malicious. Moreover, the randomness $R_1$ asserts that the address $A$ remains known only to the mixnet, Alice and obviously, the recipient. The use of mixnets can, and probably should, be iterated in several steps.
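To make the layering concrete, here is a Python 3 toy model. Real mixnets use public-key encryption; this sketch substitutes a SHA-256 XOR keystream (so decryption equals encryption), and the keys, address, and message are invented. It only serves to show how the layers nest and peel:

```python
import hashlib, os

def E(key, data):
    # Toy cipher: XOR with a SHA-256-derived keystream (NOT real encryption).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

D = E  # XOR keystream: decryption is the same operation

K, K_m = b"recipient-key", b"mixnet-key"  # made-up keys
R0, R1 = os.urandom(8), os.urandom(8)     # randomness, 8 bytes each
A = b"addr:recipient-1"                   # 16-byte recipient address
M = b"hello recipient"

# Sender: build E_Km(R1, E_K(R0, M), A)
inner = E(K, R0 + M)
onion = E(K_m, R1 + inner + A)

# Mixnet: peel one layer, discard R1, read A, forward E_K(R0, M)
peeled = D(K_m, onion)
rest = peeled[8:]                    # drop R1
forwarded, addr = rest[:-16], rest[-16:]

# Recipient: decrypt and drop R0 to obtain M
print(addr == A, D(K, forwarded)[8:] == M)
```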

The disclosure attack is an attack that targets anonymity systems based on Chaum mixnets and was introduced by Kesdogan, Agrawal and Penz in their seminal paper [1]. The adversary is able to observe incoming and outgoing messages, and to see which senders and recipients take part in the communication.

The attack is performed under the assumption that a user Alice has $m$ partners with whom she communicates regularly, and the aim is to unravel these recipients. Denote the total number of users in the system by $N$; the number of senders in each batch is denoted $b$ (where $1 < b$), and the number of recipients in a batch is denoted $n$. All senders in a batch are assumed to be distinct, i.e., a sender can only send one message per batch. Then, $n \leq b$, since more than one sender can send messages to the same recipient. It is assumed that every sender chooses their recipient uniformly among the set of their partners in each communication.

The attack is divided into two phases, the learning phase and the excluding phase.

1. In the learning phase the adversary records all receivers in a batch when a message from Alice is sent. This set of recipients is denoted by $\mathcal R_i$ for some $0 < i \leq m$.

Whenever Alice is active, the recipients are recorded and put into $\mathcal R_t$.

The process is repeated until the adversary has $m$ mutually disjoint sets, i.e., until

$\mathcal R_i \cap \mathcal R_j = \emptyset, \quad \forall i \neq j.$

Provided that we obtain $m$ sets, each set $\mathcal R_i$ contains one and only one partner of Alice. Here is a pitfall: if the sets are sampled sequentially, some set may contain two of Alice’s partners, and then we will never be able to find $m$ disjoint sets.

Expressed in Python code, it looks something like the following.

import random

# Example parameters: N users, m partners, n recipients per batch
N, m, n = 1000, 9, 9

# The universe of all possible destination addresses
U = set(xrange(N))

# Alice's communicating partners
P = set(random.sample(U, m))

# Pick (with duplicates) a random set with at least one element from P
R = set(random.sample(P, 1)).union(
    set([random.randint(0, N - 1) for i in xrange(n - 1)]))

# Begin learning phase
list_of_R = [R]
flag = 1
for i in xrange(m - 1):
    while 1:
        R = set(random.sample(P, 1)).union(
            set([random.randint(0, N - 1) for i in xrange(n - 1)]))
        flag = 0
        for L in list_of_R:
            if len(L.intersection(R)) != 0:
                flag = 1
                break
        if flag == 0:
            list_of_R.append(R)
            break

2. The attack now proceeds to the excluding phase. A new observation $\mathcal R$ is recorded (again, when Alice is known to communicate). If the following is satisfied

$\mathcal R \cap \mathcal R_i \neq \emptyset$ and $\mathcal R \cap \mathcal R_j = \emptyset, \quad \forall i \neq j$,

then $\mathcal R_i$ is replaced by $\mathcal R \cap \mathcal R_i$. With high probability, this reduces the cardinality of $\mathcal R_i$. This procedure is repeated until all sets $\mathcal R_i$, $0 < i \leq m$, have cardinality 1, i.e., contain only one recipient. Of course, since all sets were disjoint, the communicating partners of Alice have been unraveled.

Again, expressed in Python code, we have the following.

# Excluding phase
oflag = 1
while oflag == 1:
    oflag = 0
    flag = 1
    for i in xrange(m):
        R = set(random.sample(P, 1)).union(
            set([random.randint(0, N - 1) for k in xrange(n - 1)]))
        L = list_of_R[i]
        if len(L.intersection(R)) != 0:
            flag = 0
            for j in set(xrange(m)).difference({i}):
                if len(list_of_R[j].intersection(R)) != 0:
                    flag = 1
            if flag == 0:
                list_of_R[i] = L.intersection(R)
                if len(list_of_R[i]) > 1:
                    oflag = 1

# Verify that the found set is the set P
print P == set.union(*list_of_R)


There is an obvious limitation to the attack. If

$m > \lfloor N/n \rfloor,$

then it is not possible to find $m$ disjoint sets in the learning phase.

## Another take at a disclosure attack

Under the assumptions from before, it is very simple to formulate a statistical attack that is much simpler to execute (and is considerably faster).

By using the recorded observations (when Alice communicates), we may simply make a histogram of the recipients in the batches. Let $c$ be the number of batches recorded; then the expected number of recorded communications with a non-partner of Alice is $c(n-1)/N$, while for a partner it is $c(1/m + (n-1)/N)$.

In the graph below, $c = 1000$, $n = 9$, $m = 9$. Clearly, determining the communicating partners of Alice is not hard.

This very simple attack construction performs better than [1] and also works for $m > \lfloor N/n \rfloor$. The assumptions made in [1] are also a bit unrealistic, in particular the assumption of a uniform recipient distribution among all other communicating parties.
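A minimal Python 3 simulation of this histogram attack, with $c = 1000$ and $n = m = 9$ as in the graph; the value $N = 400$ is an assumption, since $N$ is not stated above:

```python
import random

random.seed(0)
N, m, n, c = 400, 9, 9, 1000  # N is assumed; c, n, m as in the graph

partners = random.sample(range(N), m)

# Tally recipients over c batches in which Alice participates: one recipient
# is a uniformly chosen partner, the other n - 1 are uniform over all users.
counts = [0] * N
for _ in range(c):
    counts[random.choice(partners)] += 1
    for _ in range(n - 1):
        counts[random.randrange(N)] += 1

# The m largest counts should single out Alice's partners.
guessed = sorted(range(N), key=lambda u: -counts[u])[:m]
print(set(guessed) == set(partners))
```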

[1] D. Kesdogan, D. Agrawal, and S. Penz. Limits of anonymity in open environments. In Revised Papers from the 5th International Workshop on Information Hiding, volume 2578 of Lecture Notes in Computer Science, pages 53–69. Springer-Verlag, 2002.