# Finding magic hashes with hashcat

In recent months, the problem of finding magic hashes for various hash functions has had a renaissance. After this tweet, the over-a-decade-old problem received new attention from a few people in the security community and, as a consequence, a flurry of new magic hashes were found.

Magic hashes have their roots in PHP. In particular, everything revolves around the improper use of the loose equality operator ==. PHP uses implicit type juggling: if a string has the format 0e[digits], it is interpreted as the scientific notation $0 \times 10^{\texttt{[digits]}}$. This has an undesired side effect: when password hashes are compared with ==, a small but non-negligible subset of hashes are all interpreted as the number zero and therefore compare equal to each other.
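To make this concrete, here is a small Python sketch (an illustration, not part of PHP or of the original post) of the magic-hash predicate; the string 240610708 is a well-known input whose MD5 digest has the 0e form:

```python
import hashlib
import re

def is_magical(hex_digest: str) -> bool:
    # PHP's == treats "0e<digits>" as the float 0 * 10^<digits>, i.e., zero
    return re.fullmatch(r"0e[0-9]+", hex_digest) is not None

# md5("240610708") starts with 0e followed only by decimal digits,
# so under PHP's == it compares equal to any other magic hash
print(is_magical(hashlib.md5(b"240610708").hexdigest()))  # True
print(is_magical(hashlib.md5(b"hello").hexdigest()))      # False
```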

The probability of a hash being magic is easy to compute if we assume that the hash function behaves completely at random. Let $H : A \rightarrow B$ be a hash function with an $l$-hexdigit output. A hash is magic if it starts with 0e and all remaining digits are decimal, so we have

$\mathbb{P}(H(x) \text{ is magical}) = \left(\frac{1}{10}\right)^2 \cdot \left(\frac{10}{16}\right)^l$

for an $x$ randomly selected over all strings of any length.

The inverse probability amounts to the number of trials (or hash-function oracle calls) required on average to find a magic hash. For instance, SHA-256, which has an output length of $l = 64$, corresponds to about $2^{50}$ calls.
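As a sanity check, the formula can be evaluated numerically for common digest lengths (a short Python sketch using the probability above):

```python
import math

def expected_calls(l: int) -> float:
    # inverse of P(H(x) is magical) = (1/10)^2 * (10/16)^l
    p = (1 / 10) ** 2 * (10 / 16) ** l
    return 1 / p

# hex-digit output lengths: MD5 = 32, SHA-224 = 56, SHA-256 = 64
for name, l in [("MD5", 32), ("SHA-224", 56), ("SHA-256", 64)]:
    print(f"{name}: about 2^{math.log2(expected_calls(l)):.1f} calls")
```

For SHA-256 this gives about $2^{50}$ calls, and for SHA-224 about $2^{44.6}$, which is why the latter is a more comfortable target.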

Modifying hashcat kernels

We will take SHA-224 as an example, since it is a bit easier to find magic hashes for. First, we are going to define two masks:

The first one returns 0 if the hex representation contains nothing but decimal digits:

#define  IS_MAGIC(x)    ((x & 0x88888888) & (((x & 0x22222222) >> 1 | (x & 0x44444444) >> 2) & ((x & 0x88888888) >> 3)) << 3)


The second one does the same as the first, but it forces the first two hex digits to 0e.

#define  IS_MAGIC_H(x)  ((x & 0x00888888) & (((x & 0x00222222) >> 1 | (x & 0x00444444) >> 2) & ((x & 0x00888888) >> 3)) << 3) | ((x & 0xff000000) ^ 0x0e000000)
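Before wiring the macros into the kernel, the bit trick can be verified on its own. The following Python sketch (not from the original post) replicates IS_MAGIC and checks it against a straightforward string-based test; a nibble is non-decimal (a–f) exactly when its high bit is set together with at least one of the two middle bits:

```python
import random

def IS_MAGIC(x: int) -> int:
    # non-zero iff some hex nibble of x has value >= 10 (a..f)
    return (x & 0x88888888) & ((((x & 0x22222222) >> 1 | (x & 0x44444444) >> 2) & ((x & 0x88888888) >> 3)) << 3)

def reference(x: int) -> bool:
    # True iff the 8-hexdigit representation contains a non-decimal digit
    return any(c in "abcdef" for c in f"{x:08x}")

for _ in range(10000):
    x = random.getrandbits(32)
    assert (IS_MAGIC(x) != 0) == reference(x)
print("ok")
```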


We add these lines to inc_hash_sha224.h. Then, we locate the function sha224_final_vector in inc_hash_sha224.cl.

DECLSPEC void sha224_final_vector (sha224_ctx_vector_t *ctx)
{
  MAYBE_VOLATILE const int pos = ctx->len & 63;

  append_0x80_4x4 (ctx->w0, ctx->w1, ctx->w2, ctx->w3, pos ^ 3);

  if (pos >= 56)
  {
    sha224_transform_vector (ctx->w0, ctx->w1, ctx->w2, ctx->w3, ctx->h);

    ctx->w0[0] = 0;
    ctx->w0[1] = 0;
    ctx->w0[2] = 0;
    ctx->w0[3] = 0;
    ctx->w1[0] = 0;
    ctx->w1[1] = 0;
    ctx->w1[2] = 0;
    ctx->w1[3] = 0;
    ctx->w2[0] = 0;
    ctx->w2[1] = 0;
    ctx->w2[2] = 0;
    ctx->w2[3] = 0;
    ctx->w3[0] = 0;
    ctx->w3[1] = 0;
    ctx->w3[2] = 0;
    ctx->w3[3] = 0;
  }

  ctx->w3[2] = 0;
  ctx->w3[3] = ctx->len * 8;

  sha224_transform_vector (ctx->w0, ctx->w1, ctx->w2, ctx->w3, ctx->h);

  ctx->h[0] = IS_MAGIC_H(ctx->h[0]); // first block starts with 0e
  ctx->h[1] = IS_MAGIC(ctx->h[1]);
  ctx->h[2] = IS_MAGIC(ctx->h[2]);
  ctx->h[3] = IS_MAGIC(ctx->h[3]);
  ctx->h[4] = IS_MAGIC(ctx->h[4]);
  ctx->h[5] = IS_MAGIC(ctx->h[5]);
  ctx->h[6] = IS_MAGIC(ctx->h[6]);
}


Finally, we need to modify the corresponding module m01300_a3_pure.cl. More specifically, we modify the following function:

KERNEL_FQ void m01300_sxx (KERN_ATTR_VECTOR ())
{
  /**
   * modifier
   */

  const u64 lid = get_local_id (0);
  const u64 gid = get_global_id (0);

  if (gid >= gid_max) return;

  /**
   * digest
   */

  const u32 search[4] =
  {
    digests_buf[digests_offset].digest_buf[DGST_R0],
    digests_buf[digests_offset].digest_buf[DGST_R1],
    digests_buf[digests_offset].digest_buf[DGST_R2],
    digests_buf[digests_offset].digest_buf[DGST_R3]
  };

  /**
   * base
   */

  const u32 pw_len = pws[gid].pw_len;

  u32x w[64] = { 0 };

  for (u32 i = 0, idx = 0; i < pw_len; i += 4, idx += 1)
  {
    w[idx] = pws[gid].i[idx];
  }

  /**
   * loop
   */

  u32x w0l = w[0];

  for (u32 il_pos = 0; il_pos < il_cnt; il_pos += VECT_SIZE)
  {
    const u32x w0r = words_buf_r[il_pos / VECT_SIZE];

    const u32x w0 = w0l | w0r;

    w[0] = w0;

    sha224_ctx_vector_t ctx;

    sha224_init_vector (&ctx);

    sha224_update_vector (&ctx, w, pw_len);

    sha224_final_vector (&ctx);

    const u32x r0 = ctx.h[0];
    const u32x r1 = ctx.h[1];
    const u32x r2 = ctx.h[2];
    const u32x r3 = ctx.h[3];
    const u32x r4 = ctx.h[4];
    const u32x r5 = ctx.h[5];
    const u32x r6 = ctx.h[6];

    // this is one hacky solution
    if ((r4 == 0) && (r5 == 0) && (r6 == 0))
    {
      COMPARE_S_SIMD (r0, r1, r2, r3);
    }
  }
}


This assumes no vectorization. Now build hashcat. Since all magic hashes will be mapped to the all-zero sequence, we run it as follows:

$ ./hashcat.bin -m 1300 00000000000000000000000000000000000000000000000000000000 -a 3 --self-test-disable --keep-guessing

--keep-guessing causes hashcat to continue looking for hashes, even after one has been found. --self-test-disable is needed, since we have modified how SHA-224 functions, and the test cases defined in hashcat will consequently fail.

Running it on AWS

TBA

Results

Relatively quickly, we found several magic hashes for SHA-224 using this modification:

noskba6h0:0e615700362721473273572994672194243561543298826708511055
Ssrodxcm0:0e490606683681835610577024835460055379837761934700306599
i04i19pjb:0e487547019477070898434358527128478302010609538219168998
Fuq0guvec:0e469685182044169758492939426426028878580665828076076227
Sjv2n8fjg:0e353928003962053988403389507631422927454987073208369549
L0gg4bvt5:0e730381844465899091531741380493299916620289188395999379
7v2k2to5o:0e676632093970918870688436761599383423043605011248525140

For SHA-256:

Sol7trnk00:0e57289584033733351592613162328254589214408593566331187698889096

For a more comprehensive list of hashes, refer to this Github repo. Thanks to @0xb0bb for providing crucial help with computational resources.

# On the hardness of finding minimal-weight codewords in a QC-MDPC code

Problem 1 (Knapsack): We want to maximize the value of a subset of items such that the weight of the subset is no larger than $W$:

$\max \sum_{i \in \mathcal{S}} v_i x_i \quad \text{subject to} \quad \sum_{i \in \mathcal{S}} w_i x_i \leq W.$

Let us first study another problem. It is easy to see that this problem is at least as hard as Problem 1.

Problem 2 (Auxiliary problem):

$\max x_1x_2x_3 + x_2x_5 + x_4x_6x_7 + \cdots \quad \text{subject to} \quad \left\|\begin{matrix} x_1 & x_2 & \cdots & x_k \end{matrix}\right\|_1 = m, \quad x_i \in \{0, 1\}.$

We can solve Problem 1 using an oracle for Problem 2. For a knapsack item with weight $w_i$ and value $v_i$, we create $v_i$ copies of the term $x_1x_2\cdots x_{w_i}$. The variables must be unique for every item.
This is done for every item in the knapsack. So, Problem 2 is NP-hard.

Now, we want to build some theory towards polynomials. Consider the following construction. Assign to each $x_i$ a corresponding monomial $(x - a_i)$. Define $p(x) = x^p + 1 \in \mathbb{F}_q[x]$, where $p$ is a prime number, to be the product $\prod_{i=1}^{p}(x-a_i)$. Now assume that we want to represent the term $x_1 x_2 x_3$. We map it to the polynomial

$Q_1(x) = p(x) \cdot \left[(x-a_1)(x-a_2)(x-a_3)\right]^{-1}.$

This means that $Q_1(x) \cdot (x-a_1)(x-a_2)(x-a_3) = 0 \bmod p(x)$. In the same way, we create polynomials for the remaining terms,

$Q_2(x) = p(x) \cdot \left[(x-a_2)(x-a_5)\right]^{-1},$

$Q_3(x) = p(x) \cdot \left[(x-a_4)(x-a_6)(x-a_7)\right]^{-1},$

and so on. We assume there are $n$ terms. To consider all terms jointly, pick a large number $d$. Then, the polynomial $Q(x)$ is constructed as

$Q(x) = \sum_{i=1}^n x^{di} \cdot Q_i(x).$

Since $d$ is large, the terms of $Q_i(x)$ and $Q_{i+1}(x)$ will not interfere.

We now turn this into an instance of finding a low-weight codeword in a QC-MDPC code.

Problem 3 (Low-weight codeword of QC-MDPC code): Let $G$ be the generator matrix of a QC-MDPC code. Find a non-zero codeword with minimal weight, i.e., minimize $\| \mathbf u G \|_1$.

There is a bijective mapping from a quasi-cyclic code to a polynomial, i.e., $G(x)$ generates a code as well. So, $Q(x)$ generates a quasi-cyclic code. Let us first assume we have access to an oracle for Problem 2. Indeed, an optimal solution to Problem 2 gives rise to a low-weight codeword in the corresponding code. Assume that the optimum is $x_1x_2x_3 + x_2x_5 + x_4x_6x_7 + \cdots = b$. This means that $u(x)$ is of degree $m$ and that $\|u(x) G(x)\|_1 \leq p(n-b)$, i.e., $u(x) G(x)$ has at most $p (n-b)$ non-zero coefficients. Conversely, using an oracle for Problem 3, we find a minimal-weight codeword for the generator $Q(x)$.
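The reduction from Problem 1 to Problem 2 can be sanity-checked by brute force. The following Python sketch uses a toy instance with made-up weights and values, sets $m = W$, gives each item fresh variables, and finds the same optimum for both problems:

```python
from itertools import product

# Hypothetical toy instance: (weight, value) pairs and capacity W
items = [(2, 3), (3, 4), (1, 2)]
W = 4

# Brute-force optimum of Problem 1 (knapsack)
best_knapsack = max(
    sum(v for (w, v), pick in zip(items, bits) if pick)
    for bits in product([0, 1], repeat=len(items))
    if sum(w for (w, v), pick in zip(items, bits) if pick) <= W
)

# Problem 2 instance: item i gets w_i fresh variables; its v_i identical
# terms contribute v_i when all of the item's variables are set to 1
slices, start = [], 0
for w, v in items:
    slices.append((range(start, start + w), v))
    start += w

# Brute-force optimum of Problem 2 with ||x||_1 = m = W
best_aux = max(
    sum(v for idx, v in slices if all(bits[i] for i in idx))
    for bits in product([0, 1], repeat=start)
    if sum(bits) == W
)

print(best_knapsack, best_aux)  # both 6 for this instance
```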
# Experiments with index calculus

In index calculus, the main problem is to represent powers of $g$ in a predefined prime-number basis. We are interested in unraveling $x$ from $h = g^x \bmod p$.

Normal approach

First, we find some offset $hg^j = g^{x+j} \bmod p$ such that the factorization is $B$-smooth. Using the prime-number basis $(-1)^{e_1}2^{e_2}\cdots$, we generate a corresponding exponent vector

$\mathbf y = \begin{pmatrix}1 & 0 & 3 & 0 & \cdots & 1\end{pmatrix}.$

Then, we generate powers $g^1, g^2, \dots$ and check if they have the same property. The results are put into a matrix, after which one performs Gaussian elimination over $\mathbb{Z}_{p-1}$:

$A | \mathbf v^T = \left( \begin{array}{cccccc|c}0 & 3 & 2 & 2 & \cdots & 1 & \mathbf{13} \\ 1 & 0 & 2 & 1 & \cdots & 0 & \mathbf{112} \\ 0 & 5 & 2 & 0 & \cdots & 0 & \mathbf{127} \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ 1 & 1 & 0 & 0 & \cdots & 0 & \mathbf{67114}\end{array}\right)$

We solve $\mathbf x A = \mathbf y$ and then compute $\mathbf x \cdot \mathbf v$ to find $x+j$.

Different approach

Almost as in the normal approach, we find some offset $hg^j = g^{x+j} \bmod p$, but now we only require that some factor of $hg^j \bmod p$ greater than $\sqrt{p}$ is $B$-smooth. Again, using the prime-number basis $(-1)^{e_1}2^{e_2}\cdots$, we generate a corresponding vector

$\mathbf y | \Delta = \left(\begin{array}{cccccc|c}0 & 2 & 5 & 1 & \cdots & 0 & \Delta\end{array} \right).$

The vector is chosen such that $\Delta$ corresponds to the product of the primes not represented in the chosen basis. In other words, $(-1)^0 \cdot 2^2 \cdot 3^5 \cdot 7^1 \cdots > \sqrt{p}$ for the vector above, or equivalently, $\Delta < \sqrt p$. Again, we generate powers $g^1, g^2, \dots$, check if they have the same property as above, and proceed as in the normal approach.
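The per-offset smoothness test that produces such a row can be sketched in Python. This is a toy illustration with made-up parameters ($p = 1019$, $g = 2$, $h = 555$, and the factor basis are purely illustrative); the leftover factor returned by the helper plays the role of $\Delta$ above:

```python
# Factor n over the prime basis and record the exponent vector.
def exponent_vector(n, basis):
    e = []
    for q in basis:
        k = 0
        while n % q == 0:
            n //= q
            k += 1
        e.append(k)
    return e, n  # n == 1 means B-smooth; otherwise n is the leftover Delta

p, g, h = 1019, 2, 555   # made-up toy values
basis = [2, 3, 5, 7, 11, 13]

for j in range(1, p):
    e, delta = exponent_vector(h * pow(g, j, p) % p, basis)
    if delta == 1:  # fully B-smooth: this j yields the vector y
        print(j, e)
        break
```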
$A | \mathbf v^T | \mathbf d^T = \left( \begin{array}{cccccc|c|c} 1 & 0 & 3 & 0 & \cdots & 0 & \mathbf{5} & \delta_1 \\ 0 & 3 & 2 & 2 & \cdots & 1 & \mathbf{13} & \delta_2 \\ 0 & 1 & 2 & 1 & \cdots & 0 & \mathbf{65} & \delta_3 \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ 0 & 2 & 0 & 0 & \cdots & 0 & \mathbf{13121} & \delta_B \end{array}\right)$

We solve $\mathbf x A = \mathbf y$ and then compute $\mathbf x \cdot \mathbf v$ to find $x+j$. There is a catch here: the answer is correct if and only if

$\prod_i \delta_i ^{x_i} \bmod p = \Delta.$

It remains to see if this actually improves over the normal approach.

# Suggestion for proof of retrievability

(Originally posted here)

Scenario

In this scenario, $A$ and $B$ want to distribute segments of their data (preferably encrypted) to each other. We do not care about coverage; we are trying to maximize the amount of remotely stored data. The game is essentially that any peer will try to minimize its own storage requirements and thus maximize other peers’.

Protocol

Two peers $A$ and $B$ repeatedly negotiate storage space according to the following protocol, where $A$ is the prover and $B$ the verifier. Let $S(B,A)$ be the set of segments owned by $B$ and stored by $A$. Moreover, let $s \in S(B,A)$ be an arbitrary segment and let $\text{sig}(s)$ denote the signature of $s$.

1. $B$ picks a random string $r$ of length $n$ bits (equal to the length of the signature).
2. $B$ asks $A$ to send the closest segment from $S(B,A)$, i.e., the $s$ for which $\| r - \text{sig}(s) \|_1$ is minimized.
3. $B$ verifies the segment $s$ via the signature. Then, using the distance $\| r - \text{sig}(s) \|_1$, $B$ can estimate the number of segments stored by $A$.

Repeat until the variance is low enough. The procedure is performed both ways.
Approximating $\delta$

Intuitively, we can think of $S(B,A)$ as a random (non-linear) code $\mathcal{C}$ over $\mathbb{F}_2^n$; the task is then to find a minimum-weight codeword in $\mathcal{C} + r = \{ c + r \mid c \in \mathcal C\}$, and $N$ corresponds to the number of codewords in $\mathcal{C}$. Assume that $A$ has stored $N$ segments. Without loss of generality, we can assume that $r$ is the all-zero string, transforming the problem into finding the minimum of $N$ binomial random variables. Let $X_1, X_2, \dots, X_N \sim \textsf{Bin}(n, \frac12)$ and $Z := \min(X_1, X_2, \dots, X_N)$. Then, with high probability and for larger values of $N$,

$\frac n2 - \sqrt{\frac n3 \log N} \geq E(Z) \geq \frac n2 - \sqrt{\frac n2 \log N}.$

The above can be obtained by a Gaussian approximation and the conventional bound on the expectation of the minimum of Gaussian random variables. We omit the proof here. Of course, this can be pre-computed for reasonable values of $N$.

Assume there exists a peer $A$ with infinite computing power that tries to minimize its required storage for a given minimum distance $d$. Done optimally, $A$ will pick equidistant points in the space $\mathbb{F}_2^n$. This corresponds to a perfect error-correcting code with minimum distance $d$. The covering-coding bound states that $d \leq n-k$. For a perfect code, which has minimal size, we have $d = n-k$. Therefore, the size of the code is $|\mathcal C| = 2^{n-d}$, which is also the number of elements $A$ needs to store.

A glaring problem is that $A$ cannot pick points arbitrarily (since the points must be valid signatures). The best $A$ can do is to decide upon a perfect code $\mathcal{C}$ (or pick the code closest to the current set of points, which is a very hard problem). Then, $A$ will look for points $v$ and $v'$ that map to the same codeword $c \in \mathcal{C}$ and discard the point that is farthest from $c$. For larger $n$ and smaller $d$, it is not realistic to achieve the covering-coding bound, since in reality there are not that many shared segments. So, even given infinite computation, $A$ will not be able to sample points until a perfect code is achieved.

The covering-coding bound serves as a crude estimation on how to set the parameters.

Simulating

In Figure $1$, we see that the upper bound on the expected minimum distance is a good approximation for $N$ larger than $2^7$, with $n = 256$.

Assume that $A$ sends a signature $s$ along with the segment. Let

$\delta(r,s) = \|r - \text{sig}(s)\|_1 - \frac n2.$

$B$ can now use different bounds at its own choice. Assume that $\delta(r,s) = 100 - \frac{256}{2}.$ Then, using the expression

$N \approx \exp\left(-\frac 2n \cdot \delta(r,s) \cdot \text{abs}\left(\delta(r,s)\right) \right)$

$B$ determines that $A$ most likely has at least $458$ segments. Using the lower bound, $A$ has at least $9775$ segments.
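The arithmetic in this example is easy to reproduce. The sketch below evaluates the point estimate from the displayed formula and also inverts the $\sqrt{(n/3)\log N}$ bound; both are only meaningful for distances below $n/2$:

```python
import math

def estimate_N(n, dist):
    # point estimate: N ≈ exp(-(2/n) * delta * |delta|), delta = dist - n/2
    delta = dist - n / 2
    return math.exp(-(2 / n) * delta * abs(delta))

def estimate_N_alt(n, dist):
    # inverting dist = n/2 - sqrt((n/3) * ln N) instead
    delta = dist - n / 2
    return math.exp((3 / n) * delta * delta)

# observed minimum distance 100 with n = 256
print(estimate_N(256, 100), estimate_N_alt(256, 100))
```

These evaluate to roughly $457$ and $9774$, matching the $458$ and $9775$ quoted above up to rounding.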

So, what is all this?

The intention is to create a simplistic protocol for asserting that shared data actually is retrievable, based on a probabilistic model. This could serve as a basis for shared storage, in which segments are distributed. Even if the storer does not have access to the segments it has distributed, it can use the protocol to verify that it can retrieve them without actually doing so.

# Project Tuffe Part 3

Throttle design

The paradigm is minimalistic design: no unnecessary parts, just a handle that moves in two directions. Forward (left) is 5 V, up is neutral (2.4–2.6 V), and back is 0 V. The lever controls the 5 kΩ potentiometer via a 2:1 cog gear. The intention is to control the force needed to change the thrust, but also to move the applied force from the mechanical parts of the potentiometer to the handle.

Concept with power indicator

Base mount

The mount is being made from thick stainless steel and is designed to allow for a few degrees of freedom. An elastic axial connection is used to reduce vibration. It is modular and contains three movable plates: one for the axle, one for the motor mount, and one for the electronics mount.

Electrical parts

Electronics and wiring that are complete:

• Battery monitor
• Key switch
• High-voltage circuit with contactor

Electronics that remain:

• Speedometer
• Temperature control
• RPM monitor

Components for this have been ordered on eBay. I will be using a u-blox GPS module to get the position, from which speed can be derived. Temperature is read from the motor via an analog interface (KTY84-130). The RPM is read from a 12 V Hall-effect sensor (the signal will be scaled down to 5 V with a resistor bridge and a Schottky diode).

Everything will be presented on an OLED display, which I need to protect from moisture somehow. Suggestions are welcome. My initial idea is to use plexiglass on the front and backfill with epoxy.

Meter concept arts

# Project Tuffe Part 2

I have now received all components (except contactor and belt-drive wheels). In the below video, you can see the motor in action.

The battery is an eBike battery with a capacity of 2.4 kWh. At 4 knots, this would ideally drive the boat for roughly 4 hours.

# Writeup for Snurre128

The intended solution for Snurre128 is as follows. We note that the non-linear function is almost linear; the higher-order terms have degree 5.

return v[0] ^ v[1] ^ v[2] ^ v[31] ^ \
v[1]&v[2]&v[3]&v[64]&v[123] ^ \
v[25]&v[31]&v[32]&v[126]


Whenever a higher-order term becomes non-zero, we get non-linear behaviour. Also note that since there are two such terms, they will cancel out with some small probability. We write $f = L(x) + P(x) + Q(x)$. Because the index sets of the non-linear terms are disjoint ($\{1,2,3,64,123\} \cap \{25,31,32,126\} = \emptyset$), $P(x)$ and $Q(x)$ are independent and we get

$\begin{array}{rcl}\mathbb{P}(P(x) + Q(x) = 1) &=& \mathbb{P}(P(x) \neq Q(x))\\ &=& \mathbb{P}(P(x) = 1) + \mathbb{P}(Q(x) = 1) - 2\cdot \mathbb{P}(P(x) = 1 \land Q(x) = 1)\\ &=& 2 \cdot 2^{-5} - 2 \cdot 2^{-10}\\& =& \frac{31}{512} \end{array}$
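This probability is easy to confirm empirically. The Monte Carlo sketch below assumes, as the calculation above does, that both non-linear terms are products of five independent uniform bits:

```python
import random

random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    P = all(random.getrandbits(1) for _ in range(5))  # degree-5 term
    Q = all(random.getrandbits(1) for _ in range(5))  # degree-5 term
    hits += P ^ Q  # 1 whenever exactly one term is non-zero

print(hits / trials, 31 / 512)  # empirical vs exact ≈ 0.0605
```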

Let $k$ be the dimension of the state and $n$ be the length of the keystream. Note that there exists a big-ass matrix $\mathbf{G} \in \mathbb{F}_2^{k\times n}$ such that

$x\mathbf{G} + e = v \in \mathbb{F}_2^n,$

where $e$ is governed by the non-linearity of $f$. The expected Hamming weight of $e$ is

$n \cdot \mathbb{P}(P(x) + Q(x) = 1).$

It is possible to solve this using a Walsh transform, but it is rather expensive, requiring $\mathcal{O}(k \cdot 2^k)$ computation. Employing BKW significantly reduces the complexity by transforming the problem into one of smaller dimension, at the expense of a higher error rate.

Another solution is to treat this as a decoding problem over a random code. There is support for so-called information-set decoding in Sage.

from sage.coding.information_set_decoder import LeeBrickellISDAlgorithm


First we generate the generator matrix from the cipher, according to the above.

# define a code from big-ass matrix
C = codes.LinearCode(G.transpose())


The keystream is a vector in $\mathbb{F}_2^n$:

v = vector(GF(2), [0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 
1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 
1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1])


Using Lee-Brickell’s algorithm (a simple, but not the most efficient, algorithm available today), we can now decode:

num_errors = 118
ISD = LeeBrickellISDAlgorithm(C, (num_errors, num_errors))
ISD.calibrate()
c = ISD.decode(v)
# for the record, this is the error vector
e = v + c


Once this is complete, we can solve a set of linear equations to unravel the state.

Also, consider reading this writeup by @hellman 🙂