SEC-T CTF – likeason

At this year's SEC-T CTF, I made three challenges. One of the challenges, likeason[1], was given as follows:

#! /usr/bin/env python
import sys

from secret import d, n

def oracle(c, x=1):
    res = bin(pow(c, d * x, n)).count('1') & 0x1
    return int(res)

while True:
    try:
        c = [int(x) for x in sys.stdin.readline().split(" ")]
        if len(c) == 1:
            sys.stdout.write("%d\n" % oracle(c[0]))
        else:
            sys.stdout.write("%d\n" % oracle(c[0], x=c[1]))
    except Exception:
        sys.stdout.write("Nope.\n")

    sys.stdout.flush()

# e = 0x10001
# c = 32028315366572316530941187471534975579021238700122793819215559206747120150118490538115208229905399038122261293920013020124257186389163654867911967899754432511568776857320594304655042217535057811315461534790485879395513727264354857833013662037825295017832478478693519684813603327210332854320793948700855663229

As we can see, the oracle returns the parity of the bits in the decrypted plaintext. We also know that the plaintext contained only {" ", ".", "-"}, followed by some random padding. A specific format is not actually needed, but it makes the challenge easier. The reader may notice that it has some resemblance to the Bleichenbacher oracle, but an easier way is to treat it as a regular (but unreliable) LSB (or MSB) oracle. We observe that for any given query

\mathbb{P}(\text{overflow}~|~\text{parity flip}) = 1

but we also have that

\mathbb{P}(\text{overflow}~|~\text{no parity flip}) = 1/2.

We may treat these as independent random events, although this is not completely true.

Finding the modulus

First we need to find the modulus n. This is quite simple. Pick a number k and query the decryption of (2^k)^e by sending the pair 2**k e to the oracle (which then computes (2^k)^{de} = 2^k \bmod n). We know that the parity is always 1 if 2^k < n. However, when 2^k \geq n and 2^{k-1} < n, the decryption yields 2^k - n. This may cause a parity flip, but not necessarily. So, what can we do about that? Randomization!


Instead, we can send 2^k + r, where r is a random number that is sufficiently small (here, r < \sqrt{2^{k}} works) not to cause an overflow when 2^k < n. Alternatively, we can use a single bit, i.e., r = 2^j for some j < k. In this manner, we can send queries and detect the parity changes w.h.p.
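As a concrete illustration, here is a minimal sketch of how n can be recovered, assuming a helper oracle_query(c, x) that sends the pair "c x" to the service and returns the parity bit. The randomization range and the binary-search refinement are illustrative, not the exact solver used for the CTF; note that r is subtracted rather than added, so that a parity mismatch can only be caused by a reduction modulo n.

import random

E = 0x10001

def parity(v):
    return bin(v).count('1') & 1

def overflows(t, oracle_query, trials=40):
    # Query t - r for small random r. If t < n there is never a reduction,
    # so the observed parity always equals parity(t - r); a single mismatch
    # therefore proves that t >= n.
    for _ in range(trials):
        r = random.randrange(1, min(1 << 16, t))
        if oracle_query(t - r, E) != parity(t - r):
            return True
    return False

def find_n(oracle_query):
    # Step 1: find k such that 2^(k-1) < n <= 2^k.
    k = 1
    while not overflows(1 << k, oracle_query):
        k += 1
    # Step 2: binary search for n in (2^(k-1), 2^k]. The test above can be
    # fooled when t lies within the randomization range of n, so the last
    # few candidates should be re-checked with smaller r (omitted here).
    lo, hi = 1 << (k - 1), 1 << k
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if overflows(mid, oracle_query):
            hi = mid
        else:
            lo = mid
    return hi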

Finding the plaintext using known plaintext properties

An efficient way of determining the plaintext is as follows:

First, build a set of parities from sufficiently many queries of the form \{ \mathcal{O}(c \cdot (2^e)^i) : 0 \leq i \leq M \}, for some integer M. This is mostly for caching purposes, so that no redundant calls need to be made.

Define an upper bound U and a lower bound L on the plaintext m. Then, build a search tree using the ideas from attacking LSB (or MSB) oracles. If we observe a parity flip from the previous step, then we record a new lower bound L \leftarrow \frac12(U + L) in its single child node. On the other hand, if there is no flip from the previous step we cannot be sure (which is the same problem as before, when determining n). Define the parity at step i to be b_i. Then, the recursion is as follows:

f(i, U, L) = \begin{cases} f(i+1, U, \frac12(U + L)) & \text{if } b_i \neq b_{i-1},\\ f(i+1, U, \frac12(U + L)),\; f(i+1, \frac12(U + L), L) & \text{if } b_i = b_{i-1}. \end{cases}

So, to be certain, we need to investigate both paths (i.e., assuming an overflow and updating the lower bound, and assuming no overflow, hence setting a new upper bound U \leftarrow \frac12(U + L)). We can do so, and prune paths by using the fact that the bytes for which the upper bound and lower bound agree must be in the alphabet of the message (see above). This was the intended solution, and probably the easiest to find.
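A rough sketch of the pruned search could look as follows, assuming the modulus n has been recovered, that parities[i] caches the oracle answer for c \cdot (2^e)^i, and that the approximate length of the Morse-encoded part (msg_len) is known or guessed; the helper names are illustrative.

ALPHABET = set(b' .-')

def plausible(U, L, nbytes, msg_len):
    # Any byte of the message part that is already fixed (identical in the
    # upper and lower bound) must belong to the Morse alphabet.
    ub = U.to_bytes(nbytes, 'big')
    lb = L.to_bytes(nbytes, 'big')
    for j in range(msg_len):
        if ub[j] != lb[j]:
            break                          # first undecided byte: stop checking
        if ub[j] not in ALPHABET:
            return False
    return True

def search(parities, n, nbytes, msg_len):
    stack = [(1, n, 0)]                    # (step i, upper bound U, lower bound L)
    while stack:
        i, U, L = stack.pop()
        if i == len(parities):
            yield (U + L) // 2             # candidate plaintext
            continue
        mid = (U + L) // 2
        if parities[i] != parities[i - 1]:
            branches = [(U, mid)]          # parity flip: overflow is certain
        else:
            branches = [(U, mid), (mid, L)]  # no flip: both cases possible
        for U2, L2 in branches:
            if plausible(U2, L2, nbytes, msg_len):
                stack.append((i + 1, U2, L2))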

Finding the plaintext using randomization

Will be added when I have time to describe it in detail.

[1] An anagram of snakeoil, in honor of Crown Sterling.

 


Finding magic hashes with hashcat

During the last few months, the problem of finding magic hashes for various hash functions has had a renaissance. After this tweet, the over-a-decade-old problem received new attention from a few people in the security community and, as a consequence, a flurry of new magic hashes were found.

Magic hashes have their roots in PHP. In particular, it all revolves around the improper use of the equality operator ==. PHP uses implicit type juggling: if a string has the format 0e[digits], it is interpreted as the scientific notation for 0 \times 10^{\texttt{[digits]}}. This has an undesired side effect: when comparing password hashes, there is a small but non-negligible subset of hashes that all compare equal under the == operator.

The probability of a hash being magical is easy to compute if we assume that the hash function behaves completely at random. Let H : A \rightarrow B be a hash function with an l-hexdigit output. Counting only the magic hashes that start with 0e..., we have

\mathbb{P}(H(x) \text{ is magical}) = \left(\frac{1}{10}\right)^2 \cdot \left(\frac{10}{16}\right)^l

for an x randomly selected over all strings of any length.


The inverse probability amounts to the number of trials (or hash-function oracle calls) required on average to find a magic hash. For instance, SHA-256, which has an output length of 64 hex digits, corresponds to about 2^{50} calls.
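As a quick sanity check of the estimate (56 and 64 are the output lengths in hex digits of SHA-224 and SHA-256):

from math import log2

def expected_calls(l):
    # inverse of (1/10)^2 * (10/16)^l
    return 100 * (16 / 10) ** l

print(log2(expected_calls(56)))   # SHA-224: about 2^44.6
print(log2(expected_calls(64)))   # SHA-256: about 2^50.0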

Modifying hashcat kernels

We will take SHA-224 as an example, since it is a bit easier to find magic hashes for. First, we are going to define two masks:

The first one returns 0 if the hex representation contains nothing but decimal digits:

#define  IS_MAGIC(x)    ((x & 0x88888888) & (((x & 0x22222222) >> 1 | (x & 0x44444444) >> 2) & ((x & 0x88888888) >> 3)) << 3)

The second one does the same as the first, but it forces the first two hex digits to 0e.

#define  IS_MAGIC_H(x)  ((x & 0x00888888) & (((x & 0x00222222) >> 1 | (x & 0x00444444) >> 2) & ((x & 0x00888888) >> 3)) << 3) | ((x & 0xff000000) ^ 0x0e000000)
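Before wiring the masks into the kernel, the bit logic can be sanity-checked with a few lines of Python (the test words below are made up):

def is_magic(x):
    # non-zero iff some nibble of the 32-bit word is a..f, i.e., not a decimal digit
    return (x & 0x88888888) & ((((x & 0x22222222) >> 1 | (x & 0x44444444) >> 2) & ((x & 0x88888888) >> 3)) << 3)

def is_magic_h(x):
    # as above for the low six nibbles, plus the requirement that the top byte is 0x0e
    return ((x & 0x00888888) & ((((x & 0x00222222) >> 1 | (x & 0x00444444) >> 2) & ((x & 0x00888888) >> 3)) << 3)) | ((x & 0xff000000) ^ 0x0e000000)

assert is_magic(0x12345678) == 0      # decimal digits only
assert is_magic(0x1234a678) != 0      # contains the hex digit a
assert is_magic_h(0x0e615700) == 0    # starts with 0e, rest decimal
assert is_magic_h(0xfe615700) != 0    # wrong prefix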

We add the two macros to inc_hash_sha224.h. Then, we locate the function sha224_final_vector in inc_hash_sha224.cl.

DECLSPEC void sha224_final_vector (sha224_ctx_vector_t *ctx)
{
  MAYBE_VOLATILE const int pos = ctx->len & 63;  
  append_0x80_4x4 (ctx->w0, ctx->w1, ctx->w2, ctx->w3, pos ^ 3);  
  if (pos >= 56)
  {
     sha224_transform_vector (ctx->w0, ctx->w1, ctx->w2, ctx->w3, ctx->h);
     ctx->w0[0] = 0;
     ctx->w0[1] = 0;
     ctx->w0[2] = 0;
     ctx->w0[3] = 0;
     ctx->w1[0] = 0;
     ctx->w1[1] = 0;
     ctx->w1[2] = 0;
     ctx->w1[3] = 0;
     ctx->w2[0] = 0;
     ctx->w2[1] = 0;
     ctx->w2[2] = 0;
     ctx->w2[3] = 0;
     ctx->w3[0] = 0;
     ctx->w3[1] = 0;
     ctx->w3[2] = 0;
     ctx->w3[3] = 0;
  }  
  ctx->w3[2] = 0;
  ctx->w3[3] = ctx->len * 8;  
  sha224_transform_vector (ctx->w0, ctx->w1, ctx->w2, ctx->w3, ctx->h);

  // Add these lines
  ctx->h[0] = IS_MAGIC_H(ctx->h[0]); // first block starts with 0e
  ctx->h[1] = IS_MAGIC(ctx->h[1]);
  ctx->h[2] = IS_MAGIC(ctx->h[2]);
  ctx->h[3] = IS_MAGIC(ctx->h[3]);
  ctx->h[4] = IS_MAGIC(ctx->h[4]);
  ctx->h[5] = IS_MAGIC(ctx->h[5]);
  ctx->h[6] = IS_MAGIC(ctx->h[6]);
}

Finally, we need to modify the corresponding module m01300_a3_pure.cl. More specifically, we modify the following function:

KERNEL_FQ void m01300_sxx (KERN_ATTR_VECTOR ())
{
  /**
   * modifier
   */

  const u64 lid = get_local_id (0);
  const u64 gid = get_global_id (0);

  if (gid >= gid_max) return;

  /**
   * digest
   */

  const u32 search[4] =
  {
    digests_buf[digests_offset].digest_buf[DGST_R0],
    digests_buf[digests_offset].digest_buf[DGST_R1],
    digests_buf[digests_offset].digest_buf[DGST_R2],
    digests_buf[digests_offset].digest_buf[DGST_R3]
  };

  /**
   * base
   */

  const u32 pw_len = pws[gid].pw_len;

  u32x w[64] = { 0 };

  for (u32 i = 0, idx = 0; i < pw_len; i += 4, idx += 1)
  {
    w[idx] = pws[gid].i[idx];
  }

  /**
   * loop
   */

  u32x w0l = w[0];

  for (u32 il_pos = 0; il_pos < il_cnt; il_pos += VECT_SIZE)
  {
    const u32x w0r = words_buf_r[il_pos / VECT_SIZE];

    const u32x w0 = w0l | w0r;

    w[0] = w0;

    sha224_ctx_vector_t ctx;

    sha224_init_vector (&ctx);

    sha224_update_vector (&ctx, w, pw_len);

    sha224_final_vector (&ctx);

    const u32x r0 = ctx.h[0];
    const u32x r1 = ctx.h[1];
    const u32x r2 = ctx.h[2];
    const u32x r3 = ctx.h[3];
    const u32x r4 = ctx.h[4];
    const u32x r5 = ctx.h[5];
    const u32x r6 = ctx.h[6];
    
    // this is one hacky solution
    if ((r4 == 0) && (r5 == 0) && (r6 == 0)) {
       COMPARE_S_SIMD (r0, r1, r2, r3);
    }
  }
}

This assumes no vectorization. Now build hashcat. Since all magic hashes will be mapped to the all-zero sequence, we run it as follows:

$ ./hashcat.bin -m 1300 00000000000000000000000000000000000000000000000000000000 -a 3 --self-test-disable --keep-guessing

The --keep-guessing flag causes hashcat to continue looking for hashes, even after one has been found. The --self-test-disable flag is needed since we have modified how the SHA-224 kernel behaves, and the test cases defined in hashcat would consequently fail.

Running it on AWS

TBA?

Results

Relatively quickly, we found several magic hashes using this modification for SHA-224:

noskba6h0:0e615700362721473273572994672194243561543298826708511055
Ssrodxcm0:0e490606683681835610577024835460055379837761934700306599
i04i19pjb:0e487547019477070898434358527128478302010609538219168998
Fuq0guvec:0e469685182044169758492939426426028878580665828076076227
Sjv2n8fjg:0e353928003962053988403389507631422927454987073208369549
L0gg4bvt5:0e730381844465899091531741380493299916620289188395999379
7v2k2to5o:0e676632093970918870688436761599383423043605011248525140

For SHA-256:

Sol7trnk00:0e57289584033733351592613162328254589214408593566331187698889096

For a more comprehensive list of hashes, refer to this Github repo.

Thanks to @0xb0bb for providing crucial help with computational resources.

On the hardness of finding minimal-weight codewords in a QC-MDPC code


Problem 1  (Knapsack)

We want to maximize the value of a subset of items such that the weight of the set is no larger than weight W.

\max \sum_{i \in \mathcal{S}} v_i x_i \quad \text{subject to} \quad \sum_{i \in \mathcal{S}} w_i x_i \leq W


Let us first study another problem. As we will see below, it is at least as hard as Problem 1.


Problem 2  (Auxiliary problem)

\max\; x_1x_2x_3 + x_2x_5 + x_4x_6x_7 + \cdots
\text{subject to}\quad \left\|\begin{matrix} x_1 & x_2 & \cdots & x_k \end{matrix}\right\|_1 = m, \quad x_i \in \{0, 1\}


We can solve Problem 1 using an oracle for Problem 2. For a knapsack item of weight w_i and value v_i, we create v_i identical terms x_1x_2\cdots x_{w_i}, where the variables are fresh (unique) for every item. This is done for every item in the knapsack, with m playing the role of the weight budget W. So, Problem 2 is NP-hard.
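A small sketch of this reduction, where terms are represented as tuples of (fresh) variable indices:

def knapsack_to_problem2(items):
    # items: list of (weight w_i, value v_i). Each item becomes v_i identical
    # terms, each a product of w_i fresh 0/1 variables; selecting the item
    # corresponds to setting all of its variables to 1.
    terms = []
    next_var = 0
    for w, v in items:
        block = tuple(range(next_var, next_var + w))
        next_var += w
        terms.extend([block] * v)
    return terms, next_var

# Example: two items (w, v) = (2, 3) and (3, 1); the weight budget W becomes
# the constraint m on the number of ones.
terms, k = knapsack_to_problem2([(2, 3), (3, 1)])
# terms == [(0, 1), (0, 1), (0, 1), (2, 3, 4)]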

Now, we want to build some theory towards polynomials. Consider the following construction. Assign to each x_i a corresponding monomial (x - a_i). Now assume that we want to represent the term x_1 x_2 x_3. Define p(x) = x^p + 1 \in \mathbb{F}_q[x], where p is a prime number, and write it as a product \prod_{i=1}^{p}(x-a_i).

We map the term x_1 x_2 x_3 to the polynomial

Q_1(x) = p(x) \cdot \left[(x-a_1)(x-a_2)(x-a_3)\right]^{-1}

This means that Q_1(x) \cdot (x-a_1)(x-a_2)(x-a_3) \equiv 0 \bmod p(x). To consider all terms jointly, pick a large number d. Then, we create the polynomials for the remaining terms in the same way: Q_2(x) = p(x) \cdot \left[(x-a_2)(x-a_5)\right]^{-1}, Q_3(x) = p(x) \cdot \left[(x-a_4)(x-a_6)(x-a_7)\right]^{-1}, and so on. We assume there are n terms.

Then, the polynomial Q(x) is constructed as Q(x) = \sum_{i=1}^n x^{di} \cdot Q_i(x). Since d is large, the terms of x^{di} Q_i(x) and x^{d(i+1)} Q_{i+1}(x) do not interfere.
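A tiny sanity check of this construction over \mathbb{F}_{11} with p(x) = x^5 + 1, which happens to split into linear factors over that field; the roots and the chosen term are only illustrative.

from sympy import symbols, Poly, prod

x = symbols('x')
q = 11
roots = [2, 6, 7, 8, 10]                       # x^5 + 1 = prod (x - a_i) over F_11
p_x = Poly(prod(x - a for a in roots), x, modulus=q)
assert p_x == Poly(x**5 + 1, x, modulus=q)

# Q_1(x) = p(x) * [(x - a_1)(x - a_2)(x - a_3)]^(-1); since p(x) is the product
# of all the linear factors, this is simply the product of the remaining ones.
term = [2, 6, 7]
Q1 = Poly(prod(x - a for a in roots if a not in term), x, modulus=q)

# The defining property: Q_1(x) * (x - a_1)(x - a_2)(x - a_3) = p(x) = 0 mod p(x).
assert Q1 * Poly(prod(x - a for a in term), x, modulus=q) == p_x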

We now turn this problem into an instance of finding a low-weight codeword in a QC-MDPC code.


Problem 3  (Low-weight codeword of QC-MDPC code)

Let G be the generator of a QC-MDPC code. Find a non-zero codeword with minimal weight, i.e., minimize \| \mathbf u G \|_1.


There is a bijective mapping from a quasi-cyclic code to a polynomial, i.e., G(x) generates a code as well. So, Q(x) generates a quasi-cyclic code. Let us first assume we have access to an oracle for Problem 2.

Indeed, an optimal solution to Problem 2 gives rise to a low-weight codeword in the corresponding code. Assume that the optimum is x_1x_2x_3 + x_2x_5 + x_4x_6x_7 + \cdots = b, attained by an assignment with m variables set to 1, and take u(x) to be the product of the monomials (x - a_i) for which x_i = 1. Then u(x) is of degree m and \|u(x) G(x)\|_1 \leq p(n-b), i.e., u(x) G(x) has at most p(n-b) non-zero coefficients.

Using an oracle for Problem 3, we can thus find a minimal-weight codeword for the generator Q(x), which in turn corresponds to an optimal solution of Problem 2.

Experiments with index calculus

In index calculus, the main problem is to represent powers of g in a predefined prime-number basis. We are interested in recovering x from h = g^x \bmod p.

Normal approach

First, we find some offset hg^j = g^{x+j} \bmod p such that the factorization of hg^j \bmod p is B-smooth.

Using the prime-number basis (-1)^{e_1}2^{e_2}\cdots, generate a corresponding vector

\mathbf y = \begin{pmatrix}1 & 0 & 3 & 0 & \cdots & 1\end{pmatrix}

Then, we generate powers g^1, g^2, \ldots and check if they have the same property. The results are put into a matrix, after which one performs Gaussian elimination over \mathbb{Z}_{p-1}.

A | \mathbf v^T = \left( \begin{array}{cccccc|c} 0 & 3 & 2 & 2 & \cdots & 1 & \mathbf{13} \\ 1 & 0 & 2 & 1 & \cdots & 0 & \mathbf{112} \\ 0 & 5 & 2 & 0 & \cdots & 0 & \mathbf{127} \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ 1 & 1 & 0 & 0 & \cdots & 0 & \mathbf{67114} \end{array}\right)

Solve \mathbf{x} A = \mathbf{y}. Then compute \mathbf{x} \cdot \mathbf{v} to find x+j.
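A toy sketch of the relation-collection step, using sympy for factoring; the linear algebra over \mathbb{Z}_{p-1} is left out, and the -1 element of the basis is ignored for simplicity.

from sympy import factorint, primerange

def exponent_vector(value, base):
    # Exponent vector of value over the factor base, or None if not B-smooth.
    f = factorint(value)
    if any(q not in base for q in f):
        return None
    return [f.get(q, 0) for q in base]

def collect_relations(g, p, B, count):
    # Find pairs (k, vector) such that g^k mod p is B-smooth; each vector
    # becomes a row of A and k the corresponding entry of v.
    base = list(primerange(2, B + 1))
    relations = []
    k = 0
    while len(relations) < count:
        k += 1
        vec = exponent_vector(pow(g, k, p), base)
        if vec is not None:
            relations.append((k, vec))
    return base, relations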

Different approach

Almost as in the normal approach, we find some offset hg^j = g^{x+j} \bmod p, but now we only require that the B-smooth part of hg^j \bmod p is greater than \sqrt{p}.

Again, using the prime-number basis (-1)^{e_1}2^{e_2}\cdots, generate a corresponding vector

\mathbf y | \Delta = \left(\begin{array}{cccccc|c}0 & 2 & 5 & 1 & \cdots & 0 & \Delta\end{array} \right)

The vector is chosen such that \Delta corresponds to the product of primes not represented in the chosen basis. In other words, (-1)^0 \cdot 2^2 \cdot 3^5 \cdot 7^1 \cdots > \sqrt{p} for the vector above, or equivalently, \Delta < \sqrt p.

Again, we generate powers g^1, g^2, \ldots and check whether they have the same property as above; the resulting system is then solved as in the normal approach.

A | \mathbf v^T | \mathbf d^T = \left( \begin{array}{cccccc|c|c} 1 & 0 & 3 & 0 & \cdots & 0 & \mathbf{5} & \delta_1 \\ 0 & 3 & 2 & 2 & \cdots & 1 & \mathbf{13}& \delta_2 \\ 0 & 1 & 2 & 1 & \cdots & 0 & \mathbf{65}& \delta_3 \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots\\ 0 & 2 & 0 & 0 & \cdots & 0 & \mathbf{13121}& \delta_B \end{array}\right)

Find \mathbf{x} such that \mathbf{x} A = \mathbf{y}. Then compute \mathbf{x} \cdot \mathbf{v} to find x+j. There is a catch here: the result will be correct if and only if

\prod \delta_i ^{x_i} \bmod p = \Delta.

It remains to see if this actually improves over the normal approach.

Tests with Sage.

Suggestion for proof of retrievability

(Originally posted here)

Scenario

In this scenario, A and B want to distribute segments of their data (preferably encrypted) to each other. We do not care about coverage; we are trying to maximize the amount of remotely stored data. The game is essentially that any peer will try to minimize its own storage requirements and thus maximize those of other peers.

Protocol

Two peers A and B will repeatedly negotiate storage space according to the following protocol, where A is the prover and B the verifier. Let S(B,A) be the set of segments owned by B and stored by A. Moreover, let s \in S(B,A) be an arbitrary segment and let \text{sig}(s) denote the signature of s.

1. B picks a random string r of length n bits (equal to the length of the signature).
2. B asks A to send the closest segment from S(B,A), i.e., the s such that \| r - \text{sig}(s) \|_1 is minimized.
3. B verifies the segment s via the signature. Then, using the distance \| r - \text{sig}(s) \|_1, B can estimate the number of segments stored by A.

Repeat until the variance is low enough. The procedure is performed both ways.

Approximating \delta

Intuitively, we can think of S(B,A) as a random (non-linear) code \mathcal{C} over \mathbb{F}_2^n and the task is to find a minimum-weight codeword in \mathcal{C} + r = \{ c + r | c \in \mathcal C\}. Then N corresponds to the number of codewords in \mathcal{C}.

Assume that A has stored N segments. Without loss of generality, we can assume that r is the all-zero string, hence transforming the problem into finding the minimum of N binomial random variables. Let X_1, X_2, \dots, X_N \sim \textsf{Bin}(n, \frac12) and Z := \min(X_1, X_2, \dots, X_N). Then with high probability and for larger values of N,

\frac n2 - \sqrt{\frac n3 \log N} \geq E(Z) \geq \frac n2 - \sqrt{\frac n2 \log N}.

The above can be obtained via a Gaussian approximation and the standard expression for the expected minimum of Gaussian random variables. We omit the proof here. Of course, this can be pre-computed for reasonable values of N.
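A quick Monte Carlo check of the bounds (n = 256 and N = 2^{10} are arbitrary choices here):

import random
from math import log, sqrt

n, N, trials = 256, 2**10, 200
z_sum = 0
for _ in range(trials):
    z_sum += min(bin(random.getrandbits(n)).count('1') for _ in range(N))
print(z_sum / trials)                                        # empirical E(Z)
print(n/2 - sqrt(n/3 * log(N)), n/2 - sqrt(n/2 * log(N)))    # the two bounds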

Assume there exists a peer A with infinite computing power that tries to minimize its required storage for a given minimum distance d. Done optimally, A will pick equidistant points in the space \mathbb{F}_2^n. This corresponds to a perfect error-correcting code with minimum distance d. The covering-coding bound states that d \leq n-k. For a perfect code, which has minimal size, we have d = n-k. Therefore, the size of the code is |\mathcal C| = 2^{n-d}, which is also the number of elements A needs to store.

A glaring problem is that A cannot pick points arbitrarily, since the points must be valid signatures. The best A can do is to decide upon a perfect code \mathcal{C} (or pick one that is closest to the current set of points, which is a very hard problem). Then, A will look for points v and v' that map to the same codeword c \in \mathcal{C} and discard the one that is farthest away from c. For larger n and smaller d, it is not realistic to achieve the covering-coding bound, since in reality there are not that many shared segments. So, even given infinite computation, A will not be able to sample points until a perfect code is achieved.

The covering-coding bound serves as a crude estimation on how to set the parameters.

Simulating

In Figure 1, we see that the upper bound on the expected minimum distance is a good approximation for values of N larger than 2^7, with n = 256.

[Figure 1]

Assume that A sends a segment s along with its signature. Let

\delta(r,s) = \|r - \text{sig}(s)\|_1 - \frac n2.

B can now use whichever bound it chooses. Assume that \delta(r,s) = 100 - \frac{256}{2}. Then, using the expression

N \approx \exp\left(-\frac 2n \cdot \delta(r,s) \cdot \text{abs}\left(\delta(r,s)\right) \right)

B determines that, with high probability, A has at least 458 segments. Using the lower bound, A has at least 9775 segments.
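For reference, the two numbers above can be reproduced directly from the bounds (rounding up):

from math import exp

n = 256
t = n / 2 - 100              # deviation of the observed distance from n/2

print(exp(2 * t * t / n))    # about 457, from the sqrt(n/2 * log N) bound -> at least 458
print(exp(3 * t * t / n))    # about 9774, from the sqrt(n/3 * log N) bound -> at least 9775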

So, what is all this?

The intention is to create a simplistic protocol for asserting that shared data actually is retrievable, based on a probabilistic model. This could serve as a basis for shared storage, in which segments are distributed. Even if the storer does not have access to the segments it has distributed, it can use the protocol to verify that it can retrieve them without actually doing so.

Project Tuffe Part 3

Throttle design

The paradigm is minimalistic design: no unnecessary parts, just a handle that moves in two directions. Forward (left) is 5 V, up is neutral (2.4-2.6 V) and back is 0 V. The lever controls the 5 kΩ potentiometer via a 2:1 cog gear. The intention is to be able to control the force needed to change the thrust, but also to move the applied force from the mechanical parts of the potentiometer to the handle.

[Image: throttle]

Concept with power indicator

[Image: thr_pwr]

Base mount

The mount is being made from thick stainless steel and is designed to allow for a few degrees of freedom. An elastic axial connection is used to reduce vibration. It is modular and contains three movable plates: one for the axis, one for the motor mount and one for the electronics mount.

[Image: base1]

Electrical parts

Electronics and wiring that are complete:

  • Battery monitor
  • Key switch
  • High-voltage circuit with contactor

Electronics that remain:

  • Speedometer
  • Temperature control
  • RPM monitor

[Image: arduino]

Components for this have been ordered on eBay. I will be using a u-blox GPS module to get the position, from which speed can be derived. Temperature is read from the motor via an analog interface (KTY84-130). The RPM is read from a 12 V Hall-effect sensor (this will be scaled down to 5 V with a resistor bridge and a Schottky diode).
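As a quick illustration of the scaling, a plain voltage divider already does the job; the resistor values below are hypothetical, not the ones in the build.

r1, r2 = 15e3, 10e3              # series resistor and resistor to ground
v_out = 12.0 * r2 / (r1 + r2)
print(v_out)                     # 4.8 V, safely below the 5 V input limit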

[Image: bridge]

Everything will be presented on an OLED display, which I need to protect from moisture somehow. Suggestions are welcome. My initial idea is to use plexiglass on the front and backfill with epoxy.

Meter concept arts

[Image: meter2]

[Image: pwrmeter]