Berlekamp-Massey Algorithm Explained

The Berlekamp-Massey algorithm (abbreviated the BM algorithm) solves the following problem:

Given a sequence s = s_1, s_2, \cdots, s_l of length l, find a shortest LFSR that produces the sequence s in its first l symbols.

For instance, assuming a binary sequence, s^8 = 10011101 (the 8 denotes the length of the sequence) is produced by an LFSR with connection polynomial C(D) = 1 + D + D^3 over \mathbb{F}_2[D]. This is the shortest length of an LFSR that reproduces the sequence. We should note here that we completely ignore what the LFSR produces after the 8th symbol – it is not important – we only care about the first 8 symbols.
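
As a quick sanity check (a small Python sketch of my own, not part of the original post), simulating a Fibonacci-style LFSR with connection polynomial C(D) = 1 + D + D^3 and starting state 1, 0, 0 indeed reproduces 10011101:

    def lfsr_f2(taps, state, length):
        """Simulate a binary (Fibonacci) LFSR.

        taps   -- exponents k with a nonzero coefficient in C(D) (k >= 1);
                  C(D) = 1 + D + D^3 corresponds to taps = [1, 3]
        state  -- the starting symbols s_1, ..., s_L
        length -- number of symbols to produce
        """
        s = list(state)
        while len(s) < length:
            # recursion given by the connection polynomial: s_j = sum_k s_{j-k} (mod 2)
            s.append(sum(s[-k] for k in taps) % 2)
        return s[:length]

    print(lfsr_f2([1, 3], [1, 0, 0], 8))   # [1, 0, 0, 1, 1, 1, 0, 1] = 10011101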

The main idea of the BM algorithm is that we “make a guess” about which LFSR we have. We then simulate the LFSR up to symbol i and check whether symbol i is correct – if it is not, we alter the connection polynomial of the LFSR so that it produces the correct symbol. A very important remark is that our correction must not cause previous symbols to become incorrect, because then we would have to start all over again. This makes the BM algorithm a so-called exact greedy algorithm (it produces an optimal solution that, however, need not be unique unless the length of the solution is at most half the length of the sequence).

In essence, we can summarize the algorithm into a single for-loop:

\blacksquare ~ \textnormal{\textbf{Algorithm} \emph{BM algorithm}}
Input: sequence s

  1. \mathbf{for}~i=1~\textnormal{to}~l
    1. Simulate the LFSR up to symbol i. Denote the resulting sequence s'.
    2. \mathbf{if}~s\neq s'~\mathbf{then} correct the LFSR so that they agree.

Let us focus on 1.2. The false case of the if-statement is pretty obvious – if the LFSR works, do not fix it.

First of all, note the following: each symbol in a sequence produced by an LFSR must fulfill a linear relation with some other symbols in the sequence. This linear relation is given by the connection polynomial.

Consider the LFSR given by C(D) = 1 + D + D^3 from the example above. Each symbol must fulfill the equation s'_j = s'_{j-1} + s'_{j-3} \iff s'_j + s'_{j-1} + s'_{j-3} = 0, except for the symbols that form the starting state of the LFSR. From this, we can define the discrepancy function \delta_j = s_{j} + s_{j-1} + s_{j-3}, where s_j = 0 for all j < 1.
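
To make this concrete, the following Python snippet (my own illustration) evaluates \delta_j for the example sequence 10011101 and confirms that every discrepancy beyond the starting state is zero:

    s = [1, 0, 0, 1, 1, 1, 0, 1]             # s_1, ..., s_8

    def sym(j):
        """Return s_j, with the convention s_j = 0 for j < 1."""
        return s[j - 1] if j >= 1 else 0

    # discrepancy for C(D) = 1 + D + D^3: delta_j = s_j + s_{j-1} + s_{j-3} (mod 2)
    for j in range(4, 9):                    # j = 1, 2, 3 form the starting state
        delta = (sym(j) + sym(j - 1) + sym(j - 3)) % 2
        print(j, delta)                      # delta is 0 for every j = 4, ..., 8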

Assume that we have determined an LFSR that produces the correct sequence up to symbol i-1 but produces the wrong symbol at position i. This means we have a series of discrepancies, namely (\delta_j)_{j=0}^{i} = (0,0,\cdots,0,1). Clearly, the last one needs to be eliminated. How do we do this?

We could, theoretically, change the discrepancy function (and, correspondingly, the connection polynomial) to \delta_j = f(j) + s_{j} + s_{j-1} + s_{j-3}, where f(x) = 1 if x = i and f(x) = 0 otherwise. How do we construct such a function (that is also of the lowest possible degree)?

Assume that we have made such a correction before. This means that at some point we had an index m where the discrepancy series was (\delta'_j)_{j=0}^{m} = (0,0,\cdots,0,1).

Let’s say that we add this series onto (\delta_j)_{j=0}^{i}, but delayed by i-m steps. This means that (\delta_j)_{j=0}^{i} + (\delta'_j)_{j=i-m}^{i} = (0,0,\cdots,0,0). Here, we have found a function, (\delta'_j)_{j=i-m}^{i}, that has the properties required of f(x).

Here’s a catch: we might have had several earlier discrepancies and therefore several equations to choose from. Which one should we use? First note that if we use the discrepancy at index m, we need to delay it i-m steps. Also note that delay translates into degree, and hence into complexity, which we in turn want to keep as low as possible. Therefore, we pick m as large as possible, which, by the induction hypothesis, gives a minimal solution.

Note that by changing the discrepancy function, we change the connection polynomial. Thus,

(\hat{\delta}_j)_{j=0}^{i} = (\delta_j)_{j=0}^{i} + (\delta'_j)_{j=i-m}^{i} \iff \hat{C}(D) = C(D) + D^{i-m}C'(D).

And this is the correction step of the BM algorithm.
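
For reference, here is a minimal Python sketch of the full binary BM algorithm (my own implementation, not code from the post). It keeps the previous polynomial C'(D) and the delay i-m explicitly, applies the correction \hat{C}(D) = C(D) + D^{i-m}C'(D), and uses the standard length update from Massey's formulation whenever the current LFSR is too short:

    def berlekamp_massey_f2(s):
        """Return the coefficients [1, c_1, ..., c_L] of a shortest connection
        polynomial C(D) = 1 + c_1 D + ... + c_L D^L generating the bit list s."""
        C = [1]          # current connection polynomial C(D)
        B = [1]          # copy of C(D) from the last length change (C'(D) in the text)
        L = 0            # current LFSR length
        shift = 1        # the delay i - m used in the correction term D^(i-m) C'(D)
        for i, s_i in enumerate(s):
            # discrepancy: does the current LFSR predict symbol i correctly?
            delta = (s_i + sum(C[k] * s[i - k] for k in range(1, L + 1))) % 2
            if delta == 0:
                shift += 1
                continue
            # correction step: C(D) <- C(D) + D^(i-m) C'(D)
            T = list(C)
            C = C + [0] * (shift + len(B) - len(C))
            for k, b in enumerate(B):
                C[k + shift] = (C[k + shift] + b) % 2
            if 2 * L <= i:
                # the current LFSR is too short, so its length must grow
                L, B, shift = i + 1 - L, T, 1
            else:
                shift += 1
        return C

    print(berlekamp_massey_f2([1, 0, 0, 1, 1, 1, 0, 1]))   # [1, 1, 0, 1] = 1 + D + D^3

Running it on 10011101 returns [1, 1, 0, 1], that is C(D) = 1 + D + D^3 of length 3, matching the example above.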

Note: For the BM algorithm to work over an arbitrary field \mathbb{F}_q, instead of simply adding the previous discrepancy \alpha, we have to scale it so that it cancels the current discrepancy \beta.

The following fixes the problem:

(\hat{\delta}_j)_{j=0}^{i} = (\delta_j)_{j=0}^{i} - \beta\alpha^{-1}(\delta'_j)_{j=i-m}^{i} \iff \hat{C}(D) = C(D) - \alpha^{-1}\beta D^{i-m}C'(D).

Thus, the corrected discrepancy becomes \beta - \beta\alpha^{-1}\alpha = \beta - \beta = 0.
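
Over a prime field \mathbb{F}_p, the same sketch with the scaled correction -\beta\alpha^{-1}D^{i-m}C'(D) might look as follows (again my own illustration; the names alpha and beta and the use of pow(alpha, -1, p) for the field inverse are my choices, not taken from the post):

    def berlekamp_massey_fp(s, p):
        """Shortest connection polynomial for the sequence s over F_p (p prime),
        using the scaled correction C(D) <- C(D) - beta * alpha^(-1) * D^(i-m) * C'(D)."""
        C, B = [1], [1]      # current polynomial C(D) and copy C'(D) from the last length change
        L, shift = 0, 1      # current length and the delay i - m
        alpha = 1            # discrepancy at the last length change
        for i, s_i in enumerate(s):
            beta = (s_i + sum(C[k] * s[i - k] for k in range(1, L + 1))) % p
            if beta == 0:
                shift += 1
                continue
            coef = (beta * pow(alpha, -1, p)) % p
            T = list(C)
            C = C + [0] * (shift + len(B) - len(C))
            for k, b in enumerate(B):
                C[k + shift] = (C[k + shift] - coef * b) % p
            if 2 * L <= i:
                L, B, alpha, shift = i + 1 - L, T, beta, 1
            else:
                shift += 1
        return C

    print(berlekamp_massey_fp([1, 0, 0, 1, 1, 1, 0, 1], 2))   # [1, 1, 0, 1]

For p = 2 the scaling factor \beta\alpha^{-1} is always 1, so this reduces to the binary version above.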

Related: Correlation attacks. See for instance this paper.
