Covert communication over a noisy channel

Consider a situation in which one party, Alice, wants to communicate with another party, Bob, over a discrete memoryless channel with crossover probability p_1, while keeping the probability of detection low in the presence of a warden, Willie, who observes the communication through another discrete memoryless channel with crossover probability p_2. We assume that all parties are equally computationally bounded.


In this setting, we require that p_1 < p_2 for non-zero p_1, p_2. Let \mathbf{u} \in \mathbb{F}_2^l be the sequence of secret bits Alice wants to transmit.

NOTE: Alice and Bob may agree beforehand upon any strategy they desire, but the strategy is also known to Willie.

Communicating with a secret key

If Alice and Bob share a secret key k, they may use it to pick a common encoding, i.e., an error-correcting code. This code is used to encode the message and must of course be able to correct the errors produced by Alice's channel. Assuming that its correction capability lies below the error rate of Willie's channel, he cannot decode. Let N be the length in bits of the publicly transmitted sequence. An established result (the square-root law of covert communication) states that if \sqrt N \cdot \log N key bits are shared between Alice and Bob, they can transmit \sqrt N secret bits without Willie detecting them (with high probability).
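As a quick numeric illustration of the square-root law (a sketch only; the statement above leaves the logarithm base unspecified, so base 2 is assumed here):

```python
import math

def square_root_law(N):
    """For a public sequence of N transmitted bits, the square-root law
    says Alice and Bob need roughly sqrt(N) * log(N) shared key bits to
    send roughly sqrt(N) secret bits undetected (with high probability).
    The base-2 logarithm here is an assumption."""
    secret_bits = math.isqrt(N)                  # ~sqrt(N) covert bits
    key_bits = secret_bits * math.log2(N)        # ~sqrt(N) * log N key bits
    return key_bits, secret_bits

key, secret = square_root_law(1_000_000)
# For N = 10^6: about 1000 secret bits, at the cost of ~19931 key bits
```

Note how the key grows faster than the payload: the scheme spends slightly more than one key bit per covert bit.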

Communicating without a secret key

Now, assume that Alice and Bob do not share a common key. Alice performs the following steps:

  1. Picks a uniformly random vector \mathbf r \in \mathbb{F}_2^l using a cryptographically secure generator.
  2. Computes the scalar product \mathbf r \cdot \mathbf u = k.
  3. Sends the bits \mathbf r \| k over the channel.
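The steps above can be sketched as follows, modelling Alice's channel as a binary symmetric channel. The helper name and the crossover probability p_1 = 1/8 are illustrative, not from the original:

```python
import secrets

def lpn_sample(u, p1_num=1, p1_den=8):
    """One round of the keyless protocol: Alice picks a random vector r,
    computes the parity k = <r, u>, and transmits r || k; the channel
    then flips each bit independently with probability p1 = p1_num/p1_den.
    Bits are plain Python ints; this is a sketch, not production crypto."""
    l = len(u)
    r = [secrets.randbelow(2) for _ in range(l)]       # random vector r
    k = sum(ri * ui for ri, ui in zip(r, u)) % 2       # parity bit <r, u>
    sent = r + [k]                                     # Alice sends r || k
    # Binary symmetric channel: flip each bit with probability p1
    received = [b ^ (secrets.randbelow(p1_den) < p1_num) for b in sent]
    return sent, received
```

Each call produces one noisy parity sample; the protocol repeats this for fresh random vectors until Bob has enough samples to decode.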

Recovering \mathbf u from these transmissions is an instance of the \textsc{Learning Parity with Noise} (LPN) problem, which Bob can solve with the BKW algorithm (or similar) if p_1 is sufficiently low. In particular, if Bob receives

N = 4 \cdot \log 2 \cdot l \cdot \epsilon_1^{-2 (l+1)}

such sequences, or equivalently,

N \cdot l = 4 \cdot \log 2 \cdot l^2 \cdot \epsilon_1^{-2 (l+1)}

bits, he will be able to decode with high probability. Here \epsilon_1 = 1 - 2p_1 denotes the bias of Alice's channel, and we have applied the piling-up lemma, disregarding the fact that some bits in \mathbf{u} are zero and do not contribute. For some probabilities p_1 < p_2 and natural number N, the information remains hidden from Willie. The information rate is determined as follows: the total number of transmitted bits is N = \mathcal{O}(l^2\cdot \epsilon_1^{-2(l+1)}), so

\mathcal{O}(\sqrt N)=\mathcal{O}(l \cdot\epsilon_1^{-l-1})\iff\mathcal{O}(\sqrt N\cdot\epsilon_1^{l+1})=\mathcal{O}(l).

This bound can be improved by increasing the number of parities.
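Returning to Bob's decoding step: for small l, he can skip BKW entirely and decode by exhaustive maximum-likelihood search, picking the candidate that agrees with the most received parities. BKW exists precisely to avoid this 2^l cost for larger l. The parameters in the demo below (l = 4, p_1 = 0.1, 200 samples) are illustrative:

```python
import itertools
import random

def ml_decode(samples, l):
    """Recover the secret u from noisy parity samples (r, b) by trying
    all 2^l candidates and keeping the one matching the most parities.
    Feasible only for small l."""
    best, best_score = None, -1
    for cand in itertools.product([0, 1], repeat=l):
        score = sum(1 for r, b in samples
                    if sum(ri * ci for ri, ci in zip(r, cand)) % 2 == b)
        if score > best_score:
            best, best_score = list(cand), score
    return best

# Toy demo: l = 4 secret bits, crossover probability p1 = 0.1
random.seed(0)
u = [1, 0, 1, 1]
samples = []
for _ in range(200):
    r = [random.randrange(2) for _ in range(4)]
    b = sum(ri * ui for ri, ui in zip(r, u)) % 2
    if random.random() < 0.1:        # channel flips the parity bit
        b ^= 1
    samples.append((r, b))
# With 200 samples, the true u agrees with ~90% of parities while every
# other candidate agrees with ~50%, so the majority vote recovers u.
```

The gap between ~90% and ~50% agreement is exactly the bias \epsilon_1^{l+1} exploited by the piling-up-lemma sample count above.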

Here is a more detailed paper on a similar topic. Another paper is here.
