This is some work I did for a paper, but later decided to exclude. It might still be of interest to someone, so I have chosen to publish it here. The problem is about summing vectors in a space of dimension $d$. We are only allowed to use exactly $p$ vectors from a box containing $k$ vectors. Assume that an addition of $p$ vectors of dimension $d$ requires $(p-1)d$ work. If we naively sum over all possible selections, we will hit the same point in space several times, because the order in which the vectors are summed does not matter. We avoid this by fixing an order in which the vectors may occur in a sum. For instance, $\vec{a}+\vec{b}$ is allowed, but $\vec{b}+\vec{a}$ is not. Removing the redundant sums leaves ${k \choose p}$ unique entries, so it takes ${k \choose p}(p-1)d$ operations to compute all points in space reachable by addition of exactly $p$ vectors. However, one can do better than this. The idea is to reuse already computed values. How? We perform the addition $\vec{a}+\vec{b}$ once and then generate all possible weight-$p$ sums containing $\vec{a}+\vec{b}$. This is done recursively; thus, the addition $\vec{a}+\vec{b}+\vec{c}$ is also performed only once, and so forth. The recursive algorithm is pretty straightforward (but also pretty inefficient):

function f(sum, this, left)
    for all i in interval (this, k]
        if left == 0:
            do something with sum + vectors[i]
        else:
            f(sum + vectors[i], i, left - 1)


In order to make it efficient, one has to consider the divide-and-conquer version. This is omitted here, as it is mainly a technicality. The important question is: how much does the reuse help?

$\underbrace{\underbrace{{\vec{a}}+{\vec{b}}}_\textnormal{once}+{\vec{c}}}_\textnormal{once}+{\vec{d}}$

What is the gain of this? POW! Here’s a lemma…

$\blacksquare ~ \textnormal{\textbf{Lemma}}$ The improvement factor $\gamma$ is given by

$\gamma^{-1} \leq \frac{\sum_{j=2}^p {k-p+j \choose j}}{{k \choose p}(p-1)}$ .

Proof. First, we form all admissible pairs. Since the final sum has weight $p$, the first index of a pair cannot lie among the last $p-2$ positions, which gives ${k-p+2 \choose 2}$ possible pairs, each costing one addition. In the next step we form all admissible triples, i.e. ${k-p+3 \choose 3}$ of them, as we are not allowed to pick from the last $p-3$ positions. Since we have already performed the additions for all pairs, each triple requires only one further addition. Repeating this idea, we end up with $\sum_{j=2}^p {k-p+j \choose j}$ summations in total; dividing by the naive count ${k \choose p}(p-1)$ gives the bound. $\square$
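The count in the proof can be checked numerically. The sketch below (function name is mine) runs the recursion with the pruning described in the proof — partial sums whose indices cannot be extended to a full weight-$p$ sum are never formed — and compares the number of additions against $\sum_{j=2}^p {k-p+j \choose j}$:

```python
from math import comb

def count_reuse_additions(k, p):
    """Count the vector additions performed when every partial sum is
    formed once and branches that cannot reach weight p are pruned."""
    count = 0

    def f(depth, this):
        nonlocal count
        left = p - depth - 1  # picks remaining after the one chosen here
        # prune: leave room for `left` more strictly increasing indices
        for i in range(this + 1, k - left):
            if depth >= 1:
                count += 1  # combine vectors[i] with an existing partial sum
            if left > 0:
                f(depth + 1, i)

    f(0, -1)
    return count

k, p = 6, 3
reuse = count_reuse_additions(k, p)                          # 30
formula = sum(comb(k - p + j, j) for j in range(2, p + 1))   # 10 + 20 = 30
naive = comb(k, p) * (p - 1)                                 # 40
```

Here the reuse scheme needs 30 additions where the naive enumeration needs 40, matching the lemma.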

$\blacksquare ~ \textnormal{\textbf{Corollary}}$ The asymptotic improvement factor is

$\lim_{k \rightarrow \infty}\gamma \geq (p-1)$ .

Proof. We have that

$\gamma^{-1} \leq \frac{{k \choose p}}{{k \choose p}(p-1)} + \frac{\sum_{j=2}^{p-1} {k-p+j \choose j}}{{k \choose p}(p-1)} = \frac{1}{p-1}+O\left(k^{-1}\right)$ ,

which directly gives the corollary. Well… so what? The impact of this is that one can perform all the additions in $O\left({k \choose p} d\right)$ operations. The interesting thing to note here is that if $k$ is big, we will practically never sum together more than two vectors in order to produce something that actually is a sum of $p$ vectors. In other words, the cost per sum becomes independent of the number of vectors being summed together (essentially constant complexity $d$); only the number of weight-$p$ sums depends on $p$.
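The corollary can also be observed numerically: the lower bound on $\gamma$ obtained from the lemma creeps up toward $p-1$ as $k$ grows. A short sketch (function name is my own):

```python
from math import comb

def gamma_lower_bound(k, p):
    """Lower bound on the improvement factor gamma from the lemma:
    naive cost C(k, p)(p - 1) divided by the reused-addition count."""
    reuse = sum(comb(k - p + j, j) for j in range(2, p + 1))
    return comb(k, p) * (p - 1) / reuse

# For p = 4 the bound approaches p - 1 = 3 from below as k grows:
for k in (20, 100, 1000):
    print(k, gamma_lower_bound(k, 4))
```

For $p = 4$ the bound rises from roughly $2.44$ at $k = 20$ to within about $0.02$ of $3$ at $k = 1000$, consistent with the $O(k^{-1})$ error term above.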