Background info: I have a large number (N) of particles in 3D. For all particle pairs [i,j] that have certain properties, I compute a geometrical factor c[i,j]. Then I want to sum up the contribution of all pairs [i,j] for a fixed i and call this c[i] (and repeat this procedure for all particles i).
Typically the number of relevant pairs is much smaller than N^2, so building an (N,N)-dimensional array C with the relevant values at positions [i,j] and zeros elsewhere is fairly quick with numpy, but very inefficient in terms of memory. So now I store only the C[i,j] values for the relevant pairs, along with the particle indices forming those pairs, in 1D arrays.
This is probably best illustrated with an example: say I have two pairs, consisting of the particles (3,5) and (3,10). Schematically, my variables then look like this (double-counting intended):
    i = [3, 3, 5, 10]                 # list of particles i that form a pair
    j = [5, 10, 3, 3]                 # corresponding particles j (not used in the later example)
    cij = [c35, c310, -c35, -c310]    # (with actual numbers in reality)
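To make the memory trade-off concrete, here is a minimal sketch of the two representations side by side, with c35 and c310 replaced by placeholder numbers and N chosen arbitrarily (both are assumptions for illustration only):

```python
import numpy as np

N = 12                 # arbitrary, just large enough for the example indices
c35, c310 = 1.5, 2.5   # placeholder values for the geometrical factors

# Dense representation: (N, N) array, mostly zeros.
C = np.zeros((N, N))
C[3, 5], C[3, 10] = c35, c310
C[5, 3], C[10, 3] = -c35, -c310

# Sparse representation: 1D arrays holding only the relevant pairs.
i = np.array([3, 3, 5, 10])
j = np.array([5, 10, 3, 3])
cij = np.array([c35, c310, -c35, -c310])

# Both encode the same information:
assert np.allclose(C[i, j], cij)
```

The dense array costs O(N^2) memory regardless of how many pairs exist, while the 1D arrays scale with the number of pairs.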
Now it really boils down to finding an efficient vectorized way to rewrite the following for loop:
    ci = np.zeros(N)    # per-particle sums
    part_list = np.arange(N)
    for a in part_list:
        cond = np.where(i == a)
        ci[a] = np.sum(cij[cond])
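For reference, the loop above runs as expected on the example data (again with placeholder numbers for c35 and c310, and an arbitrary N):

```python
import numpy as np

# Example data from above, with placeholder values for the factors.
N = 12
i = np.array([3, 3, 5, 10])
cij = np.array([1.5, 2.5, -1.5, -2.5])

# The loop to be vectorized: sum cij over all pairs containing particle a.
ci = np.zeros(N)
for a in np.arange(N):
    cond = np.where(i == a)
    ci[a] = np.sum(cij[cond])

# ci[3] == 1.5 + 2.5, ci[5] == -1.5, ci[10] == -2.5, all others zero
```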
Other solutions I have thought of, but would like to avoid:
a) Parallelize the for-loop: Not feasible, because this is embedded in an outer loop which I want to parallelize at some point.
b) Write the for-loop in C and wrap it into Python: Seems like overkill for this (hopefully) rather simple problem.