By default, most navis functions use only a single core (although some third-party functions used under the hood might use more). Distributing expensive computations across multiple cores can speed things up considerably.

As of version 0.6.0, many navis functions natively support parallel processing. This notebook illustrates various ways to use parallelism. Importantly, navis uses pathos for multiprocessing:

$ pip install pathos -U

Running navis functions in parallel

Since version 0.6.0, many functions accept parallel=True and an optional n_cores parameter.

import navis

# Load example neurons
nl = navis.example_neurons()
# Without parallel processing
%time res = navis.resample_skeleton(nl, resample_to=125)
CPU times: user 3.36 s, sys: 13.4 ms, total: 3.38 s
Wall time: 3.37 s
# With parallel processing (by default uses half the available cores)
%time res = navis.resample_skeleton(nl, resample_to=125, parallel=True)
CPU times: user 134 ms, sys: 42.8 ms, total: 177 ms
Wall time: 862 ms
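
By default, parallel=True uses half the available cores; pass n_cores to override this (e.g. parallel=True, n_cores=4). As a minimal sketch, the "half the available cores" default could be computed like this (half_the_cores is a hypothetical helper for illustration, not part of the navis API):

```python
import os

def half_the_cores():
    # Illustrative "half the available cores" default;
    # os.cpu_count() can return None, hence the fallback.
    return max(1, (os.cpu_count() or 2) // 2)

print(half_the_cores())
```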

The same also works for neuron methods:

%time res = nl.resample(125)
CPU times: user 3.68 s, sys: 210 ms, total: 3.89 s
Wall time: 3.88 s
%time res = nl.resample(125, parallel=True)
CPU times: user 164 ms, sys: 9.11 ms, total: 173 ms
Wall time: 825 ms

Parallelizing generic functions

For non-navis functions, you can use NeuronList.apply to parallelize them.

First, let’s write a mock function that simply waits one second and then returns the number of nodes:

def my_func(x):
    import time
    time.sleep(1)  # simulate an expensive computation
    return x.n_nodes
%time n_nodes = nl.apply(my_func)
CPU times: user 46.3 ms, sys: 6.43 ms, total: 52.8 ms
Wall time: 5.05 s
%time n_nodes = nl.apply(my_func, parallel=True)
CPU times: user 94.7 ms, sys: 5.18 ms, total: 99.9 ms
Wall time: 1.1 s