MICrONS Datasets

The Allen Institute released two large connectomics datasets:
  1. A “Cortical mm^3” of mouse visual cortex (broken into two portions, “65” and “35”)

  2. A smaller “Layer 2/3” dataset of mouse visual cortex

Both of these can be browsed via MICrONS Explorer using neuroglancer. The data are public and, thanks to the excellent cloud-volume and caveclient libraries (developed by William Silversmith, Forrest Collman, Sven Dorkenwald, Casey Schneider-Mizell and others), we can easily fetch neurons and their connectivity.

For easier interaction, navis ships with a small interface to these datasets. To use it, we will have to make sure caveclient (and with it cloud-volume) is installed:

pip3 install caveclient cloud-volume -U

The first time you run the code below, you might have to get and set a client secret. Simply follow the instructions in the terminal and, when in doubt, check out the section about authentication in the caveclient docs.
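That one-time setup might look something like the sketch below (the authoritative flow is described in the caveclient docs; the token placeholder obviously needs to be replaced with your own token):

from caveclient import CAVEclient

# A "global" client (no datastack) is enough for authentication
client = CAVEclient()

# This prints a URL where you can generate a new token
client.auth.get_new_token()

# Store the token locally so future sessions can use it
client.auth.save_token(token="PASTE_YOUR_TOKEN_HERE")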

Let’s get started:

import navis
import navis.interfaces.microns as mi

You will find that most functions in the interface accept a datastack parameter. At the time of writing, the available stacks are:

  • cortex65 (also called “minnie65”) is the anterior portion of the cortical mm^3 dataset

  • cortex35 (also called “minnie35”) is the (smaller) posterior portion of the cortical mm^3 dataset

  • layer 2/3 (also called “pinky100”) is the earlier, smaller cortical dataset
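To target a specific stack, just pass its name explicitly. A quick sketch (the rest of this tutorial uses cortex65):

# Explicitly target the "minnie35" portion of the cortical mm^3 dataset
client35 = mi.get_cave_client(datastack='cortex35')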

If not specified, the default is cortex65! Let’s start with some basic queries using caveclient:

# Initialize the client for the 65 part of cortical mm^3 (i.e. "Minnie")
client = mi.get_cave_client(datastack='cortex65')

# Fetch available annotation tables
client.materialize.get_tables()
['nucleus_detection_v0',
 'synapses_pni_2',
 'nucleus_neuron_svm',
 'proofreading_status_public_release',
 'func_unit_em_match_release',
 'allen_soma_ei_class_model_v1',
 'allen_visp_column_soma_coarse_types_v1']

These are the publicly available annotation tables which we can use to fetch soma metadata. Let’s check out allen_soma_ei_class_model_v1:

ct = client.materialize.query_table('allen_soma_ei_class_model_v1')
ct.head()
id valid classification_system cell_type pt_supervoxel_id pt_root_id pt_position
0 485509 t aibs_coarse_excitatory excitatory 103588564537113366 864691136740606812 [282608, 103808, 20318]
1 113721 t aibs_coarse_excitatory excitatory 79951332685465031 864691135366988025 [110208, 153664, 23546]
2 263203 t aibs_coarse_excitatory excitatory 87694643458256575 864691135181741826 [166512, 174176, 24523]
3 456177 t aibs_coarse_excitatory excitatory 102677963354799688 864691135337690598 [275616, 135120, 24873]
4 364447 t aibs_coarse_excitatory excitatory 94449079618306553 864691136883828334 [216064, 166800, 15025]
ct.cell_type.unique()
array(['excitatory', 'inhibitory'], dtype=object)

Looks like at this point there is only a rather coarse public classification. Nevertheless, let’s fetch a couple of excitatory and inhibitory neurons:

# Fetch proof-reading status
pr = client.materialize.query_table('proofreading_status_public_release')
pr.head()
id valid pt_supervoxel_id pt_root_id valid_id status_dendrite status_axon pt_position
0 1 t 89529934389098311 864691136296964635 864691136296964635 extended non [179808, 216672, 23361]
1 2 t 90584228533843146 864691136311986237 864691136311986237 extended non [187840, 207232, 22680]
2 3 t 89528353773943370 864691135355207119 864691135355207119 extended non [180016, 204592, 22798]
3 4 t 91077153340676495 864691135355207375 864691135355207375 extended non [191424, 209888, 22845]
4 5 t 88897234233461709 864691136422983727 864691136422983727 extended non [175248, 220944, 23561]
# Subset to those neurons that have been proofread
proofread = pr[pr.status_dendrite.isin(['extended', 'clean']) & pr.status_axon.isin(['extended', 'clean'])].pt_root_id.values

ct = ct[ct.pt_root_id.isin(proofread)]
ct.shape
(167, 7)
# Pick 20 “root IDs” (sometimes also called segment IDs) each
inh_ids = ct[ct.cell_type == 'inhibitory'].pt_root_id.values[:20]
exc_ids = ct[ct.cell_type == 'excitatory'].pt_root_id.values[:20]

Next, we will fetch the meshes for these neurons (we skip the synapses for now - see below for how to include them):

# Fetch those neurons
inh = mi.fetch_neurons(inh_ids, lod=2, with_synapses=False)
exc = mi.fetch_neurons(exc_ids, lod=2, with_synapses=False)

# Inspect
inh
<class 'navis.core.neuronlist.NeuronList'> containing 20 neurons (270.6MiB)
type name id units n_vertices n_faces
0 navis.MeshNeuron None 864691135994731946 1 nanometer 118433 238637
1 navis.MeshNeuron None 864691135974528623 1 nanometer 145727 293750
... ... ... ... ... ... ...
18 navis.MeshNeuron None 864691136136830589 1 nanometer 54995 111098
19 navis.MeshNeuron None 864691135428608048 1 nanometer 397265 799150
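If you want the synapses as well, set with_synapses=True when fetching. Below is a minimal sketch, assuming the interface attaches pre- and postsynapses as a standard navis connector table (note that this downloads considerably more data):

# Re-fetch one of the inhibitory neurons, this time with its synapses
n = mi.fetch_neurons(inh_ids[:1], lod=2, with_synapses=True)[0]

# Pre- and postsynapses end up in the neuron's connector table
n.connectors.head()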

These neurons are fairly large even though we used a relatively coarse lod (level of detail; higher = coarser). For visualizing such large meshes it can be useful to simplify them a little. For this, you need either open3d (pip3 install open3d), pymeshlab (pip3 install pymeshlab) or Blender 3D on your computer.

# Reduce face counts to 1/3 of the original
inh_ds = navis.simplify_mesh(inh, F=1/3)
exc_ds = navis.simplify_mesh(exc, F=1/3)

# Inspect (note the lower face/vertex counts)
inh_ds
<class 'navis.core.neuronlist.NeuronList'> containing 20 neurons (69.4MiB)
type name id units n_vertices n_faces
0 navis.MeshNeuron None 864691135994731946 1 nanometer 40887 79545
1 navis.MeshNeuron None 864691135974528623 1 nanometer 50585 97915
... ... ... ... ... ... ...
18 navis.MeshNeuron None 864691135446864980 1 nanometer 108886 213560
19 navis.MeshNeuron None 864691135428608048 1 nanometer 136145 266382

Let’s visualize the neurons before we run any analyses:

# Create some colors: reds for excitatory, blues for inhibitory
import seaborn as sns
colors = {n.id: sns.color_palette('Reds', 5)[i] for i, n in enumerate(exc[:5])}
colors.update({n.id: sns.color_palette('Blues', 5)[i] for i, n in enumerate(inh[:5])})

# Plot the first 5 neurons each
fig = navis.plot3d([inh_ds[:5], exc_ds[:5]], color=colors)