Tutorial 2: "How" models#
Week 1, Day 1: Model Types
By Neuromatch Academy
Content creators: Matt Laporte, Byron Galbraith, Konrad Kording
Content reviewers: Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom, Ella Batty
Production editors: Gagana B, Spiros Chavlis
Tutorial Objectives#
Estimated timing of tutorial: 45 minutes
This is tutorial 2 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore models that can potentially explain how the spiking data we have observed is produced.
To understand the mechanisms that give rise to the neural data we saw in Tutorial 1, we will build simple neuronal models and compare their spiking responses to real data. We will:
Write code to simulate a simple "leaky integrate-and-fire" neuron model
Make the model more complicated, but also more realistic, by adding more physiologically-inspired details
Setup#
Install and import feedback gadget#
Show code cell source
# @title Install and import feedback gadget
!pip3 install vibecheck datatops --quiet
from vibecheck import DatatopsContentReviewContainer
def content_review(notebook_section: str):
return DatatopsContentReviewContainer(
"", # No text prompt
notebook_section,
{
"url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
"name": "neuromatch_cn",
"user_key": "y1x3mpx5",
},
).render()
feedback_prefix = "W1D1_T2"
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
Figure Settings#
Show code cell source
# @title Figure Settings
import logging
logging.getLogger('matplotlib.font_manager').disabled = True
import ipywidgets as widgets # interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/main/nma.mplstyle")
Plotting Functions#
Show code cell source
# @title Plotting Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
"""Plot a step histogram given counts over bins."""
if ax is None:
_, ax = plt.subplots()
# duplicate the first element of `counts` to match bin edges
counts = np.insert(counts, 0, counts[0])
ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading
ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines
for x in vlines:
ax.axvline(x, color='r', linestyle='dotted') # vertical line
if ax_args is None:
ax_args = {}
# heuristically set max y to leave a bit of room
ymin, ymax = ax_args.get('ylim', [None, None])
if ymax is None:
ymax = np.max(counts)
if ax_args.get('yscale', 'linear') == 'log':
ymax *= 1.5
else:
ymax *= 1.1
if ymin is None:
ymin = 0
if ymax == ymin:
ymax = None
ax_args['ylim'] = [ymin, ymax]
ax.set(**ax_args)
ax.autoscale(enable=False, axis='x', tight=True)
def plot_neuron_stats(v, spike_times):
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# membrane voltage trace
ax1.plot(v[0:100])
ax1.set(xlabel='Time', ylabel='Voltage')
# plot spike events
for x in spike_times:
if x >= 100:
break
ax1.axvline(x, color='red')
# ISI distribution
if len(spike_times)>1:
isi = np.diff(spike_times)
n_bins = np.arange(isi.min(), isi.max() + 2) - .5
counts, bins = np.histogram(isi, n_bins)
vlines = []
if len(isi) > 0:
vlines = [np.mean(isi)]
xmax = max(20, int(bins[-1])+5)
histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={
'xlabel': 'Inter-spike interval',
'ylabel': 'Number of intervals',
'xlim': [0, xmax]
})
else:
ax2.set(xlabel='Inter-spike interval',
ylabel='Number of intervals')
plt.show()
"How" models#
Video 1: "How" models#
Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_How_models_Video")
Section 1: The Linear Integrate-and-Fire Neuron#
Note that we'll be using some math to describe our leaky integrate-and-fire neurons. If you need to quickly check the notation (i.e., what a certain variable name stands for), you can look at the notation section after the summary!
How does a neuron spike?
A neuron charges and discharges an electric field across its cell membrane. The state of this electric field can be described by the membrane potential. The membrane potential rises due to excitation of the neuron, and when it reaches a threshold a spike occurs. The potential resets, and must rise to a threshold again before the next spike occurs.
One of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to an excitatory input $I$, scaled by a factor $\alpha$:

$$dV_m = \alpha I$$

Once $V_m$ reaches a threshold value, a spike is produced, $V_m$ is reset to its starting value, and the process continues.

Here, we will take the starting and threshold potentials as $0$ and $1$, respectively.

Note that we define the membrane potential $V_m$ as a unitless quantity here; this keeps the model simple without changing its qualitative behavior.
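To make the update rule concrete, here is a minimal sketch (an aside, not part of the exercises) of what it implies for a constant input; the drive of 0.1 per step is an illustrative assumption, while the threshold of 1 and reset to 0 match the values used below.

```python
# Minimal sketch: linear integrate-and-fire with a constant input.
# A drive of 0.1 per step is illustrative; threshold 1 and reset to 0
# match the exercises below.
alpha_I = 0.1              # constant alpha * I per timestep
v, spike_steps = 0.0, []
for t in range(30):
    v += alpha_I           # dV_m = alpha * I
    if v > 1:              # threshold crossing: spike and reset
        spike_steps.append(t)
        v = 0.0

print(spike_steps)         # regularly spaced spikes (here at steps 10 and 21)
```

With a constant drive, the model fires perfectly regularly; the rest of this section asks what happens when the input is random instead.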
The proposed model is a 1D simplification. There are many details we could add to it, to preserve different parts of the complex structure and dynamics of a real neuron. If we were interested in small or local changes in the membrane potential, our 1D simplification could be a problem. However, we'll assume an idealized "point" neuron model for our current purpose.
Spiking Inputs#
Given our simplified model for the neuron dynamics, we still need to consider what form the input $I$ will take.

Unlike in the simple case of a constant input, the input arriving at a real neuron varies over time, so we will describe it with a probability distribution.

We'll assume the input current $I$ at each timestep is the number of spikes arriving from the presynaptic neuron(s) during that timestep.

Given no other information about the input neurons, we will also assume that the distribution has a mean (i.e. mean rate, or number of spikes received per timestep), and that the spiking events of the input neuron(s) are independent in time. Are these reasonable assumptions in the context of real neurons?

A suitable distribution given these assumptions is the Poisson distribution, which we'll use to model $I$:

$$P(I = k) = \frac{\lambda^k e^{-\lambda}}{k!}$$

where $\lambda$ is the mean of the distribution, i.e. the average number of spikes arriving per timestep.
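As a quick check of this formula (an illustrative aside), the sketch below evaluates the Poisson probability mass function directly and compares it to `scipy.stats.poisson.pmf`:

```python
import numpy as np
from math import factorial
from scipy import stats

lam = 10  # mean number of incoming spikes per timestep (illustrative value)

ks = range(25)
# P(I = k) = lam^k * exp(-lam) / k!, evaluated directly from the formula
pmf_by_hand = np.array([lam**k * np.exp(-lam) / factorial(k) for k in ks])
# The same probabilities from scipy
pmf_scipy = stats.poisson(lam).pmf(list(ks))

print(np.allclose(pmf_by_hand, pmf_scipy))  # True
```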
Coding Exercise 1: Compute $dV_m$#
For your first exercise, you will write the code to compute the change in voltage `dv` in the `lif_neuron` function below. The value of `rate` sets the mean of the Poisson-distributed input, i.e., the average number of incoming spikes per timestep.
The `scipy.stats` package is a great resource for working with and sampling from various probability distributions. We will use the `scipy.stats.poisson` class and its method `rvs` to produce Poisson-distributed random samples. In this tutorial, we have imported this package with the alias `stats`, so you should refer to it in your code as `stats.poisson`.
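As a brief illustration of the sampling call itself (using the same `stats` alias; the seed and values here are illustrative, not the ones used in the exercise):

```python
import numpy as np
from scipy import stats

np.random.seed(0)   # illustrative seed
rate, n_steps = 10, 1000

# One Poisson-distributed input spike count per timestep
exc = stats.poisson(rate).rvs(n_steps)

print(exc[:10])     # integer counts scattered around the rate
print(exc.mean())   # sample mean should be close to `rate`
```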
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
""" Simulate a linear integrate-and-fire neuron.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
rate (int): The mean rate of incoming spikes
"""
# Precompute Poisson samples for speed
exc = stats.poisson(rate).rvs(n_steps)
# Initialize voltage and spike storage
v = np.zeros(n_steps)
spike_times = []
################################################################################
# Students: compute dv, then comment out or remove the next line
raise NotImplementedError("Exercise: compute the change in membrane potential")
################################################################################
# Loop over steps
for i in range(1, n_steps):
# Update v
dv = ...
v[i] = v[i-1] + dv
# If spike happens, reset voltage and record
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Set random seed (for reproducibility)
np.random.seed(12)
# Model LIF neuron
v, spike_times = lif_neuron()
# Visualize
plot_neuron_stats(v, spike_times)
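If you get stuck, one way to fill in the missing `dv` line, consistent with the demo code used later in this tutorial, is:

```python
# The membrane potential increases by the scaled number of incoming
# excitatory spikes at this timestep
dv = alpha * exc[i]
```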
Example output:

Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_Compute_dVm_Exercise")
Interactive Demo 1: Linear-IF neuron#
Like last time, you can now explore how various parameters of the LIF model influence the ISI distribution. Specifically, you can vary `alpha`, the input scaling factor, and `rate`, the mean rate of incoming spikes.
What is the spiking pattern of this model?
What effect does raising or lowering alpha have?
What effect does raising or lowering the rate have?
Does the distribution of ISIs ever look like what you observed in the data in Tutorial 1?
You don't need to worry about how the code works, but you do need to run the cell to enable the sliders.
Show code cell source
# @markdown You don't need to worry about how the code works, but you do need to **run the cell** to enable the sliders.
def _lif_neuron(n_steps=1000, alpha=0.01, rate=10):
exc = stats.poisson(rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = alpha * exc[i]
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(
alpha=widgets.FloatLogSlider(0.01, min=-2, max=-1),
rate=widgets.IntSlider(10, min=5, max=20)
)
def plot_lif_neuron(alpha=0.01, rate=10):
v, spike_times = _lif_neuron(2000, alpha, rate)
plot_neuron_stats(v, spike_times)
Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_Linear_IF_neuron_Interactive_Demo_and_Discussion")
Video 2: Linear-IF models#
Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_Linear_IF_models_Video")
Section 2: Inhibitory signals#
Estimated timing to here from start of tutorial: 20 min
Our linear integrate-and-fire neuron from the previous section was indeed able to produce spikes. However, our ISI histogram doesn't look much like empirical ISI histograms seen in Tutorial 1, which had an exponential-like shape. What is our model neuron missing, given that it doesn't behave like a real neuron?
In the previous model we only considered excitatory behavior: the only way the membrane potential could decrease was upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. The first is the natural tendency of the membrane potential to decay back toward a resting value, which we can model as a voltage leak. Adding a leak term to the update gives

$$dV_m = -\beta V_m + \alpha I_{exc}$$

where $\beta$ is the leakage factor and $I_{exc}$ is the excitatory input (as before, the number of incoming excitatory spikes per timestep).

We also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. We can model the inhibitory input with another Poisson random variable, $I_{inh} \sim \mathrm{Poisson}(\lambda_{inh})$, where $\lambda_{inh}$ is the mean rate of the incoming inhibitory spikes. Putting leak, excitation, and inhibition together gives

$$dV_m = -\beta V_m + \alpha \left(I_{exc} - I_{inh}\right)$$
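To see what the leak term does, the short sketch below (an illustrative aside; `alpha`, `beta`, and the constant mean inputs are assumptions chosen for demonstration) integrates the update rule with the Poisson inputs replaced by their means and no spiking threshold, so the voltage settles toward an equilibrium where the leak balances the drive:

```python
alpha, beta = 0.5, 0.1          # input scaling and leak factor (illustrative values)
mean_exc, mean_inh = 12, 10     # mean excitatory / inhibitory spike counts per step

v = 0.0
for _ in range(100):
    # update rule with the random inputs replaced by their means, no threshold
    dv = -beta * v + alpha * (mean_exc - mean_inh)
    v = v + dv

# At equilibrium dv = 0, i.e. v* = alpha * (mean_exc - mean_inh) / beta
print(v, alpha * (mean_exc - mean_inh) / beta)
```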
Coding Exercise 2: Compute $dV_m$ with inhibitory signals#
For your second exercise, you will again write the code to compute the change in voltage `dv`, this time in the `lif_neuron_inh` function below, which includes both a leak term and inhibitory input.
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
###############################################################################
# Students: compute dv, then comment out or remove the next line
raise NotImplementedError("Exercise: compute the change in membrane potential")
################################################################################
for i in range(1, n_steps):
dv = ...
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
# Set random seed (for reproducibility)
np.random.seed(12)
# Model LIF neuron
v, spike_times = lif_neuron_inh()
# Visualize
plot_neuron_stats(v, spike_times)
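As before, one way to fill in the missing `dv` line, matching the demo code further down, is:

```python
# The potential leaks toward zero at rate beta and is driven by the
# difference between excitatory and inhibitory inputs
dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
```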
Example output:

Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_Compute_dVm_with_inhibitory_signals_Exercise")
Interactive Demo 2: LIF + inhibition neuron#
As in Interactive Demo 1, you can play with the parameters of the input to our LIF neuron and visualize what happens. Here, in addition to controlling `alpha`, which scales the input, you can also control `beta`, the leakage factor on the voltage, `exc_rate`, the mean rate of the excitatory presynaptic neurons, and `inh_rate`, the mean rate of the inhibitory presynaptic neurons.
What effect does raising the excitatory rate have?
What effect does raising the inhibitory rate have?
What if you raise both the excitatory and inhibitory rate?
Does the distribution of ISIs ever look like what you observed in the data in Tutorial 1?
Run the cell to enable the sliders.
Show code cell source
# @title
# @markdown **Run the cell** to enable the sliders.
def _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
""" Simulate a simplified leaky integrate-and-fire neuron with both excitatory
and inhibitory inputs.
Args:
n_steps (int): The number of time steps to simulate the neuron's activity.
alpha (float): The input scaling factor
beta (float): The membrane potential leakage factor
exc_rate (int): The mean rate of the incoming excitatory spikes
inh_rate (int): The mean rate of the incoming inhibitory spikes
"""
# precompute Poisson samples for speed
exc = stats.poisson(exc_rate).rvs(n_steps)
inh = stats.poisson(inh_rate).rvs(n_steps)
v = np.zeros(n_steps)
spike_times = []
for i in range(1, n_steps):
dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
v[i] = v[i-1] + dv
if v[i] > 1:
spike_times.append(i)
v[i] = 0
return v, spike_times
@widgets.interact(alpha=widgets.FloatLogSlider(0.5, min=-1, max=1),
beta=widgets.FloatLogSlider(0.1, min=-1, max=0),
exc_rate=widgets.IntSlider(12, min=10, max=20),
inh_rate=widgets.IntSlider(12, min=10, max=20))
def plot_lif_neuron(alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
v, spike_times = _lif_neuron_inh(2000, alpha, beta, exc_rate, inh_rate)
plot_neuron_stats(v, spike_times)
Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_LIF_and_inhibition_neuron_Interactive_Demo_and_Discussion")
Video 3: LIF + inhibition#
Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_LIF_and_inhibition_Video")
Section 3: Reflecting on "how" models#
Estimated timing to here from start of tutorial: 35 min
Think! 3: Reflecting on "how" models#
Please discuss the following questions for around 10 minutes with your group:
Have you seen "how" models before?
Have you ever done one?
Why are "how" models useful?
When are they possible? Does your field have "how" models?
What do we learn from constructing them?
Submit your feedback#
Show code cell source
# @title Submit your feedback
content_review(f"{feedback_prefix}_Reflecting_on_how_models_Discussion")
Summary#
Estimated timing of tutorial: 45 minutes
In this tutorial we gained some intuition for the mechanisms that produce the observed behavior in our real neural data. First, we built a simple neuron model with excitatory input and saw that its behavior, measured using the ISI distribution, did not match our real neurons. We then improved our model by adding leakiness and inhibitory input. The behavior of this balanced model was much closer to the real neural data.
Notation#
$V_m$: membrane potential
$dV_m$: change in membrane potential
$\alpha$: input scaling factor
$\beta$: membrane potential leakage factor
$I$: input (number of incoming spikes per timestep)
$I_{exc}$, $I_{inh}$: excitatory and inhibitory inputs
$\lambda$: mean rate of incoming spikes
Bonus#
Bonus Section 1: Why do neurons spike?#
A neuron stores energy in an electric field across its cell membrane, by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold, depending on inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the total input received since the last spike, but also on the timing of that input.
The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. This is the basis for the leaky integrate-and-fire neuron model.
Bonus Section 2: The LIF Model Neuron#
The full equation for the LIF neuron is

$$C_m \frac{dV_m}{dt} = -\frac{V_m - V_{rest}}{R_m} + I$$

where $C_m$ is the membrane capacitance, $R_m$ is the membrane resistance, $V_{rest}$ is the resting potential, and $I$ is the input current (e.g., from presynaptic neurons or an injected current).

In our examples above we set many of these parameters to convenient values (effectively $C_m = R_m = 1$, $V_{rest} = 0$, and a timestep of $1$) so that we could focus on the qualitative behavior of the model; they can be adjusted to produce different dynamics or to match physical units.
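To connect this equation back to the simulations above, here is a minimal sketch (the parameter values are illustrative assumptions, not the tutorial's) of a forward-Euler simulation of the full LIF equation with a constant input current and the same threshold-and-reset rule used earlier:

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch)
C_m, R_m = 1.0, 10.0         # membrane capacitance and resistance
V_rest, V_thresh = 0.0, 1.0  # resting potential and spike threshold
dt, I = 0.1, 0.15            # timestep and constant input current
n_steps = 1000

v = np.zeros(n_steps)
v[0] = V_rest
spike_times = []

for t in range(1, n_steps):
    # Forward-Euler step of C_m dV/dt = -(V - V_rest)/R_m + I
    dv = dt / C_m * (-(v[t-1] - V_rest) / R_m + I)
    v[t] = v[t-1] + dv
    if v[t] > V_thresh:      # threshold crossing: record spike and reset
        spike_times.append(t)
        v[t] = V_rest

print(len(spike_times))      # number of spikes produced by the constant drive
```

Because $R_m I > V_{thresh} - V_{rest}$ for these values, the voltage repeatedly charges up to threshold and the sketch produces regular spiking, just like the constant-input example at the start of the tutorial.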