{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {}, "id": "view-in-github" }, "source": [ "\"Open   \"Open" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Bonus Tutorial: Spike-timing dependent plasticity (STDP)\n", "\n", "**Week 2, Day 3: Biological Neuron Models**\n", "\n", "**By Neuromatch Academy**\n", "\n", "**Content creators:** Qinglong Gu, Songtin Li, John Murray, Richard Naud, Arvind Kumar\n", "\n", "**Content reviewers:** Spiros Chavlis, Lorenzo Fontolan, Richard Gao, Matthew Krause\n", "\n", "**Production editors:** Gagana B, Spiros Chavlis" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "---\n", "# Tutorial Objectives\n", "In this tutorial, we will focus on building a model of a synapse in which its synaptic strength changes as a function of the relative timing (i.e., time difference) between the spikes of the presynaptic and postsynaptic neurons, respectively. This change in the synaptic weight is known as **spike-timing dependent plasticity (STDP)**.\n", "\n", "Our goals for this tutorial are to:\n", "\n", "- build a model of synapse that show STDP\n", "\n", "- study how correlations in input spike trains influence the distribution of synaptic weights\n", "\n", "Towards these goals, we will model the presynaptic input as Poisson type spike trains. The postsynaptic neuron will be modeled as an LIF neuron (see Tutorial 1).\n", "\n", "Throughout this tutorial, we assume that a single postsynaptic neuron is driven by $N$ presynaptic neurons. That is, there are $N$ synapses, and we will study how their weights depend on the statistics or the input spike trains and their timing with respect to the spikes of the postsynaptic neuron.\n" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "---\n", "# Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install and import feedback gadget\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Install and import feedback gadget\n", "\n", "!pip3 install vibecheck datatops --quiet\n", "\n", "from vibecheck import DatatopsContentReviewContainer\n", "def content_review(notebook_section: str):\n", " return DatatopsContentReviewContainer(\n", " \"\", # No text prompt\n", " notebook_section,\n", " {\n", " \"url\": \"https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab\",\n", " \"name\": \"neuromatch_cn\",\n", " \"user_key\": \"y1x3mpx5\",\n", " },\n", " ).render()\n", "\n", "\n", "feedback_prefix = \"W2D3_T4_Bonus\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "both", "execution": {} }, "outputs": [], "source": [ "# Import libraries\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Figure Settings\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Figure Settings\n", "import logging\n", "logging.getLogger('matplotlib.font_manager').disabled = True\n", "\n", "import ipywidgets as widgets # interactive display\n", "%config InlineBackend.figure_format='retina'\n", "# use NMA plot style\n", "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/main/nma.mplstyle\")\n", "my_layout = widgets.Layout()" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "## Plotting functions\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Plotting functions\n", "def my_raster_plot(range_t, spike_train, n):\n", " \"\"\"Generates poisson trains\n", "\n", " Args:\n", " range_t : time sequence\n", " spike_train : binary spike trains, with shape (N, Lt)\n", " n : number of Poisson trains plot\n", "\n", " Returns:\n", " Raster_plot of the spike train\n", " \"\"\"\n", "\n", " # Find the number of all the spike trains\n", " N = spike_train.shape[0]\n", "\n", " # n should be smaller than N:\n", " if n > N:\n", " print('The number n exceeds the size of spike trains')\n", " print('The number n is set to be the size of spike trains')\n", " n = N\n", "\n", " # Raster plot\n", " i = 0\n", " while i <= n:\n", " if spike_train[i, :].sum() > 0.:\n", " t_sp = range_t[spike_train[i, :] > 0.5] # spike times\n", " plt.plot(t_sp, i * np.ones(len(t_sp)), 'k|', ms=10, markeredgewidth=2)\n", " i += 1\n", " plt.xlim([range_t[0], range_t[-1]])\n", " plt.ylim([-0.5, n + 0.5])\n", " plt.xlabel('Time (ms)')\n", " plt.ylabel('Neuron ID')\n", " plt.show()\n", "\n", "\n", "def my_example_P(pre_spike_train_ex, pars, P):\n", " \"\"\"Generates two plots (raster plot and LTP vs time plot)\n", "\n", " Args:\n", " pre_spike_train_ex : spike-train\n", " pars : dictionary with the parameters\n", " P : LTP ratio\n", "\n", " Returns:\n", " my_example_P returns a rastert plot (top),\n", " and a LTP ratio across time (bottom)\n", " \"\"\"\n", " spT = pre_spike_train_ex\n", " plt.figure(figsize=(7, 6))\n", " plt.subplot(211)\n", " color_set = ['red', 'blue', 'black', 'orange', 'cyan']\n", " for i in range(spT.shape[0]):\n", " t_sp = pars['range_t'][spT[i, :] > 0.5] # spike times\n", " plt.plot(t_sp, i*np.ones(len(t_sp)), '|',\n", " color=color_set[i],\n", " ms=10, markeredgewidth=2)\n", " plt.xlabel('Time (ms)')\n", " plt.ylabel('Neuron ID')\n", " plt.xlim(0, 200)\n", "\n", " plt.subplot(212)\n", " for k in range(5):\n", " plt.plot(pars['range_t'], P[k, :], color=color_set[k], lw=1.5)\n", " plt.xlabel('Time (ms)')\n", " plt.ylabel('P(t)')\n", " plt.xlim(0, 200)\n", "\n", " plt.tight_layout()\n", " plt.show()\n", "\n", "\n", "def mySTDP_plot(A_plus, A_minus, tau_stdp, time_diff, dW):\n", " plt.figure()\n", " plt.plot([-5 * tau_stdp, 5 * tau_stdp], [0, 0], 'k', linestyle=':')\n", " plt.plot([0, 0], [-A_minus, A_plus], 'k', linestyle=':')\n", "\n", " plt.plot(time_diff[time_diff <= 0], dW[time_diff <= 0], 'ro')\n", " plt.plot(time_diff[time_diff > 0], dW[time_diff > 0], 'bo')\n", "\n", " plt.xlabel(r't$_{\\mathrm{pre}}$ - t$_{\\mathrm{post}}$ (ms)')\n", " plt.ylabel(r'$\\Delta$W', fontsize=12)\n", " plt.title('Biphasic STDP', fontsize=12, fontweight='bold')\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Helper functions\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Helper functions\n", "\n", "def default_pars_STDP(**kwargs):\n", " pars = {}\n", "\n", " # typical neuron parameters\n", " pars['V_th'] = -55. # spike threshold [mV]\n", " pars['V_reset'] = -75. # reset potential [mV]\n", " pars['tau_m'] = 10. # membrane time constant [ms]\n", " pars['V_init'] = -65. # initial potential [mV]\n", " pars['V_L'] = -75. # leak reversal potential [mV]\n", " pars['tref'] = 2. 
# refractory time (ms)\n", "\n", " # STDP parameters\n", " pars['A_plus'] = 0.008 # magnitude of LTP\n", " pars['A_minus'] = pars['A_plus'] * 1.10 # magnitude of LTD\n", " pars['tau_stdp'] = 20. # STDP time constant [ms]\n", "\n", " # simulation parameters\n", " pars['T'] = 400. # Total duration of simulation [ms]\n", " pars['dt'] = .1 # Simulation time step [ms]\n", "\n", " # external parameters if any\n", " for k in kwargs:\n", " pars[k] = kwargs[k]\n", "\n", " pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms]\n", "\n", " return pars\n", "\n", "\n", "def Poisson_generator(pars, rate, n, myseed=False):\n", " \"\"\"Generates poisson trains\n", "\n", " Args:\n", " pars : parameter dictionary\n", " rate : noise amplitute [Hz]\n", " n : number of Poisson trains\n", " myseed : random seed. int or boolean\n", "\n", " Returns:\n", " pre_spike_train : spike train matrix, ith row represents whether\n", " there is a spike in ith spike train over time\n", " (1 if spike, 0 otherwise)\n", " \"\"\"\n", "\n", " # Retrieve simulation parameters\n", " dt, range_t = pars['dt'], pars['range_t']\n", " Lt = range_t.size\n", "\n", " # set random seed\n", " if myseed:\n", " np.random.seed(seed=myseed)\n", " else:\n", " np.random.seed()\n", "\n", " # generate uniformly distributed random variables\n", " u_rand = np.random.rand(n, Lt)\n", "\n", " # generate Poisson train\n", " poisson_train = 1. * (u_rand < rate * (dt / 1000.))\n", "\n", " return poisson_train" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "---\n", "# Section 1: Spike-timing dependent plasticity (STDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Video 1: STDP\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "remove-input" ] }, "outputs": [], "source": [ "# @title Video 1: STDP\n", "from ipywidgets import widgets\n", "from IPython.display import YouTubeVideo\n", "from IPython.display import IFrame\n", "from IPython.display import display\n", "\n", "\n", "class PlayVideo(IFrame):\n", " def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n", " self.id = id\n", " if source == 'Bilibili':\n", " src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n", " elif source == 'Osf':\n", " src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n", " super(PlayVideo, self).__init__(src, width, height, **kwargs)\n", "\n", "\n", "def display_videos(video_ids, W=400, H=300, fs=1):\n", " tab_contents = []\n", " for i, video_id in enumerate(video_ids):\n", " out = widgets.Output()\n", " with out:\n", " if video_ids[i][0] == 'Youtube':\n", " video = YouTubeVideo(id=video_ids[i][1], width=W,\n", " height=H, fs=fs, rel=0)\n", " print(f'Video available at https://youtube.com/watch?v={video.id}')\n", " else:\n", " video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n", " height=H, fs=fs, autoplay=False)\n", " if video_ids[i][0] == 'Bilibili':\n", " print(f'Video available at https://www.bilibili.com/video/{video.id}')\n", " elif video_ids[i][0] == 'Osf':\n", " print(f'Video available at https://osf.io/{video.id}')\n", " display(video)\n", " tab_contents.append(out)\n", " return tab_contents\n", "\n", "\n", "video_ids = [('Youtube', 'luHL-mO5S1w'), ('Bilibili', 'BV1AV411r75K')]\n", "tab_contents = display_videos(video_ids, W=730, H=410)\n", "tabs = widgets.Tab()\n", "tabs.children = tab_contents\n", "for i in 
range(len(tab_contents)):\n", " tabs.set_title(i, video_ids[i][0])\n", "display(tabs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_STDP_Video\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Model of STDP\n", "\n", "The phenomenology of STDP is generally described as a biphasic, exponentially decaying function. That is, the instantaneous change in weights is given by:\n", "\n", "\\begin{eqnarray}\n", "& \\Delta W &=& A_+ e^{ (t_{pre}-t_{post})/\\tau_+} &\\text{if} \\hspace{5mm} t_{post} > t_{pre}& \\\\\n", "& \\Delta W &=& -A_- e^{- (t_{pre}-t_{post})/\\tau_-} &\\text{if} \\hspace{5mm} t_{post} < t_{pre}&\n", "\\end{eqnarray}\n", "\n", "where $\\Delta W$ denotes the change in the synaptic weight, $A_+$ and $A_-$ determine the maximum amount of synaptic modification (which occurs when the timing difference between presynaptic and postsynaptic spikes is close to zero), and $\\tau_+$ and $\\tau_-$ determine the ranges of pre-to-postsynaptic interspike intervals over which synaptic strengthening or weakening occurs. Thus, $\\Delta W > 0$ corresponds to the postsynaptic neuron spiking after the presynaptic neuron.\n", "\n", "This model captures the phenomenon that repeated occurrences of presynaptic spikes within a few milliseconds **before** postsynaptic action potentials lead to long-term potentiation (LTP) of the synapse, whereas repeated occurrences of presynaptic spikes **after** the postsynaptic ones lead to long-term depression (LTD) of the same synapse.\n", "\n", "The latency between presynaptic and postsynaptic spikes ($\\Delta t$) is defined as:\n", "\n", "\\begin{equation}\n", "\\Delta t = t_{\\rm pre} - t_{\\rm post}\n", "\\end{equation}\n", "\n", "where $t_{\\rm pre}$ and $t_{\\rm post}$ are the timings of the presynaptic and postsynaptic spikes, respectively." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Complete the following code to set the STDP parameters and plot the STDP function. Note that for simplicity, we assume **$\\tau_{+} = \\tau_{-} = \\tau_{\\rm stdp}$**."
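] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "As a quick sanity check of the expressions above (a worked example using the default values returned by `default_pars_STDP()`, i.e., $A_+ = 0.008$, $A_- = 1.1\\,A_+ = 0.0088$, and $\\tau_{\\rm stdp} = 20$ ms): a presynaptic spike arriving $10$ ms **before** the postsynaptic spike ($\\Delta t = t_{\\rm pre} - t_{\\rm post} = -10$ ms) gives $\\Delta W = A_+ e^{-10/20} \\approx 0.0049$ (LTP), whereas a presynaptic spike arriving $10$ ms **after** the postsynaptic spike ($\\Delta t = +10$ ms) gives $\\Delta W = -A_- e^{-10/20} \\approx -0.0053$ (LTD)."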
] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Coding Exercise 1A: Compute the STDP changes $\\Delta W$\n", "\n", "Note, as shown above, the expression of $\\Delta W$ is different for $t_{post}>t_{pre}$ and $t_{post} 0] = ...\n", "\n", " return dW\n", "\n", "\n", "pars = default_pars_STDP()\n", "# Get parameters\n", "A_plus, A_minus, tau_stdp = pars['A_plus'], pars['A_minus'], pars['tau_stdp']\n", "# pre_spike time - post_spike time\n", "time_diff = np.linspace(-5 * tau_stdp, 5 * tau_stdp, 50)\n", "\n", "dW = Delta_W(pars, A_plus, A_minus, tau_stdp)\n", "mySTDP_plot(A_plus, A_minus, tau_stdp, time_diff, dW)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {} }, "source": [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial4_Solution_54b83ed8.py)\n", "\n", "*Example output:*\n", "\n", "Solution hint\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Compute_STDP_changes_Exercise\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Keeping track of pre- and postsynaptic spikes\n", "Since a neuron will receive numerous presynaptic spike inputs, in order to implement STDP by taking into account different synapses, we first have to keep track of the pre- and postsynaptic spike times throughout the simulation.\n", "\n", "A convenient way to do this is to define the following equation for each postsynaptic neuron:\n", "\n", "\\begin{equation}\n", "\\tau_{-} \\frac{dM}{dt} = -M\n", "\\end{equation}\n", "\n", "and whenever the postsynaptic neuron spikes,\n", "\n", "\\begin{equation}\n", "M(t) = M(t) - A_{-}\n", "\\end{equation}\n", "\n", "This way $M(t)$ tracks the number of postsynaptic spikes over the timescale $\\tau_{-}$.\n", "\n", "Similarly, for each presynaptic neuron, we define:\n", "\n", "\\begin{equation}\n", "\\tau_{+} \\frac{dP}{dt} = -P\n", "\\end{equation}\n", "\n", "and whenever there is spike on the presynaptic neuron,\n", "\n", "\\begin{equation}\n", "P(t) = P(t) + A_{+}\n", "\\end{equation}\n", "\n", "The variables $M(t)$ and $P(t)$ are very similar to the equations for the synaptic conductances, i.e., $g_{i}(t)$, except that they are used to keep track of pre- and postsynaptic spike times on a much longer timescale. Note that, $M(t)$ is always negative, and $P(t)$ is always positive. You can probably already guess that $M$ is used to induce LTD and $P$ to induce LTP because they are updated by $A_{-}$ and $A_{+}$, respectively.\n", "\n", "**Important note:** $P(t)$ depends on the presynaptic spike times. If we know the presynaptic spike times, $P$ can be generated before simulating the postsynaptic neuron and the corresponding STDP weights." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Visualization of $P$\n", "Here, we will consider a scenario in which there is a single postsynaptic neuron connected to $N$ presynaptic neurons.\n", "\n", "For instance, we have one postsynaptic neuron which receives Poisson type spiking inputs from five presynaptic neurons.\n", "\n", "We can simulate $P$ for each one of the presynaptic neurons." 
] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Coding Exercise 1B: Compute $dP$\n", "\n", "Here, yet again, we use the Euler scheme, which has been introduced several times in the previous tutorials.\n", "\n", "Similar to the dynamics of the membrane potential in the LIF model, in a time step $dt$, $P(t)$ will decrease by an amount of $\\displaystyle{\\frac{dt}{\\tau_+}P(t)}$. Whereas, if a presynaptic spike arrives, $P(t)$ will instantaneously increase by an amount of $A_+$. Therefore,\n", "\n", "
\n", "\n", "\\begin{equation}\n", "dP = -\\displaystyle{\\frac{dt}{\\tau_+}P[t]} + A_+\\cdot \\text{sp_or_not}[t+dt]\n", "\\end{equation}\n", "\n", "
\n", "\n", "where the $\\text{sp_or_not}$ is a binary variable, i.e., $\\text{sp_or_not}=1$ if there is a spike within $dt$, and $\\text{sp_or_not}=0$ otherwise." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [ "def generate_P(pars, pre_spike_train_ex):\n", " \"\"\"\n", " track of pre-synaptic spikes\n", "\n", " Args:\n", " pars : parameter dictionary\n", " pre_spike_train_ex : binary spike train input from\n", " presynaptic excitatory neuron\n", "\n", " Returns:\n", " P : LTP ratio\n", " \"\"\"\n", "\n", " # Get parameters\n", " A_plus, tau_stdp = pars['A_plus'], pars['tau_stdp']\n", " dt, range_t = pars['dt'], pars['range_t']\n", " Lt = range_t.size\n", "\n", " # Initialize\n", " P = np.zeros(pre_spike_train_ex.shape)\n", " for it in range(Lt - 1):\n", " #######################################################################\n", " ## TODO for students: compute dP, then remove the NotImplementedError #\n", " # Fill out when you finish the function\n", " raise NotImplementedError(\"Student exercise: compute P, the change of presynaptic spike\")\n", " #######################################################################\n", " # Calculate the delta increment dP\n", " dP = ...\n", " # Update P\n", " P[:, it + 1] = P[:, it] + dP\n", "\n", " return P\n", "\n", "\n", "pars = default_pars_STDP(T=200., dt=1.)\n", "pre_spike_train_ex = Poisson_generator(pars, rate=10, n=5, myseed=2020)\n", "P = generate_P(pars, pre_spike_train_ex)\n", "my_example_P(pre_spike_train_ex, pars, P)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {} }, "source": [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial4_Solution_4e3afedb.py)\n", "\n", "*Example output:*\n", "\n", "Solution hint\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Compute_dP_Exercise\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "---\n", "# Section 2: Implementation of STDP\n", "\n", "Finally, to implement STDP in spiking networks, we will change the value of the peak synaptic conductance based on the presynaptic and postsynaptic timing, thus using the variables $P(t)$ and $M(t)$.\n", "\n", "Each synapse $i$ has its own peak synaptic conductance ($\\bar g_i$), which may vary between $[0, \\bar g_{max}]$, and will be modified depending on the presynaptic and postsynaptic timing.\n", "\n", "* When the $i_{th}$ presynaptic neuron elicits a spike, its corresponding peak conductance is updated according to the following equation:\n", "\n", " \\begin{equation}\n", " \\bar g_i = \\bar g_i + M(t)\\bar g_{max}\n", " \\end{equation}\n", "\n", " Note that, $M(t)$ tracks the time since the last postsynaptic potential and is always negative. 
Hence, if *the postsynaptic neuron spikes shortly before the presynaptic neuron*, the above equation shows that the peak conductance will decrease.\n", "\n", "* When the postsynaptic neuron spikes, the peak conductance of **each** synapse is updated according to:\n", "\n", " \\begin{equation}\n", " \\bar g_i = \\bar g_i + P_i(t)\\bar g_{max}, \\forall i\n", " \\end{equation}\n", "\n", " Note that, $P_i(t)$ tracks the time since the last spike of $i_{th}$ pre-synaptic neuron and is always positive.\n", "\n", " Thus, the equation given above shows that if the presynaptic neuron spikes before the postsynaptic neuron, its peak conductance will increase." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "### LIF neuron connected with synapses that show STDP\n", "\n", "In the following exercise, we connect $N$ presynaptic neurons to a single postsynaptic neuron. We do not need to simulate the dynamics of each presynaptic neuron as we are only concerned about their spike times. So, we will generate $N$ Poisson type spikes. Here, we will assume that all these inputs are excitatory.\n", "\n", "We need to simulate the dynamics of the postsynaptic neuron as we do not know its spike times. We model the postsynaptic neuron as an LIF neuron receiving only excitatory inputs.\n", "\n", "\\begin{equation}\n", "\\tau_m\\frac{dV}{dt} = -(V-E_L) - g_E(t) (V(t)-E_E)\\,\n", "\\end{equation}\n", "\n", "where the total excitatory synaptic conductance $g_{E}(t)$ is given by:\n", "\n", "\\begin{equation}\n", "g_E(t) = \\sum_{i=1}^{N} g_i(t)\n", "\\end{equation}\n", "\n", "While simulating STDP, it is important to make sure that $\\bar g_i$ never goes outside of its bounds." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Function for LIF neuron with STDP synapses\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Function for LIF neuron with STDP synapses\n", "\n", "def run_LIF_cond_STDP(pars, pre_spike_train_ex):\n", " \"\"\"\n", " conductance-based LIF dynamics\n", "\n", " Args:\n", " pars : parameter dictionary\n", " pre_spike_train_ex : spike train input from presynaptic excitatory neuron\n", "\n", " Returns:\n", " rec_spikes : spike times\n", " rec_v : mebrane potential\n", " gE : postsynaptic excitatory conductance\n", " \"\"\"\n", "\n", " # Retrieve parameters\n", " V_th, V_reset = pars['V_th'], pars['V_reset']\n", " tau_m = pars['tau_m']\n", " V_init, V_L = pars['V_init'], pars['V_L']\n", " gE_bar, VE, tau_syn_E = pars['gE_bar'], pars['VE'], pars['tau_syn_E']\n", " gE_init = pars['gE_init']\n", " tref = pars['tref']\n", " A_minus, tau_stdp = pars['A_minus'], pars['tau_stdp']\n", " dt, range_t = pars['dt'], pars['range_t']\n", " Lt = range_t.size\n", "\n", " P = generate_P(pars, pre_spike_train_ex)\n", "\n", " # Initialize\n", " tr = 0.\n", " v = np.zeros(Lt)\n", " v[0] = V_init\n", " M = np.zeros(Lt)\n", " gE = np.zeros(Lt)\n", " gE_bar_update = np.zeros(pre_spike_train_ex.shape)\n", " gE_bar_update[:, 0] = gE_init # note: gE_bar is the maximum value\n", "\n", " # simulation\n", " rec_spikes = [] # recording spike times\n", " for it in range(Lt - 1):\n", " if tr > 0:\n", " v[it] = V_reset\n", " tr = tr - 1\n", " elif v[it] >= V_th: # reset voltage and record spike event\n", " rec_spikes.append(it)\n", " v[it] = V_reset\n", " M[it] = M[it] - A_minus\n", " gE_bar_update[:, it] = gE_bar_update[:, it] + P[:, it] * gE_bar\n", " id_temp = 
gE_bar_update[:, it] > gE_bar\n", " gE_bar_update[id_temp, it] = gE_bar\n", " tr = tref / dt\n", "\n", " # update the synaptic conductance\n", " M[it + 1] = M[it] - dt / tau_stdp * M[it]\n", " gE[it + 1] = gE[it] - (dt / tau_syn_E) * gE[it] + (gE_bar_update[:, it] * pre_spike_train_ex[:, it]).sum()\n", " gE_bar_update[:, it + 1] = gE_bar_update[:, it] + M[it]*pre_spike_train_ex[:, it]*gE_bar\n", " id_temp = gE_bar_update[:, it + 1] < 0\n", " gE_bar_update[id_temp, it + 1] = 0.\n", "\n", " # calculate the increment of the membrane potential\n", " dv = (-(v[it] - V_L) - gE[it + 1] * (v[it] - VE)) * (dt / tau_m)\n", "\n", " # update membrane potential\n", " v[it + 1] = v[it] + dv\n", "\n", " rec_spikes = np.array(rec_spikes) * dt\n", "\n", " return v, rec_spikes, gE, P, M, gE_bar_update\n", "\n", "print(help(run_LIF_cond_STDP))" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Evolution of excitatory synaptic conductance\n", "\n", "In the following, we will simulate an LIF neuron receiving input from $N=300$ presynaptic neurons." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [ "pars = default_pars_STDP(T=200., dt=1.) # Simulation duration 200 ms\n", "pars['gE_bar'] = 0.024 # max synaptic conductance\n", "pars['gE_init'] = 0.024 # initial synaptic conductance\n", "pars['VE'] = 0. # [mV] Synapse reversal potential\n", "pars['tau_syn_E'] = 5. # [ms] EPSP time constant\n", "\n", "# generate Poisson type spike trains\n", "pre_spike_train_ex = Poisson_generator(pars, rate=10, n=300, myseed=2020)\n", "# simulate the LIF neuron and record the synaptic conductance\n", "v, rec_spikes, gE, P, M, gE_bar_update = run_LIF_cond_STDP(pars,\n", " pre_spike_train_ex)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Figures of the evolution of synaptic conductance\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Run this cell to see the figures!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Figures of the evolution of synaptic conductance\n", "\n", "# @markdown Run this cell to see the figures!\n", "plt.figure(figsize=(12., 8))\n", "plt.subplot(321)\n", "dt, range_t = pars['dt'], pars['range_t']\n", "if rec_spikes.size:\n", " sp_num = (rec_spikes / dt).astype(int) - 1\n", " v[sp_num] += 10 # add artificial spikes\n", "plt.plot(pars['range_t'], v, 'k')\n", "plt.xlabel('Time (ms)')\n", "plt.ylabel('V (mV)')\n", "\n", "plt.subplot(322)\n", "# Plot the sample presynaptic spike trains\n", "my_raster_plot(pars['range_t'], pre_spike_train_ex, 10)\n", "\n", "plt.subplot(323)\n", "plt.plot(pars['range_t'], M, 'k')\n", "plt.xlabel('Time (ms)')\n", "plt.ylabel('M')\n", "\n", "plt.subplot(324)\n", "for i in range(10):\n", " plt.plot(pars['range_t'], P[i, :])\n", "plt.xlabel('Time (ms)')\n", "plt.ylabel('P')\n", "\n", "plt.subplot(325)\n", "for i in range(10):\n", " plt.plot(pars['range_t'], gE_bar_update[i, :])\n", "plt.xlabel('Time (ms)')\n", "plt.ylabel(r'$\\bar g$')\n", "\n", "plt.subplot(326)\n", "plt.plot(pars['range_t'], gE, 'r')\n", "plt.xlabel('Time (ms)')\n", "plt.ylabel(r'$g_E$')\n", "\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Think! 
2A: Analyzing Synaptic Strength and Conductance Transformations\n", "* In the above, even though all the presynaptic neurons have the same average firing rate, many of the synapses seem to have been weakened? Did you expect that?\n", "\n", "* Total synaptic conductance is fluctuating over time. How do you expect $g_E$ to fluctuate if synapses did not show any STDP like behavior?\n", "\n", "* Do synaptic weights ever reach a stationary state when synapses show STDP?" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {} }, "source": [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial4_Solution_310f9f8f.py)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Analyzing_synaptic_strength_Discussion\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Distribution of synaptic weight\n", "From the example given above, we get an idea that some synapses depotentiate, but what is the distribution of the synaptic weights when synapses show STDP?\n", "\n", "In fact, it is possible that even the synaptic weight distribution itself is a time-varying quantity. So, we would like to know how the distribution of synaptic weights evolves as a function of time.\n", "\n", "To get a better estimate of the weight distribution and its time evolution, we will increase the presynaptic firing rate to $15$Hz and simulate the postsynaptic neuron for $120$s." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Functions for simulating a LIF neuron with STDP synapses\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Functions for simulating a LIF neuron with STDP synapses\n", "\n", "\n", "def example_LIF_STDP(inputrate=15., Tsim=120000.):\n", " \"\"\"\n", " Simulation of a LIF model with STDP synapses\n", "\n", " Args:\n", " intputrate : The rate used for generate presynaptic spike trains\n", " Tsim : Total simulation time\n", "\n", " output:\n", " Interactive demo, Visualization of synaptic weights\n", " \"\"\"\n", "\n", " pars = default_pars_STDP(T=Tsim, dt=1.)\n", " pars['gE_bar'] = 0.024\n", " pars['gE_init'] = 0.014 # initial synaptic conductance\n", " pars['VE'] = 0. # [mV]\n", " pars['tau_syn_E'] = 5. 
# [ms]\n", "\n", " starttime = time.perf_counter()\n", " pre_spike_train_ex = Poisson_generator(pars, rate=inputrate, n=300,\n", " myseed=2020) # generate Poisson trains\n", " v, rec_spikes, gE, P, M, gE_bar_update = run_LIF_cond_STDP(pars,\n", " pre_spike_train_ex) # simulate LIF neuron with STDP\n", " gbar_norm = gE_bar_update/pars['gE_bar'] # calculate the ratio of the synaptic conductance\n", " endtime = time.perf_counter()\n", " timecost = (endtime - starttime) / 60.\n", "\n", " print('Total simulation time is %.3f min' % timecost)\n", "\n", " my_layout.width = '620px'\n", "\n", " @widgets.interact(\n", " sample_time=widgets.FloatSlider(0.5, min=0., max=1., step=0.1,\n", " layout=my_layout)\n", " )\n", "\n", "\n", " def my_visual_STDP_distribution(sample_time=0.0):\n", " sample_time = int(sample_time * pars['range_t'].size) - 1\n", " sample_time = sample_time * (sample_time > 0)\n", "\n", " plt.figure(figsize=(8, 8))\n", " ax1 = plt.subplot(211)\n", " for i in range(50):\n", " ax1.plot(pars['range_t'][::1000] / 1000., gE_bar_update[i, ::1000], lw=1., alpha=0.7)\n", "\n", " ax1.axvline(1e-3 * pars['range_t'][sample_time], 0., 1., color='k', ls='--')\n", " ax1.set_ylim(0, 0.025)\n", " ax1.set_xlim(-2, 122)\n", " ax1.set_xlabel('Time (s)')\n", " ax1.set_ylabel(r'$\\bar{g}$')\n", "\n", " bins = np.arange(-.05, 1.05, .05)\n", " g_dis, _ = np.histogram(gbar_norm[:, sample_time], bins)\n", " ax2 = plt.subplot(212)\n", " ax2.bar(bins[1:], g_dis, color='b', alpha=0.5, width=0.05)\n", " ax2.set_xlim(-0.1, 1.1)\n", " ax2.set_xlabel(r'$\\bar{g}/g_{\\mathrm{max}}$')\n", " ax2.set_ylabel('Number')\n", " ax2.set_title(('Time = %.1f s' % (1e-3 * pars['range_t'][sample_time])),\n", " fontweight='bold')\n", " plt.show()\n", "\n", "\n", "print(help(example_LIF_STDP))" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "### Interactive Demo 2: Example of an LIF model with STDP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Make sure you execute this cell to enable the widget!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @markdown Make sure you execute this cell to enable the widget!\n", "example_LIF_STDP(inputrate=15)" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Think! 2B: Effects of Increased Presynaptic Firing Rate\n", "Increase the firing rate (i.e., 30 Hz) of presynaptic neurons, and investigate the effect on the dynamics of synaptic weight distribution." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {} }, "source": [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial4_Solution_baab774a.py)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_LIF_and_STDP_Interactive_Demo_and_Discussion\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 3: Effect of input correlations\n", "\n", "Thus far, we assumed that the input population was uncorrelated. 
What do you think will happen if presynaptic neurons were correlated?\n", "\n", "In the following, we will modify the input such that first $L$ neurons have identical spike trains while the remaining inputs are uncorrelated. This is a highly simplified model of introducing correlations. You can try to code your own model of correlated spike trains." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Function for LIF neuron with STDP synapses receiving correlated inputs\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Function for LIF neuron with STDP synapses receiving correlated inputs\n", "\n", "\n", "def example_LIF_STDP_corrInput(inputrate=20., Tsim=120000.):\n", " \"\"\"\n", " A LIF model equipped with STDP synapses, receiving correlated inputs\n", "\n", " Args:\n", " intputrate : The rate used for generate presynaptic spike trains\n", " Tsim : Total simulation time\n", "\n", " Returns:\n", " Interactive demo: Visualization of synaptic weights\n", " \"\"\"\n", "\n", " np.random.seed(2020)\n", " pars = default_pars_STDP(T=Tsim, dt=1.)\n", " pars['gE_bar'] = 0.024\n", " pars['VE'] = 0. # [mV]\n", " pars['gE_init'] = 0.024 * np.random.rand(300) # initial synaptic conductance\n", " pars['tau_syn_E'] = 5. # [ms]\n", "\n", " starttime = time.perf_counter()\n", " pre_spike_train_ex = Poisson_generator(pars, rate=inputrate, n=300, myseed=2020)\n", " for i_pre in range(50):\n", " pre_spike_train_ex[i_pre, :] = pre_spike_train_ex[0, :] # simple way to set input correlated\n", " v, rec_spikes, gE, P, M, gE_bar_update = run_LIF_cond_STDP(pars, pre_spike_train_ex) # simulate LIF neuron with STDP\n", " gbar_norm = gE_bar_update / pars['gE_bar'] # calculate the ratio of the synaptic conductance\n", " endtime = time.perf_counter()\n", " timecost = (endtime - starttime) / 60.\n", "\n", " print(f'Total simulation time is {timecost:.3f} min')\n", "\n", " my_layout.width = '620px'\n", "\n", " @widgets.interact(\n", " sample_time=widgets.FloatSlider(0.5, min=0., max=1., step=0.05,\n", " layout=my_layout)\n", " )\n", "\n", "\n", " def my_visual_STDP_distribution(sample_time=0.0):\n", "\n", " sample_time = int(sample_time * pars['range_t'].size) - 1\n", " sample_time = sample_time*(sample_time > 0)\n", " figtemp = plt.figure(figsize=(8, 8))\n", " ax1 = plt.subplot(211)\n", " for i in range(50):\n", " ax1.plot(pars['range_t'][::1000] / 1000.,\n", " gE_bar_update[i, ::1000], lw=1., color='r', alpha=0.7, zorder=2)\n", "\n", " for i in range(50):\n", " ax1.plot(pars['range_t'][::1000] / 1000.,\n", " gE_bar_update[i + 50, ::1000], lw=1., color='b',\n", " alpha=0.5, zorder=1)\n", "\n", " ax1.axvline(1e-3 * pars['range_t'][sample_time], 0., 1.,\n", " color='k', ls='--', zorder=2)\n", " ax1.set_ylim(0, 0.025)\n", " ax1.set_xlim(-2, 122)\n", " ax1.set_xlabel('Time (s)')\n", " ax1.set_ylabel(r'$\\bar{g}$')\n", " legend=ax1.legend(['Correlated input', 'Uncorrelated iput'], fontsize=18,\n", " loc=[0.92, -0.6], frameon=False)\n", " for color, text, item in zip(['r', 'b'], legend.get_texts(), legend.legend_handles):\n", " text.set_color(color)\n", " item.set_visible(False)\n", "\n", " bins = np.arange(-.05, 1.05, .05)\n", " g_dis_cc, _ = np.histogram(gbar_norm[:50, sample_time], bins)\n", " g_dis_dp, _ = np.histogram(gbar_norm[50:, sample_time], bins)\n", "\n", " ax2 = plt.subplot(212)\n", " ax2.bar(bins[1:], g_dis_cc, color='r', alpha=0.5, width=0.05)\n", " ax2.bar(bins[1:], g_dis_dp, 
color='b', alpha=0.5, width=0.05)\n", " ax2.set_xlim(-0.1, 1.1)\n", " ax2.set_xlabel(r'$\\bar{g}/g_{\\mathrm{max}}$')\n", " ax2.set_ylabel('Number')\n", " ax2.set_title(('Time = %.1f s' % (1e-3 * pars['range_t'][sample_time])),\n", " fontweight='bold')\n", " plt.show()\n", "\n", "\n", "print(help(example_LIF_STDP_corrInput))" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "### Interactive Demo 3: LIF model with plastic synapses receiving correlated inputs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Make sure you execute this cell to enable the widget!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @markdown Make sure you execute this cell to enable the widget!\n", "example_LIF_STDP_corrInput(inputrate=10.0)" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "**Why do the weights of uncorrelated neurons decrease when synapses show STDP?**\n", "\n", "Above, we see that the synapses of the correlated neurons (which spike together) are almost unaffected, while the weights of the other neurons diminish. Why does this happen?\n", "\n", "The reason is that the correlated presynaptic neurons have a higher chance of eliciting a spike in the postsynaptic neuron, and that creates a $pre \\rightarrow post$ pairing of spikes." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Think! 3: Applications of STDP\n", "\n", "* Modify the code above to create two groups of correlated presynaptic neurons, and test what happens to the weight distribution.\n", "\n", "* How can the above observations be used for unsupervised learning? Can you imagine how we would have to train a neuronal model equipped with the STDP rule to identify input patterns?\n", "\n", "* What else can be done with this type of plasticity?\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {} }, "source": [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial4_Solution_ba8d9ea4.py)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_LIF_plasticity_correlated_inputs_Interactive_Demo_and_Discussion\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "---\n", "# Summary\n", "\n", "Hooray! You have just finished this loooong day! In this tutorial, we covered the concept of **spike-timing dependent plasticity (STDP)**.\n", "\n", "We managed to:\n", "\n", "- build a model of a synapse that shows STDP.\n", "\n", "- study how correlations in input spike trains influence the distribution of synaptic weights.\n", "\n", "Modeling the presynaptic inputs as Poisson type spike trains, we simulated an LIF neuron with synapses equipped with STDP. We also studied the effect of correlated inputs on the synaptic strength!"
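] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "As a compact recap, below is a minimal, self-contained sketch of the pair-based STDP window used throughout this tutorial. The helper name `stdp_dW` and the example timing values are our own choices for illustration; the default parameter values match those returned by `default_pars_STDP()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [ "# Recap sketch (illustrative helper, not part of the tutorial code):\n", "# the pair-based STDP window, Delta W as a function of delta_t = t_pre - t_post.\n", "def stdp_dW(delta_t, A_plus=0.008, A_minus=0.0088, tau_stdp=20.):\n", "  \"\"\"Return the weight change dW for delta_t = t_pre - t_post (in ms).\"\"\"\n", "  delta_t = np.asarray(delta_t, dtype=float)\n", "  dW = np.where(delta_t <= 0,\n", "                A_plus * np.exp(delta_t / tau_stdp),     # pre before post -> LTP\n", "                -A_minus * np.exp(-delta_t / tau_stdp))  # post before pre -> LTD\n", "  return dW\n", "\n", "\n", "# A pre spike 10 ms before the post spike potentiates; 10 ms after depresses\n", "print(stdp_dW([-10., 10.]))"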
] } ], "metadata": { "colab": { "collapsed_sections": [], "include_colab_link": true, "name": "W2D3_Tutorial4", "provenance": [], "toc_visible": true }, "kernel": { "display_name": "Python 3", "language": "python", "name": "python3" }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.17" } }, "nbformat": 4, "nbformat_minor": 0 }