{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {},
"id": "view-in-github"
},
"source": [
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"# Tutorial 3: Combining determinism and stochasticity\n",
"**Week 2, Day 2: Linear Systems**\n",
"\n",
"**By Neuromatch Academy**\n",
"\n",
"**Content Creators**: Bing Wen Brunton, Alice Schwarze, Biraj Pandey\n",
"\n",
"**Content Reviewers**: Norma Kuhn, John Butler, Matthew Krause, Ella Batty, Richard Gao, Michael Waskom\n",
"\n",
"**Post-Production Team:** Gagana B, Spiros Chavlis"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Tutorial Objectives\n",
"\n",
"*Estimated timing of tutorial: 45 minutes*\n",
"\n",
"Time-dependent processes rule the world. \n",
"\n",
"Now that we've spent some time familiarizing ourselves with the behavior of such systems when their trajectories are (1) entirely predictable and deterministic, or (2) governed by random processes, it's time to consider that neither is sufficient to describe neuroscience. Instead, we are often faced with processes for which we know some dynamics, but there are some random aspects as well. We call these **dynamical systems with stochasticity**.\n",
"\n",
"This tutorial builds on that knowledge to help us gain some intuition for how deterministic and stochastic processes can both be part of a dynamical system by:\n",
"\n",
"* Simulating random walks \n",
"* Investigating the mean and variance of an Ornstein-Uhlenbeck (OU) process\n",
"* Quantifying the OU process's behavior at equilibrium."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"\n",
"# @title Tutorial slides\n",
"# @markdown These are the slides for the videos in all tutorials today\n",
"from IPython.display import IFrame\n",
"link_id = \"snv4m\"\n",
"print(f\"If you want to download the slides: https://osf.io/download/{link_id}/\")\n",
"IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Imports\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Figure Settings\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Figure Settings\n",
"import ipywidgets as widgets # interactive display\n",
"%config InlineBackend.figure_format = 'retina'\n",
"plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/main/nma.mplstyle\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plotting Functions\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Plotting Functions\n",
"\n",
"def plot_random_walk_sims(sims, nsims=10):\n",
" \"\"\"Helper for exercise 3A\"\"\"\n",
" fig = plt.figure()\n",
" plt.plot(sims[:nsims, :].T)\n",
" plt.xlabel('time')\n",
" plt.ylabel('position x')\n",
" plt.show()\n",
"\n",
"def plot_mean_var_by_timestep(mu, var):\n",
" \"\"\"Helper function for exercise 3A.2\"\"\"\n",
" fig, (ah1, ah2) = plt.subplots(2)\n",
"\n",
" # plot mean of distribution as a function of time\n",
" ah1.plot(mu)\n",
" ah1.set(ylabel='mean')\n",
" ah1.set_ylim([-5, 5])\n",
"\n",
" # plot variance of distribution as a function of time\n",
" ah2.plot(var)\n",
" ah2.set(xlabel='time')\n",
" ah2.set(ylabel='variance')\n",
"\n",
" plt.show()\n",
"\n",
"def plot_ddm(t, x, xinfty, lam, x0):\n",
" fig = plt.figure()\n",
"\n",
" plt.plot(t, xinfty * (1 - lam**t) + x0 * lam**t, 'r')\n",
" plt.plot(t, x, 'k.') # simulated data pts\n",
"\n",
" plt.xlabel('t')\n",
" plt.ylabel('x')\n",
"\n",
" plt.legend(['deterministic solution', 'simulation'])\n",
" plt.show()\n",
"\n",
"def var_comparison_plot(empirical, analytical):\n",
" fig = plt.figure()\n",
" plt.plot(empirical, analytical, '.', markersize=15)\n",
" plt.xlabel('empirical equilibrium variance')\n",
" plt.ylabel('analytic equilibrium variance')\n",
" plt.plot(np.arange(8), np.arange(8), 'k', label='45 deg line')\n",
" plt.legend()\n",
"\n",
" plt.grid(True)\n",
" plt.show()\n",
"\n",
"def plot_dynamics(x, t, lam, xinfty=0):\n",
" \"\"\" Plot the dynamics \"\"\"\n",
" fig = plt.figure()\n",
" plt.title(r'$\\lambda=%0.1f$' % lam, fontsize=16)\n",
" x0 = x[0]\n",
" plt.plot(t, xinfty + (x0 - xinfty) * lam**t, 'r', label='analytic solution')\n",
" plt.plot(t, x, 'k.', label='simulation') # simulated data pts\n",
" plt.ylim(0, x0+1)\n",
"\n",
" plt.xlabel('t')\n",
" plt.ylabel('x')\n",
" plt.legend()\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 1: Random Walks\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Video 1: E. coli and Random Walks\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# @title Video 1: E. coli and Random Walks\n",
"from ipywidgets import widgets\n",
"from IPython.display import YouTubeVideo\n",
"from IPython.display import IFrame\n",
"from IPython.display import display\n",
"\n",
"\n",
"class PlayVideo(IFrame):\n",
" def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
" self.id = id\n",
" if source == 'Bilibili':\n",
" src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
" elif source == 'Osf':\n",
" src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
" super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
"\n",
"\n",
"def display_videos(video_ids, W=400, H=300, fs=1):\n",
" tab_contents = []\n",
" for i, video_id in enumerate(video_ids):\n",
" out = widgets.Output()\n",
" with out:\n",
" if video_ids[i][0] == 'Youtube':\n",
" video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
" height=H, fs=fs, rel=0)\n",
" print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
" else:\n",
" video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
" height=H, fs=fs, autoplay=False)\n",
" if video_ids[i][0] == 'Bilibili':\n",
" print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
" elif video_ids[i][0] == 'Osf':\n",
" print(f'Video available at https://osf.io/{video.id}') \n",
" display(video)\n",
" tab_contents.append(out)\n",
" return tab_contents\n",
"\n",
"\n",
"video_ids = [('Youtube', 'VHwTBCQJjfw'), ('Bilibili', 'BV1LC4y1h7gD')]\n",
"tab_contents = display_videos(video_ids, W=730, H=410)\n",
"tabs = widgets.Tab()\n",
"tabs.children = tab_contents\n",
"for i in range(len(tab_contents)):\n",
" tabs.set_title(i, video_ids[i][0])\n",
"display(tabs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"To begin, let's first take a gander at how life sometimes wanders around aimlessly. One of the simplest and best-studied living systems with interesting behaviors is the _E. coli_ bacterium, which is capable of navigating odor gradients on a substrate to seek a food source. Larger organisms (including flies, dogs, and blindfolded humans) sometimes use the same strategies to guide their decisions.\n",
"\n",
"Here, we will consider what the _E. coli_ does in the absence of food odors. What's the best strategy when one does not know where to head? Why, flail around randomly, of course!\n",
"\n",
"The **random walk** is exactly that --- at every time step, use a random process, like flipping a coin, to choose a new heading. Note that this process is closely related to _Brownian motion_, so you may sometimes hear that terminology used as well."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Let's start with a **one-dimensional random walk**. A bacterium starts at $x=0$. At every time step, it flips a coin (a very small, microscopic coin of protein mintage), then heads left ($\\Delta x = -1$) or right ($\\Delta x = +1$) with equal probability. For instance, if at time step $1$ the result of the coin flip is to head right, then its position at that time step becomes $x_1 = x_0 + \\Delta x = 1.$ Continuing in this way, its position at time step $k+1$ is given by\n",
"\n",
"\\begin{equation}\n",
"x_{k+1} = x_k + \\Delta x\n",
"\\end{equation}\n",
"\n",
"We will simulate this process below and plot the position of the bacterium as a function of the time step."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" Execute to simulate a random walk\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @markdown Execute to simulate a random walk\n",
"# parameters of simulation\n",
"T = 100\n",
"t = np.arange(T)\n",
"x = np.zeros_like(t)\n",
"np.random.seed(2020) # set random seed\n",
"\n",
"# initial position\n",
"x[0] = 0\n",
"\n",
"# step forward in time\n",
"for k in range(len(t)-1):\n",
" # choose randomly between -1 and 1 (coin flip)\n",
" this_step = np.random.choice([-1,1])\n",
"\n",
" # make the step\n",
" x[k+1] = x[k] + this_step\n",
"\n",
"# plot this trajectory\n",
"fig = plt.figure()\n",
"plt.step(t, x)\n",
"plt.xlabel('time')\n",
"plt.ylabel('position x');"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 1A: Random walk simulation\n",
"\n",
"*Referred to in video as exercise 3A*\n",
"\n",
"In the previous plot, we assumed that the bacterium takes a step of size $1$ at every point in time. Let it take steps of different sizes!\n",
"\n",
"We will code a random walk where the steps are drawn from a normal distribution with mean $\\mu$ and standard deviation $\\sigma$. Instead of running one trajectory at a time, we will write our code so that we can simulate a large number of trajectories efficiently. We will combine this all into a function ``random_walk_simulator`` that efficiently generates $N$ random walks with $T$ time points each.\n",
"\n",
"We will plot 10 random walks for 10000 time steps each."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"def random_walk_simulator(N, T, mu=0, sigma=1):\n",
" '''Simulate N random walks for T time points. At each time point, the step\n",
" is drawn from a Gaussian distribution with mean mu and standard deviation\n",
" sigma.\n",
"\n",
" Args:\n",
" T (integer) : Duration of simulation in time steps\n",
" N (integer) : Number of random walks\n",
" mu (float) : mean of step distribution\n",
" sigma (float) : standard deviation of step distribution\n",
"\n",
" Returns:\n",
" (numpy array) : NxT array in which each row corresponds to trajectory\n",
" '''\n",
"\n",
" ###############################################################################\n",
" ## TODO: Code the simulated random steps to take\n",
" ## Hints: you can generate all the random steps in one go in an N x T matrix\n",
" raise NotImplementedError('Complete random_walk_simulator_function')\n",
" ###############################################################################\n",
" # generate all the random steps for all steps in all simulations in one go\n",
" # produces a N x T array\n",
" steps = np.random.normal(..., ..., size=(..., ...))\n",
"\n",
" # compute the cumulative sum of all the steps over the time axis\n",
" sim = np.cumsum(steps, axis=1)\n",
"\n",
" return sim\n",
"\n",
"\n",
"np.random.seed(2020) # set random seed\n",
"\n",
"# simulate 1000 random walks for 10000 time steps\n",
"sim = random_walk_simulator(1000, 10000, mu=0, sigma=1)\n",
"\n",
"# take a peek at the first 10 simulations\n",
"plot_random_walk_sims(sim, nsims=10)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_39e8cb46.py)\n",
"\n",
"*Example output:*\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"We see that the trajectories all look a little different from each other. But there are some general observations one can make: at the beginning almost all trajectories are very close to $x=0$, which is where our bacterium started. As time progresses, some trajectories move further and further away from the starting point. However, a lot of trajectories stay close to the starting point of $x=0$. \n",
"\n",
"Now let's take a look in the next cell at the distribution of bacteria positions at different points in time, analyzing all the trajectories we just generated above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" Execute to visualize distribution of bacteria positions\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @markdown Execute to visualize distribution of bacteria positions\n",
"fig = plt.figure()\n",
"# look at the distribution of positions at different times\n",
"for i, t in enumerate([1000,2500,10000]):\n",
"\n",
" # get mean and standard deviation of distribution at time t\n",
" mu = sim[:, t-1].mean()\n",
" sig2 = sim[:, t-1].std()\n",
"\n",
" # make a plot label\n",
" mytitle = r'$t=${time:d} ($\\mu=${mu:.2f}, $\\sigma=${var:.2f})'\n",
"\n",
" # plot histogram\n",
" plt.hist(sim[:,t-1],\n",
" color=['blue','orange','black'][i],\n",
" #make sure the histograms have the same bins!\n",
" bins=np.arange(-300,300,20),\n",
" # make histograms a little see-through\n",
" alpha=0.6,\n",
" # draw second histogram behind the first one\n",
" zorder=3-i,\n",
" label=mytitle.format(time=t, mu=mu, var=sig2))\n",
"\n",
" plt.xlabel('position x')\n",
"\n",
" # plot range\n",
" plt.xlim([-500, 250])\n",
"\n",
" # add legend\n",
" plt.legend(loc=2)\n",
"\n",
" # add title\n",
" plt.title(r'Distribution of trajectory positions at time $t$')"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"At the beginning of the simulation, the distribution of positions is sharply peaked about $0$. As time progresses, the distribution becomes wider, but its center stays close to $0$. In other words, the mean of the distribution is independent of time, but the variance and standard deviation of the distribution grow with time. Such a process is called a **diffusive process**."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 1B: Random walk mean & variance\n",
"\n",
"Compute and then plot the mean and variance of our bacterium's random walk as a function of time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "code",
"execution": {}
},
"outputs": [],
"source": [
"# Simulate random walks\n",
"np.random.seed(2020) # set random seed\n",
"sim = random_walk_simulator(5000, 1000, mu=0, sigma=1)\n",
"\n",
"##############################################################################\n",
"# TODO: Insert your code here to compute the mean and variance of trajectory positions\n",
"# at every time point:\n",
"raise NotImplementedError(\"Student exercise: need to compute mean and variance\")\n",
"##############################################################################\n",
"\n",
"# Compute mean\n",
"mu = ...\n",
"\n",
"# Compute variance\n",
"var = ...\n",
"\n",
"# Visualize\n",
"plot_mean_var_by_timestep(mu, var)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_796a6346.py)\n",
"\n",
"*Example output:*\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"The expected value of $x$ stays close to 0, even for very long random walks. Cool!\n",
"\n",
"The variance, on the other hand, clearly increases with time. In fact, the variance seems to increase linearly with time!"
]
},
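{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"This linear growth is exactly what the theory predicts: $x_t$ is a sum of $t$ independent steps, each with variance $\\sigma^2$, so $\\text{Var}(x_t) = \\sigma^2 t$. As a quick sanity check (a minimal standalone sketch, independent of the exercise code above), we can fit a line to the empirical variance and compare its slope to $\\sigma^2$:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"sigma = 1.0\n",
"N, T = 20000, 200  # many short walks for a stable variance estimate\n",
"\n",
"# simulate N Gaussian random walks of T steps each\n",
"steps = np.random.normal(0, sigma, size=(N, T))\n",
"walks = np.cumsum(steps, axis=1)\n",
"\n",
"# empirical Var(x_t) across walks, at times t = 1, ..., T\n",
"var = walks.var(axis=0)\n",
"t = np.arange(1, T + 1)\n",
"\n",
"# slope of the best-fit line should be close to sigma**2\n",
"slope = np.polyfit(t, var, 1)[0]\n",
"print(slope)\n",
"```"
]
},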
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Interactive Demo 1: Influence of Parameter Choice\n",
"\n",
" How do the parameters $\\mu$ and $\\sigma$ of the Gaussian distribution from which we choose the steps affect the mean and variance of the bacterium's random walk?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" Make sure you execute this cell to enable the widget!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title\n",
"\n",
"# @markdown Make sure you execute this cell to enable the widget!\n",
"\n",
"\n",
"@widgets.interact\n",
"def plot_gaussian(mean=(-0.5, 0.5, .02), std=(.5, 10, .5)):\n",
" sim = random_walk_simulator(5000, 1000, mu=mean, sigma=std)\n",
"\n",
" # compute the mean and variance of trajectory positions at every time point\n",
" mu = np.mean(sim, axis=0)\n",
" var = np.var(sim, axis=0)\n",
"\n",
" # make a figure\n",
" fig, (ah1, ah2) = plt.subplots(2)\n",
"\n",
" # plot mean of distribution as a function of time\n",
" ah1.plot(mu)\n",
" ah1.set(ylabel='mean')\n",
"\n",
" # plot variance of distribution as a function of time\n",
" ah2.plot(var)\n",
" ah2.set(xlabel='time')\n",
" ah2.set(ylabel='variance')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_57472c71.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 2: The Ornstein-Uhlenbeck (OU) process\n",
"\n",
"*Estimated timing to here from start of tutorial: 14 min*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Video 2: Combining Deterministic & Stochastic Processes\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# @title Video 2: Combining Deterministic & Stochastic Processes\n",
"from ipywidgets import widgets\n",
"from IPython.display import YouTubeVideo\n",
"from IPython.display import IFrame\n",
"from IPython.display import display\n",
"\n",
"\n",
"class PlayVideo(IFrame):\n",
" def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
" self.id = id\n",
" if source == 'Bilibili':\n",
" src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
" elif source == 'Osf':\n",
" src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
" super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
"\n",
"\n",
"def display_videos(video_ids, W=400, H=300, fs=1):\n",
" tab_contents = []\n",
" for i, video_id in enumerate(video_ids):\n",
" out = widgets.Output()\n",
" with out:\n",
" if video_ids[i][0] == 'Youtube':\n",
" video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
" height=H, fs=fs, rel=0)\n",
" print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
" else:\n",
" video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
" height=H, fs=fs, autoplay=False)\n",
" if video_ids[i][0] == 'Bilibili':\n",
" print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
" elif video_ids[i][0] == 'Osf':\n",
" print(f'Video available at https://osf.io/{video.id}') \n",
" display(video)\n",
" tab_contents.append(out)\n",
" return tab_contents\n",
"\n",
"\n",
"video_ids = [('Youtube', 'pDNfs5p38fI'), ('Bilibili', 'BV1o5411Y7N2')]\n",
"tab_contents = display_videos(video_ids, W=730, H=410)\n",
"tabs = widgets.Tab()\n",
"tabs.children = tab_contents\n",
"for i in range(len(tab_contents)):\n",
" tabs.set_title(i, video_ids[i][0])\n",
"display(tabs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"The random walk process we just explored is diffusive, and the distribution of possible trajectories _spreads_, taking on increasing variance with time. Even so, at least in one dimension, the mean remains close to the initial value (in the example above, 0).\n",
"\n",
"Our goal is now to build on this model to construct a **drift-diffusion** model (DDM). The DDM is a popular model of memory, which, as we all know, is often an exercise in hanging on to a value imperfectly. Decision-making and memory will be the topics for tomorrow, so here we build the mathematical foundations and develop some intuition for how such systems behave!"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"To build such a model, let's combine the random walk model with the first differential equation we explored in Tutorial 1. Although that model was written in continuous time as $\\dot{x} = a x$, here let's consider the discrete version of the same system and write:\n",
"\n",
"$x_{k+1} = \\lambda x_k$,\n",
"\n",
"whose solution can be written as\n",
"\n",
"$x_k = x_0 \\lambda^k$,\n",
"\n",
"where $x_0$ is the value of $x$ at time $t=0$.\n",
"\n",
"Now, let's simulate and plot the solution of the discrete version of our first differential equation from Tutorial 1 below. **Run the code below.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"execution": {}
},
"outputs": [],
"source": [
"# parameters\n",
"lam = 0.9\n",
"T = 100 # total Time duration in steps\n",
"x0 = 4. # initial condition of x at time 0\n",
"\n",
"# initialize variables\n",
"t = np.arange(0, T, 1.)\n",
"x = np.zeros_like(t)\n",
"x[0] = x0\n",
"\n",
"# Step through in time\n",
"for k in range(len(t)-1):\n",
" x[k+1] = lam * x[k]\n",
"\n",
"# plot x as it evolves in time\n",
"plot_dynamics(x, t, lam)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Notice that this process decays towards position $x=0$. We can make it decay towards any position by adding another parameter $x_\\infty$. The rate of decay is proportional to the difference between $x$ and $x_\\infty$. Our new system is\n",
"\n",
"\\begin{equation}\n",
"x_{k+1} = x_\\infty + \\lambda(x_k - x_{\\infty})\n",
"\\end{equation}\n",
"\n",
"We have to modify our analytic solution slightly to take this into account:\n",
"\n",
"\\begin{equation}\n",
"x_k = x_\\infty(1 - \\lambda^k) + x_0 \\lambda^k.\n",
"\\end{equation}\n",
"\n",
"Let's simulate and plot the dynamics of this process below. Hopefully, we will see that it starts at $x_0$ and decays towards $x_{\\infty}.$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"execution": {}
},
"outputs": [],
"source": [
"# parameters\n",
"lam = 0.9 # decay rate\n",
"T = 100 # total Time duration in steps\n",
"x0 = 4. # initial condition of x at time 0\n",
"xinfty = 1. # x drifts towards this value in long time\n",
"\n",
"# initialize variables\n",
"t = np.arange(0, T, 1.)\n",
"x = np.zeros_like(t)\n",
"x[0] = x0\n",
"\n",
"# Step through in time\n",
"for k in range(len(t)-1):\n",
" x[k+1] = xinfty + lam * (x[k] - xinfty)\n",
"\n",
"# plot x as it evolves in time\n",
"plot_dynamics(x, t, lam, xinfty)"
]
},
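{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As a quick check that the closed-form solution plotted in red really satisfies the recursion, substitute $x_k = x_\\infty(1 - \\lambda^k) + x_0 \\lambda^k$ into the update rule:\n",
"\n",
"\\begin{align}\n",
"x_{k+1} &= x_\\infty + \\lambda \\left( x_\\infty(1 - \\lambda^k) + x_0 \\lambda^k - x_\\infty \\right) \\\\\n",
"&= x_\\infty + \\lambda \\left( x_0 \\lambda^k - x_\\infty \\lambda^k \\right) \\\\\n",
"&= x_\\infty(1 - \\lambda^{k+1}) + x_0 \\lambda^{k+1},\n",
"\\end{align}\n",
"\n",
"which is the same formula with $k$ replaced by $k+1$."
]
},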
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Now we are ready to take this basic, deterministic difference equation and add a diffusion process on top of it! Fun times in Python land.\n",
"\n",
"As a point of terminology: this type of process is commonly known as a **drift-diffusion model** or **Ornstein-Uhlenbeck (OU) process**. The model is a combination of a _drift_ term toward $x_{\\infty}$ and a _diffusion_ term that walks randomly. You may sometimes see them written as continuous stochastic differential equations, but here we are doing the discrete version to maintain continuity in the tutorial. The discrete version of our OU process has the following form:\n",
"\n",
"\\begin{equation}\n",
"x_{k+1} = x_\\infty + \\lambda(x_k - x_{\\infty}) + \\sigma \\eta\n",
"\\end{equation}\n",
"\n",
"where $\\eta$ is sampled from a standard normal distribution ($\\mu=0, \\sigma=1$). \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 2: Drift-diffusion model\n",
"\n",
"Modify the code below so that each step through time has a _deterministic_ part (_hint_: exactly like the code above) plus a _random, diffusive_ part that is drawn from a normal distribution with standard deviation $\\sigma$ (``sig`` in the code). It will plot the dynamics of this process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"def simulate_ddm(lam, sig, x0, xinfty, T):\n",
" \"\"\"\n",
" Simulate the drift-diffusion model with given parameters and initial condition.\n",
" Args:\n",
" lam (scalar): decay rate\n",
" sig (scalar): standard deviation of normal distribution\n",
" x0 (scalar): initial condition (x at time 0)\n",
" xinfty (scalar): drift towards convergence in the limit\n",
" T (scalar): total duration of the simulation (in steps)\n",
"\n",
" Returns:\n",
" ndarray, ndarray: `x` for all simulation steps and the time `t` at each step\n",
" \"\"\"\n",
"\n",
" # initialize variables\n",
" t = np.arange(0, T, 1.)\n",
" x = np.zeros_like(t)\n",
" x[0] = x0\n",
"\n",
" # Step through in time\n",
" for k in range(len(t)-1):\n",
" ##############################################################################\n",
" ## TODO: Insert your code below then remove\n",
" raise NotImplementedError(\"Student exercise: need to implement simulation\")\n",
" ##############################################################################\n",
" # update x at time k+1 with a deterministic and a stochastic component\n",
" # hint: the deterministic component will be like above, and\n",
" # the stochastic component is drawn from a scaled normal distribution\n",
" x[k+1] = ...\n",
"\n",
" return t, x\n",
"\n",
"\n",
"lam = 0.9 # decay rate\n",
"sig = 0.1 # standard deviation of diffusive process\n",
"T = 500 # total Time duration in steps\n",
"x0 = 4. # initial condition of x at time 0\n",
"xinfty = 1. # x drifts towards this value in long time\n",
"\n",
"# Plot x as it evolves in time\n",
"np.random.seed(2020)\n",
"t, x = simulate_ddm(lam, sig, x0, xinfty, T)\n",
"plot_ddm(t, x, xinfty, lam, x0)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_48874ea8.py)\n",
"\n",
"*Example output:*\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Think! 2: Drift-Diffusion Simulation Observations\n",
"\n",
"Describe the behavior of your simulation by making some observations. How does it compare to the deterministic solution? How does it behave at the beginning of the simulation? At the end?"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_301f6f83.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 3: Variance of the OU process\n",
"\n",
"*Estimated timing to here from start of tutorial: 35 min*\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Video 3: Balance of Variances\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# @title Video 3: Balance of Variances\n",
"from ipywidgets import widgets\n",
"from IPython.display import YouTubeVideo\n",
"from IPython.display import IFrame\n",
"from IPython.display import display\n",
"\n",
"\n",
"class PlayVideo(IFrame):\n",
" def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
" self.id = id\n",
" if source == 'Bilibili':\n",
" src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
" elif source == 'Osf':\n",
" src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
" super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
"\n",
"\n",
"def display_videos(video_ids, W=400, H=300, fs=1):\n",
" tab_contents = []\n",
" for i, video_id in enumerate(video_ids):\n",
" out = widgets.Output()\n",
" with out:\n",
" if video_ids[i][0] == 'Youtube':\n",
" video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
" height=H, fs=fs, rel=0)\n",
" print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
" else:\n",
" video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
" height=H, fs=fs, autoplay=False)\n",
" if video_ids[i][0] == 'Bilibili':\n",
" print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
" elif video_ids[i][0] == 'Osf':\n",
" print(f'Video available at https://osf.io/{video.id}') \n",
" display(video)\n",
" tab_contents.append(out)\n",
" return tab_contents\n",
"\n",
"\n",
"video_ids = [('Youtube', '49A-3kftau0'), ('Bilibili', 'BV15f4y1R7PU')]\n",
"tab_contents = display_videos(video_ids, W=730, H=410)\n",
"tabs = widgets.Tab()\n",
"tabs.children = tab_contents\n",
"for i in range(len(tab_contents)):\n",
" tabs.set_title(i, video_ids[i][0])\n",
"display(tabs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As we can see, the **mean** of the process follows the solution to the deterministic part of the governing equation. So far, so good!\n",
"\n",
"But what about the **variance**? \n",
"\n",
"Unlike the random walk, because there's a decay process that \"pulls\" $x$ back towards $x_\\infty$, the variance does not grow without bound at large $t$. Instead, whenever $x$ strays far from $x_\\infty$, the drift pulls it back, until an equilibrium between drift and diffusion is reached.\n",
"\n",
"The equilibrium variance for our drift-diffusion system is\n",
"\n",
"\\begin{equation}\n",
"\\text{Var} = \\frac{\\sigma^2}{1 - \\lambda^2}.\n",
"\\end{equation}\n",
"\n",
"Notice that the value of this equilibrium variance depends on $\\lambda$ and $\\sigma$. It does not depend on $x_0$ and $x_\\infty$."
]
},
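{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Where does this formula come from? A short derivation: in the update rule $x_{k+1} = x_\\infty + \\lambda(x_k - x_{\\infty}) + \\sigma \\eta$, the noise term is independent of $x_k$, so taking the variance of both sides gives\n",
"\n",
"\\begin{equation}\n",
"\\text{Var}(x_{k+1}) = \\lambda^2 \\, \\text{Var}(x_k) + \\sigma^2.\n",
"\\end{equation}\n",
"\n",
"At equilibrium, $\\text{Var}(x_{k+1}) = \\text{Var}(x_k) = \\text{Var}$, and solving $\\text{Var} = \\lambda^2 \\, \\text{Var} + \\sigma^2$ gives $\\text{Var} = \\sigma^2 / (1 - \\lambda^2)$, which is finite whenever $|\\lambda| < 1$."
]
},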
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"To convince ourselves that things are behaving sensibly, let's compare the empirical variances of the equilibrium solution to the OU equations with the expected formula.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 3: Computing the variances empirically\n",
"\n",
"Write code to compute the analytical variance, $\\text{Var} = \\frac{\\sigma^2}{1 - \\lambda^2}$, and compare it against the empirical variance (already computed for you using the helper function). The two should be approximately equal, lying close to the 45-degree ($y=x$) line."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "code",
"execution": {}
},
"outputs": [],
"source": [
"def ddm(T, x0, xinfty, lam, sig):\n",
" t = np.arange(0, T, 1.)\n",
" x = np.zeros_like(t)\n",
" x[0] = x0\n",
"\n",
" for k in range(len(t)-1):\n",
" x[k+1] = xinfty + lam * (x[k] - xinfty) + sig * np.random.standard_normal(size=1)\n",
"\n",
" return t, x\n",
"\n",
"\n",
"# computes equilibrium variance of ddm\n",
"# returns variance\n",
"def ddm_eq_var(T, x0, xinfty, lam, sig):\n",
" t, x = ddm(T, x0, xinfty, lam, sig)\n",
"\n",
" # returns variance of the second half of the simulation\n",
" # this is a hack: assumes system has settled by second half\n",
" return x[-round(T/2):].var()\n",
"\n",
"\n",
"np.random.seed(2020) # set random seed\n",
"\n",
"# sweep through values for lambda\n",
"lambdas = np.arange(0.05, 0.95, 0.01)\n",
"empirical_variances = np.zeros_like(lambdas)\n",
"analytical_variances = np.zeros_like(lambdas)\n",
"\n",
"sig = 0.87\n",
"\n",
"# compute empirical equilibrium variance\n",
"for i, lam in enumerate(lambdas):\n",
" empirical_variances[i] = ddm_eq_var(5000, x0, xinfty, lam, sig)\n",
"\n",
"##############################################################################\n",
"## Insert your code below to calculate the analytical variances\n",
"raise NotImplementedError(\"Student exercise: need to compute variances\")\n",
"##############################################################################\n",
"\n",
"# Hint: you can also do this in one line outside the loop!\n",
"analytical_variances = ...\n",
"\n",
"# Plot the empirical variance vs analytical variance\n",
"var_comparison_plot(empirical_variances, analytical_variances)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main//tutorials/W2D2_LinearSystems/solutions/W2D2_Tutorial3_Solution_d35fe99c.py)\n",
"\n",
"*Example output:*\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Summary\n",
"\n",
"*Estimated timing of tutorial: 45 minutes*\n",
"\n",
"In this tutorial, we have built and observed OU systems, which have both deterministic and stochastic parts. We see that they behave, on average, similarly to what we expect from analyzing deterministic dynamical systems. \n",
"\n",
"Importantly, **the interplay between the deterministic and stochastic parts** serves to _balance_ the tendency of purely stochastic processes (like the random walk) to increase in variance over time. This behavior is one of the properties of OU systems that make them popular choices for modeling cognitive functions, including short-term memory and decision-making."
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"include_colab_link": true,
"name": "W2D2_Tutorial3",
"provenance": [],
"toc_visible": true
},
"kernel": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}