
Bonus Tutorial 4: The Kalman Filter, part 2

Week 3, Day 2: Hidden Dynamics

By Neuromatch Academy

Content creators: Caroline Haimerl and Byron Galbraith

Content reviewers: Jesse Livezey, Matt Krause, Michael Waskom, and Xaq Pitkow

Post-production team: Gagana B, Spiros Chavlis


Important note: This is bonus material, included from NMA 2020. It has not been substantially revised for 2021. This means that the notation and standards are slightly different. We include it here because it provides additional information about how the Kalman filter works in two dimensions.


Useful references:

  • Roweis, Ghahramani (1998): A unifying review of linear Gaussian Models

  • Bishop (2006): Pattern Recognition and Machine Learning


Acknowledgements:

This tutorial is in part based on code originally created by Caroline Haimerl for Dr. Cristina Savin’s Probabilistic Time Series class at the Center for Data Science, New York University.

Video 1: Introduction

Video available at https://youtu.be/6f_51L3i5aQ

Tutorial Objectives

In the previous tutorial we gained intuition for the Kalman filter in one dimension. In this tutorial, we will examine the two-dimensional Kalman filter and more of its mathematical foundations.

In this tutorial, you will:

  • Review linear dynamical systems

  • Implement the Kalman filter

  • Explore how the Kalman filter can be used to smooth data from an eye-tracking experiment

import sys
!conda install -c conda-forge ipywidgets --yes
!conda install numpy matplotlib scipy requests --yes
# Install PyKalman (https://pykalman.github.io/)
!pip install pykalman --quiet

# Imports
import numpy as np
import matplotlib.pyplot as plt
import pykalman
from scipy import stats

Figure settings

#@title Figure settings
import ipywidgets as widgets       # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")

Data retrieval and loading

#@title Data retrieval and loading
import io
import os
import hashlib
import requests

fname = "W2D3_mit_eyetracking_2009.npz"
url = "https://osf.io/jfk8w/download"
expected_md5 = "20c7bc4a6f61f49450997e381cf5e0dd"

if not os.path.isfile(fname):
  try:
    r = requests.get(url)
  except requests.ConnectionError:
    print("!!! Failed to download data !!!")
  else:
    if r.status_code != requests.codes.ok:
      print("!!! Failed to download data !!!")
    elif hashlib.md5(r.content).hexdigest() != expected_md5:
      print("!!! Data download appears corrupted !!!")
    else:
      with open(fname, "wb") as fid:
        fid.write(r.content)

def load_eyetracking_data(data_fname=fname):

  with np.load(data_fname, allow_pickle=True) as dobj:
    data = dict(**dobj)

  images = [plt.imread(io.BytesIO(stim), format='JPG')
            for stim in data['stimuli']]
  subjects = data['subjects']

  return subjects, images

Helper functions

#@title Helper functions
np.set_printoptions(precision=3)


def plot_kalman(state, observation, estimate=None, label='filter', color='r-',
                title='LDS', axes=None):
    if axes is None:
      fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 6))
      ax1.plot(state[:, 0], state[:, 1], 'g-', label='true latent')
      ax1.plot(observation[:, 0], observation[:, 1], 'k.', label='data')
    else:
      ax1, ax2 = axes

    if estimate is not None:
      ax1.plot(estimate[:, 0], estimate[:, 1], color=color, label=label)
    ax1.set(title=title, xlabel='X position', ylabel='Y position')
    ax1.legend()

    if estimate is None:
      ax2.plot(state[:, 0], observation[:, 0], '.k', label='dim 1')
      ax2.plot(state[:, 1], observation[:, 1], '.', color='grey', label='dim 2')
      ax2.set(title='correlation', xlabel='latent', ylabel='measured')
    else:
      ax2.plot(state[:, 0], estimate[:, 0], '.', color=color,
               label='latent dim 1')
      ax2.plot(state[:, 1], estimate[:, 1], 'x', color=color,
               label='latent dim 2')
      ax2.set(title='correlation',
              xlabel='real latent',
              ylabel='estimated latent')
    ax2.legend()

    return ax1, ax2


def plot_gaze_data(data, img=None, ax=None):
    # overlay gaze on stimulus
    if ax is None:
        fig, ax = plt.subplots(figsize=(8, 6))

    xlim = None
    ylim = None
    if img is not None:
        ax.imshow(img, aspect='auto')
        ylim = (img.shape[0], 0)
        xlim = (0, img.shape[1])

    ax.scatter(data[:, 0], data[:, 1], c='m', s=100, alpha=0.7)
    ax.set(xlim=xlim, ylim=ylim)

    return ax


def plot_kf_state(kf, data, ax):
    mu_0 = np.ones(kf.n_dim_state)
    mu_0[:data.shape[1]] = data[0]
    kf.initial_state_mean = mu_0

    mu, sigma = kf.smooth(data)
    ax.plot(mu[:, 0], mu[:, 1], 'limegreen', linewidth=3, zorder=1)
    ax.scatter(mu[0, 0], mu[0, 1], c='orange', marker='>', s=200, zorder=2)
    ax.scatter(mu[-1, 0], mu[-1, 1], c='orange', marker='s', s=200, zorder=2)

Section 1: Linear Dynamical System (LDS)

Video 2: Linear Dynamical Systems

Video available at https://youtu.be/2SWh639YgEg

Kalman filter definitions:

The latent state \(s_t\) evolves as a stochastic linear dynamical system in discrete time, with a dynamics matrix \(D\):

(381)\[\begin{equation} s_t = Ds_{t-1}+w_t \end{equation}\]

Just as in the HMM, the structure is a Markov chain where the state at time point \(t\) is conditionally independent of previous states given the state at time point \(t-1\).

Sensory measurements \(m_t\) (observations) are noisy linear projections of the latent state:

(382)\[\begin{equation} m_t = Hs_{t}+\eta_t \end{equation}\]

Both states and measurements have Gaussian variability, often called noise: ‘process noise’ \(w_t\) for the states, and ‘measurement’ or ‘observation noise’ \(\eta_t\) for the measurements. The initial state is also Gaussian distributed. These quantities have means and covariances:

(383)\[\begin{eqnarray} w_t & \sim & \mathcal{N}(0, Q) \\ \eta_t & \sim & \mathcal{N}(0, R) \\ s_0 & \sim & \mathcal{N}(\mu_0, \Sigma_0) \end{eqnarray}\]

As a consequence, \(s_t\), \(m_t\) and their joint distributions are Gaussian. This makes all of the math analytically tractable using linear algebra, so we can easily compute the marginal and conditional distributions we will use for inferring the current state given the entire history of measurements.
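As a concrete illustration of the two generative equations above, here is a minimal single-step sketch. It is not part of the original notebook, and the matrices below are arbitrary example values, not the parameters used later in the tutorial:

# Minimal single-step illustration of the LDS generative equations.
# The matrices below are arbitrary example values, not the tutorial's parameters.
import numpy as np

np.random.seed(0)

D = 0.9 * np.eye(2)   # dynamics matrix
Q = np.eye(2)         # process noise covariance
H = np.eye(2)         # observation matrix
R = np.eye(2)         # measurement noise covariance

s_prev = np.zeros(2)                                   # latent state s_{t-1}
w_t = np.random.multivariate_normal(np.zeros(2), Q)    # process noise w_t ~ N(0, Q)
eta_t = np.random.multivariate_normal(np.zeros(2), R)  # measurement noise eta_t ~ N(0, R)

s_t = D @ s_prev + w_t   # latent dynamics:  s_t = D s_{t-1} + w_t
m_t = H @ s_t + eta_t    # measurement:      m_t = H s_t + eta_t

print("s_t =", s_t, " m_t =", m_t)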

Please note: we are trying to create uniform notation across tutorials. In some videos created in 2020, measurements \(m_t\) were denoted \(y_t\), and the dynamics matrix \(D\) was denoted \(F\). We apologize for any confusion!

Section 1.1: Sampling from a latent linear dynamical system

The first thing we will investigate is how to generate timecourse samples from a linear dynamical system given its parameters. We will start by defining the following system:

# task dimensions
n_dim_state = 2
n_dim_obs = 2

# initialize model parameters
params = {
  'D': 0.9 * np.eye(n_dim_state),  # state transition matrix
  'Q': np.eye(n_dim_state),  # state noise covariance
  'H': np.eye(n_dim_state),  # observation matrix
  'R': 1.0 * np.eye(n_dim_obs),  # observation noise covariance
  'mu_0': np.zeros(n_dim_state),  # initial state mean
  'sigma_0': 0.1 * np.eye(n_dim_state),  # initial state noise covariance
}

Coding note: We used a parameter dictionary params above. As the number of parameters we need to provide to our functions increases, it can be beneficial to condense them into a data structure like this to reduce the number of arguments we pass in. The trade-off is that we have to know what is in our data structure to use those values, rather than reading the function signature directly.

Exercise 1: Sampling from a linear dynamical system

In this exercise you will implement the dynamics functions of a linear dynamical system to sample both a latent-space trajectory (given the parameters set above) and noisy measurements.

def sample_lds(n_timesteps, params, seed=0):
  """ Generate samples from a Linear Dynamical System specified by the provided
  parameters.

  Args:
  n_timesteps (int): the number of time steps to simulate
  params (dict): a dictionary of model parameters: (D, Q, H, R, mu_0, sigma_0)
  seed (int): a random seed to use for reproducibility checks

  Returns:
  ndarray, ndarray: the generated state and observation data
  """
  n_dim_state = params['D'].shape[0]
  n_dim_obs = params['H'].shape[0]

  # set seed
  np.random.seed(seed)

  # precompute random samples from the provided covariance matrices
  # mean defaults to 0
  mi = stats.multivariate_normal(cov=params['Q']).rvs(n_timesteps)
  eta = stats.multivariate_normal(cov=params['R']).rvs(n_timesteps)

  # initialize state and observation arrays
  state = np.zeros((n_timesteps, n_dim_state))
  obs = np.zeros((n_timesteps, n_dim_obs))

  ###################################################################
  ## TODO for students: compute the next state and observation values
  # Fill out function and remove
  raise NotImplementedError("Student exercise: compute the next state and observation values")
  ###################################################################

  # simulate the system
  for t in range(n_timesteps):
    # write the expressions for computing state values given the time step
    if t == 0:
      state[t] = ...
    else:
      state[t] = ...

    # write the expression for computing the observation
    obs[t] = ...

  return state, obs


# Uncomment below to test your function
# state, obs = sample_lds(100, params)
# print('sample at t=3 ', state[3])
# plot_kalman(state, obs, title='sample')

Click for solution

Example output:

Solution hint

Interactive Demo: Adjusting System Dynamics

To test your understanding of the parameters of a linear dynamical system, think about what you would expect if you made the following changes:

  1. Reduce observation noise \(R\)

  2. Increase the temporal dynamics \(D\)

Use the interactive widget below to vary the values of \(R\) and \(D\).

Make sure you execute this cell to enable the widget!

#@title

#@markdown Make sure you execute this cell to enable the widget!

@widgets.interact(R=widgets.FloatLogSlider(1., min=-2, max=2),
                  D=widgets.FloatSlider(0.9, min=0.0, max=1.0, step=.01))
def explore_dynamics(R=0.1, D=0.5):
    params = {
    'D': D * np.eye(n_dim_state),  # state transition matrix
    'Q': np.eye(n_dim_state),  # state noise covariance
    'H': np.eye(n_dim_state),  # observation matrix
    'R': R * np.eye(n_dim_obs),  # observation noise covariance
    'mu_0': np.zeros(n_dim_state),  # initial state mean,
    'sigma_0': 0.1 * np.eye(n_dim_state),  # initial state noise covariance
    }

    state, obs = sample_lds(100, params)
    plot_kalman(state, obs, title='sample')

Section 2: Kalman Filtering

Video 3: Kalman Filtering

Video available at https://youtu.be/VboZOV9QMOI

We want to infer the latent state variable \(s_t\) given the measured (observed) variables \(m_1, \ldots, m_T\).

(384)\[\begin{equation} P(s_t|m_1, ..., m_t, m_{t+1}, ..., m_T)\sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma_t}) \end{equation}\]

First, we obtain estimates of the latent state by running the filter forward from \(t=0,\ldots,T\):

(385)\[\begin{equation} s_t^{\rm pred}\sim \mathcal{N}(\hat{\mu}_t^{\rm pred},\hat{\Sigma}_t^{\rm pred})\end{equation}\]

where \(\hat{\mu}_t^{\rm pred}\) and \(\hat{\Sigma}_t^{\rm pred}\) are derived as follows:

(386)\[\begin{eqnarray} \hat{\mu}_1^{\rm pred} & = & D\hat{\mu}_{0} \\ \hat{\mu}_t^{\rm pred} & = & D\hat{\mu}_{t-1} \end{eqnarray}\]

This is the prediction for \(s_t\) obtained simply by taking the expected value of \(s_{t-1}\) and projecting it forward one step using the transition matrix \(D\). We do the same for the covariance, taking into account the noise covariance \(Q\) and the fact that scaling a variable by \(D\) scales its covariance \(\Sigma\) as \(D\Sigma D^T\):

(387)\[\begin{eqnarray} \hat{\Sigma}_1^{\rm pred} & = & D\hat{\Sigma}_{0}D^T+Q \\ \hat{\Sigma}_t^{\rm pred} & = & D\hat{\Sigma}_{t-1}D^T+Q \end{eqnarray}\]

We then use a Bayesian update based on the newest measurement to obtain \(\hat{\mu}_t^{\rm filter}\) and \(\hat{\Sigma}_t^{\rm filter}\).

Project the prediction into observation space:

(388)\[\begin{equation} m_t^{\rm pred}\sim \mathcal{N}(H\hat{\mu}_t^{\rm pred}, H\hat{\Sigma}_t^{\rm pred}H^T+R) \end{equation}\]

Update the prediction with the actual data:

(389)\[\begin{eqnarray} s_t^{\rm filter} & \sim & \mathcal{N}(\hat{\mu}_t^{\rm filter}, \hat{\Sigma}_t^{\rm filter}) \\ \hat{\mu}_t^{\rm filter} & = & \hat{\mu}_t^{\rm pred}+K_t(m_t-H\hat{\mu}_t^{\rm pred}) \\ \hat{\Sigma}_t^{\rm filter} & = & (I-K_tH)\hat{\Sigma}_t^{\rm pred} \end{eqnarray}\]

Kalman gain matrix:

(390)\[\begin{equation} K_t=\hat{\Sigma}_t^{\rm pred}H^T(H\hat{\Sigma}_t^{\rm pred}H^T+R)^{-1} \end{equation}\]

We take the latent-only prediction, project it into observation space, and compute a correction proportional to the prediction error \(m_t-H\hat{\mu}_t^{\rm pred} = m_t-HD\hat{\mu}_{t-1}\) between prediction and data. The coefficient of this correction is the Kalman gain matrix.

Interpretations

If the measurement noise is small and the dynamics are fast, the estimate depends mostly on the current observation. If the measurement noise is large, the Kalman filter also draws on past observations, combining them for as long as the underlying state remains at least somewhat predictable.
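To see this trade-off numerically, here is a small sketch (a 1-D example with illustrative values, not part of the original tutorial) that evaluates the Kalman gain formula above for a small and a large measurement noise \(R\):

# Scalar illustration of the Kalman gain formula
#   K_t = Sigma_pred H^T (H Sigma_pred H^T + R)^{-1}
# with arbitrary example values for the predicted covariance.
import numpy as np

H = np.array([[1.0]])            # observation matrix (1-D)
sigma_pred = np.array([[1.0]])   # predicted state covariance

for R_val in (0.1, 100.0):       # small vs. large measurement noise
  R = np.array([[R_val]])
  K = sigma_pred @ H.T @ np.linalg.inv(H @ sigma_pred @ H.T + R)
  print(f"R = {R_val:6.1f}  ->  K = {K[0, 0]:.3f}")

# Small R gives K close to 1: the filter trusts the new measurement.
# Large R gives K close to 0: the filter leans on its prediction, i.e. the past.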

In order to explore the impact of filtering, we will use the following noisy oscillatory system:

# task dimensions
n_dim_state = 2
n_dim_obs = 2

T=100

# initialize model parameters
params = {
  'D': np.array([[1., 1.], [-(2*np.pi/20.)**2., .9]]),  # state transition matrix
  'Q': np.eye(n_dim_state),                             # state noise covariance
  'H': np.eye(n_dim_state),                             # observation matrix
  'R': 100.0 * np.eye(n_dim_obs),                       # observation noise covariance
  'mu_0': np.zeros(n_dim_state),                        # initial state mean
  'sigma_0': 0.1 * np.eye(n_dim_state),                 # initial state noise covariance
}

state, obs = sample_lds(T, params)
plot_kalman(state, obs, title='sample')

Exercise 2: Implement Kalman filtering

In this exercise you will implement the Kalman filter (forward) process. Your focus will be on writing the expressions for the Kalman gain, filter mean, and filter covariance at each time step (refer to the equations above).

def kalman_filter(data, params):
  """ Perform Kalman filtering (forward pass) on the data given the provided
  system parameters.

  Args:
    data (ndarray): a sequence of observations of shape(n_timesteps, n_dim_obs)
    params (dict): a dictionary of model parameters: (D, Q, H, R, mu_0, sigma_0)

  Returns:
    ndarray, ndarray: the filtered system means and noise covariance values
  """
  # pulled out of the params dict for convenience
  D = params['D']
  Q = params['Q']
  H = params['H']
  R = params['R']

  n_dim_state = D.shape[0]
  n_dim_obs = H.shape[0]
  I = np.eye(n_dim_state)  # identity matrix

  # state tracking arrays
  mu = np.zeros((len(data), n_dim_state))
  sigma = np.zeros((len(data), n_dim_state, n_dim_state))

  # filter the data
  for t, y in enumerate(data):
    if t == 0:
      mu_pred = params['mu_0']
      sigma_pred = params['sigma_0']
    else:
      mu_pred = D @ mu[t-1]
      sigma_pred = D @ sigma[t-1] @ D.T + Q

    ###########################################################################
    ## TODO for students: compute the filtered state mean and covariance values
    # Fill out function and remove
    raise NotImplementedError("Student exercise: compute the filtered state mean and covariance values")
    ###########################################################################
    # write the expression for computing the Kalman gain
    K = ...
    # write the expression for computing the filtered state mean
    mu[t] = ...
    # write the expression for computing the filtered state noise covariance
    sigma[t] = ...

  return mu, sigma


# Uncomment below to test your function
# filtered_state_means, filtered_state_covariances = kalman_filter(obs, params)
# plot_kalman(state, obs, filtered_state_means, title="my kf-filter",
#             color='r', label='my kf-filter')

Click for solution

Example output:

Solution hint

Section 3: Fitting Eye Gaze Data

Video 4: Fitting Eye Gaze Data

Video available at https://youtu.be/M7OuXmVWHGI

Tracking eye gaze is used in both experimental and user-interface applications. Getting an accurate estimate of where someone is looking on a screen, in pixel coordinates, can be challenging due to the various sources of noise inherent in obtaining these measurements. A main source of noise is the accuracy of the eye-tracker device itself and how well it maintains calibration over time. Changes in ambient light or subject position can further reduce the accuracy of the sensor. Eye blinks introduce a different kind of noise: interruptions in the data stream that also need to be addressed.

Fortunately, we have a candidate solution for handling noisy eye gaze data in the Kalman filter we just learned about. Let’s look at how we can apply these methods to a small subset of data taken from the MIT Eyetracking Database [Judd et al. 2009]. This data was collected as part of an effort to model visual saliency: given an image, can we predict where a person is most likely to look?

# load eyetracking data
subjects, images = load_eyetracking_data()

Interactive Demo: Tracking Eye Gaze

We have three stimulus images and gaze data from five different subjects. Each subject fixated at the center of the screen before the image appeared, then had a few seconds to freely look around. You can use the widget below to see how different subjects visually scanned the presented image. A subject ID of -1 will show the stimulus images without any overlaid gaze trace.

Note that the images are rescaled below for display purposes; they were shown in their original aspect ratio during the task itself.

Make sure you execute this cell to enable the widget!

#@title

#@markdown Make sure you execute this cell to enable the widget!

@widgets.interact(subject_id=widgets.IntSlider(-1, min=-1, max=4),
                  image_id=widgets.IntSlider(0, min=0, max=2))
def plot_subject_trace(subject_id=-1, image_id=0):
  if subject_id == -1:
    subject = np.zeros((3, 0, 2))
  else:
    subject = subjects[subject_id]
  data = subject[image_id]
  img = images[image_id]

  fig, ax = plt.subplots()
  ax.imshow(img, aspect='auto')
  ax.scatter(data[:, 0], data[:, 1], c='m', s=100, alpha=0.7)
  ax.set(xlim=(0, img.shape[1]), ylim=(img.shape[0], 0))

Section 3.1: Fitting data with pykalman

Now that we have data, we’d like to use Kalman filtering to give us a better estimate of the true gaze. Up until this point we’ve known the parameters of our LDS, but here we need to estimate them from data directly. We will use the pykalman package to handle this estimation using the EM algorithm, a useful and influential learning algorithm described briefly in the bonus material.

Before fitting models with pykalman, it’s worth pointing out some naming conventions used by the library:

(391)\[\begin{align} D &: \texttt{transition_matrices} & Q &: \texttt{transition_covariance} \\ H &: \texttt{observation_matrices} & R &: \texttt{observation_covariance} \\ \mu_0 &: \texttt{initial_state_mean} & \Sigma_0 &: \texttt{initial_state_covariance} \end{align}\]
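As a side note, if the LDS parameters were already known (for example, the D, Q, H, and R used in the earlier exercises), they could be passed to pykalman directly under these names instead of being estimated with EM. A minimal sketch, where the parameter variables are assumed to already exist as NumPy arrays of compatible shapes:

# Hypothetical sketch: a KalmanFilter built from known parameters rather than
# fit with EM. D, Q, H, R, mu_0, sigma_0 are assumed to be NumPy arrays.
kf_known = pykalman.KalmanFilter(
  transition_matrices=D,              # D
  transition_covariance=Q,            # Q
  observation_matrices=H,             # H
  observation_covariance=R,           # R
  initial_state_mean=mu_0,            # mu_0
  initial_state_covariance=sigma_0    # Sigma_0
)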

The first thing we need to do is provide a guess at the dimensionality of the latent state. Let’s start by assuming the dynamics line up directly with the observation data (pixel x,y-coordinates), so we have a state dimension of 2.

We also need to decide which parameters we want the EM algorithm to fit. In this case, we will let the EM algorithm discover the dynamics parameters, i.e., the \(D\), \(Q\), \(H\), and \(R\) matrices.

We set up our pykalman KalmanFilter object with these settings using the code below.

# set up our KalmanFilter object and tell it which parameters we want to
# estimate
np.random.seed(1)

n_dim_obs = 2
n_dim_state = 2

kf = pykalman.KalmanFilter(
  n_dim_state=n_dim_state,
  n_dim_obs=n_dim_obs,
  em_vars=['transition_matrices', 'transition_covariance',
           'observation_matrices', 'observation_covariance']
)

Because we know from the reported experimental design that subjects fixated in the center of the screen right before the image appeared, we can set the initial state estimate \(\mu_0\) to the center pixel of the stimulus image (the first data point in this sample dataset), with a correspondingly low initial noise covariance \(\Sigma_0\). Once we have everything set, it’s time to fit some data.

# Choose a subject and stimulus image
subject_id = 1
image_id = 2
data = subjects[subject_id][image_id]

# Provide the initial states
kf.initial_state_mean = data[0]
kf.initial_state_covariance = 0.1*np.eye(n_dim_state)

# Estimate the parameters from data using the EM algorithm
kf.em(data)

print(f'D=\n{kf.transition_matrices}')
print(f'Q =\n{kf.transition_covariance}')
print(f'H =\n{kf.observation_matrices}')
print(f'R =\n{kf.observation_covariance}')
D=
[[ 1.004 -0.01 ]
 [ 0.005  0.989]]
Q =
[[278.016 219.292]
 [219.292 389.774]]
H =
[[ 0.999  0.003]
 [-0.004  1.01 ]]
R =
[[26.026 19.596]
 [19.596 26.745]]

We see that the EM algorithm has found fits for the various dynamics parameters. One thing to note is that both the transition and observation matrices are close to the identity matrix, so the deterministic part of the dynamics barely mixes the x- and y-coordinates; most of the variability in the trajectory instead comes through the noise covariances (whose off-diagonal terms show that the noise in the two coordinates is correlated).

We can now use this model to smooth the observed data from the subject. In addition to the image we fit on, we can also see how this model works with the gaze recorded by the same subject on the other images, or even with different subjects.
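The widget below hides the actual smoothing call inside a helper (plot_kf_state); a minimal sketch of what that computation presumably looks like, using the fitted kf and data from the cell above:

# Run the Kalman smoother on the gaze trace we just fit.
# kf.smooth returns the posterior (smoothed) state means and covariances.
smoothed_means, smoothed_covs = kf.smooth(data)
print(smoothed_means.shape)  # (n_timesteps, n_dim_state)
print(smoothed_covs.shape)   # (n_timesteps, n_dim_state, n_dim_state)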

Below, the three stimulus images are overlaid with the recorded gaze in magenta and the smoothed state from the filter in green, with gaze-begin (orange triangle) and gaze-end (orange square) markers.

Make sure you execute this cell to enable the widget!

#@title

#@markdown Make sure you execute this cell to enable the widget!

@widgets.interact(subject_id=widgets.IntSlider(1, min=0, max=4))
def plot_smoothed_traces(subject_id=0):
  subject = subjects[subject_id]
  fig, axes = plt.subplots(ncols=3, figsize=(18, 4))
  for data, img, ax in zip(subject, images, axes):
    ax = plot_gaze_data(data, img=img, ax=ax)
    plot_kf_state(kf, data, ax)

Discussion questions:

Why do you think one trace from one subject was sufficient to provide a decent fit across all subjects? If you were to go back and change the subject_id and/or image_id used when fitting the data with EM, do you think the fits would be different?

We don’t think the eye is exactly following a linear dynamical system. Nonetheless that is what we assumed for this exercise when we applied a Kalman filter. Despite the mismatch, these algorithms do perform well. Discuss what differences we might find between the true and assumed processes. What mistakes might be likely consequences of these differences?

Finally, recall that the original task was to use this data to help develop models of visual salience. While our Kalman filter is able to provide smooth estimates of observed gaze data, it’s not telling us anything about why the gaze is going in a certain direction. In fact, if we sample data from our parameters and plot them, we get what amounts to a random walk.

kf_state, kf_data = kf.sample(len(data))
ax = plot_gaze_data(kf_data, img=images[2])
plot_kf_state(kf, kf_data, ax)

This should not be surprising, as we have given the model no observed data beyond the pixel coordinates at which gaze was detected. We expect that something beyond the previous fixation location is driving the latent state of where to look next.

In summary, while the Kalman filter is a good option for smoothing the gaze trajectory itself, especially if using a lower-quality eye tracker or in noisy environmental conditions, a linear dynamical system may not be the right way to approach the much more challenging task of modeling visual saliency.

Bonus

Review on Gaussian joint, marginal and conditional distributions

Assume

(392)\[\begin{eqnarray} z & = & \begin{bmatrix}x \\y\end{bmatrix}\sim N\left(\begin{bmatrix}a \\b\end{bmatrix}, \begin{bmatrix}A & C \\C^T & B\end{bmatrix}\right) \end{eqnarray}\]

then the marginal distributions are

(393)\[\begin{eqnarray} x & \sim & \mathcal{N}(a, A) \\ y & \sim & \mathcal{N}(b,B) \end{eqnarray}\]

and the conditional distributions are

(394)\[\begin{eqnarray} x|y & \sim & \mathcal{N}(a+CB^{-1}(y-b), A-CB^{-1}C^T) \\ y|x & \sim & \mathcal{N}(b+C^TA^{-1}(x-a), B-C^TA^{-1}C) \end{eqnarray}\]

Important takeaway: given the joint Gaussian distribution, we can derive the marginals and conditionals in closed form.
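As a quick numerical sanity check of the conditional formulas, here is a small sketch with arbitrary example numbers (nothing about these values is specific to the tutorial):

# Illustrative check of x|y ~ N(a + C B^{-1}(y - b), A - C B^{-1} C^T)
# using arbitrary one-dimensional blocks.
a, b = np.array([1.0]), np.array([-2.0])
A = np.array([[2.0]])
B = np.array([[1.5]])
C = np.array([[0.8]])

y_obs = np.array([0.0])  # an (arbitrary) observed value of y

cond_mean = a + C @ np.linalg.inv(B) @ (y_obs - b)
cond_cov = A - C @ np.linalg.inv(B) @ C.T

print(cond_mean)  # conditional mean of x given y = 0
print(cond_cov)   # conditional covariance of x given y = 0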

Kalman Smoothing

Video 5: Kalman Smoothing and the EM Algorithm

Video available at https://youtu.be/4Ar2mYz1Nms

Obtain smoothed estimates by propagating backward in time, from \(t=T\) to \(t=0\), using the results of the forward pass (\(\hat{\mu}_t^{\rm filter}, \hat{\Sigma}_t^{\rm filter}, P_t=\hat{\Sigma}_{t+1}^{\rm pred}\)).

(395)\[\begin{eqnarray} s_t & \sim & \mathcal{N}(\hat{\mu}_t^{\rm smooth}, \hat{\Sigma}_t^{\rm smooth}) \\ \hat{\mu}_t^{\rm smooth} & = & \hat{\mu}_t^{\rm filter}+J_t(\hat{\mu}_{t+1}^{\rm smooth}-D\hat{\mu}_t^{\rm filter}) \\ \hat{\Sigma}_t^{\rm smooth} & = & \hat{\Sigma}_t^{\rm filter}+J_t(\hat{\Sigma}_{t+1}^{\rm smooth}-P_t)J_t^T \\ J_t & = & \hat{\Sigma}_t^{\rm filter}D^T P_t^{-1} \end{eqnarray}\]

This gives us the final estimate for \(s_t\).

(396)\[\begin{eqnarray} \hat{\mu}_t & = & \hat{\mu}_t^{\rm smooth} \\ \hat{\Sigma}_t & = & \hat{\Sigma}_t^{\rm smooth} \end{eqnarray}\]

Exercise 3: Implement Kalman smoothing

In this exercise you will implement the Kalman smoothing (backward) process. Again you will focus on writing the expressions for computing the smoothed mean, smoothed covariance, and \(J_t\) values.

def kalman_smooth(data, params):
  """ Perform Kalman smoothing (backward pass) on the data given the provided
  system parameters.

  Args:
    data (ndarray): a sequence of observations of shape (n_timesteps, n_dim_obs)
    params (dict): a dictionary of model parameters: (D, Q, H, R, mu_0, sigma_0)

  Returns:
    ndarray, ndarray: the smoothed system means and noise covariance values
  """
  # pulled out of the params dict for convenience
  D = params['D']
  Q = params['Q']
  H = params['H']
  R = params['R']

  n_dim_state = D.shape[0]
  n_dim_obs = H.shape[0]

  # first run the forward pass to get the filtered means and covariances
  mu, sigma = kalman_filter(data, params)

  # initialize state mean and covariance estimates
  mu_hat = np.zeros_like(mu)
  sigma_hat = np.zeros_like(sigma)
  mu_hat[-1] = mu[-1]
  sigma_hat[-1] = sigma[-1]

  # smooth the data
  for t in reversed(range(len(data)-1)):
    sigma_pred = D @ sigma[t] @ D.T + Q  # sigma_pred at t+1
    ###########################################################################
    ## TODO for students: compute the smoothed state mean and covariance values
    # Fill out function and remove
    raise NotImplementedError("Student exercise: compute the smoothed state mean and covariance values")
    ###########################################################################

    # write the expression to compute the Kalman gain for the backward process
    J = ...
    # write the expression to compute the smoothed state mean estimate
    mu_hat[t] = ...
    # write the expression to compute the smoothed state noise covariance estimate
    sigma_hat[t] = ...

  return mu_hat, sigma_hat


# Uncomment once the kalman_smooth function is complete
# smoothed_state_means, smoothed_state_covariances = kalman_smooth(obs, params)
# axes = plot_kalman(state, obs, filtered_state_means, color="r",
#                    label="my kf-filter")
# plot_kalman(state, obs, smoothed_state_means, color="b",
#             label="my kf-smoothed", axes=axes)

Click for solution

Example output:

Solution hint

Forward vs Backward

Now that we have implementations for both, let’s compare their performance by computing the MSE between the filtered (forward) and smoothed (backward) estimated states and the true latent state.

print(f"Filtered MSE: {np.mean((state - filtered_state_means)**2):.3f}")
print(f"Smoothed MSE: {np.mean((state - smoothed_state_means)**2):.3f}")

In this example, the smoothed estimate is clearly superior to the filtered one. This makes sense, as the forward pass uses only past measurements, whereas the backward pass can use future measurements too, correcting the forward-pass estimates given all the data we’ve collected.

So why would you ever use Kalman filtering alone, without smoothing? Because Kalman filtering only depends on already-observed data (i.e. the past), it can be run in a streaming, or online, setting. Kalman smoothing relies on data from the future relative to each estimate, and as such can only be applied in a batch, or offline, setting. So use Kalman filtering if you need real-time corrections, and Kalman smoothing if you are working with already-collected data.

This online case is typically what the brain faces.
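To make the streaming case concrete, pykalman provides a filter_update method that advances the filtered estimate one observation at a time. A minimal sketch, reusing the fitted kf and gaze data from Section 3.1 (up to how the very first step is initialized, this tracks what kf.filter computes in a single batch call):

# On-line filtering: update the state estimate as each observation arrives.
mean_t = kf.initial_state_mean
cov_t = kf.initial_state_covariance
online_means = []
for y_t in data:  # in a real-time setting, y_t would arrive one sample at a time
  mean_t, cov_t = kf.filter_update(mean_t, cov_t, observation=y_t)
  online_means.append(mean_t)
online_means = np.array(online_means)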

The Expectation-Maximization (EM) Algorithm

  • want to maximize \(\log p(m|\theta)\)

  • need to marginalize out the latent state, which makes direct maximization difficult

(397)\[\begin{equation} p(m|\theta)=\int p(m,s|\theta)\,ds \end{equation}\]
  • add a probability distribution \(q(s)\), which will approximate the latent state distribution; since \(\int_s q(s)\,ds=1\), we can write

\[\log p(m|\theta) = \log p(m|\theta)\int_s q(s)\,ds\]
  • can be rewritten as

(398)\[\begin{equation} \mathcal{L}(q,\theta)+KL\left(q(s)\,\|\,p(s|m,\theta)\right) \end{equation}\]
  • \(\mathcal{L}(q,\theta)\) contains the joint distribution of \(m\) and \(s\)

  • \(KL(q||p)\) contains the conditional distribution of \(s|m\)
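Concretely (following the standard EM decomposition), these two terms can be written as

\[\begin{eqnarray} \mathcal{L}(q,\theta) &=& \int q(s)\log\frac{p(m,s|\theta)}{q(s)}\,ds \\ KL\left(q\,\|\,p\right) &=& -\int q(s)\log\frac{p(s|m,\theta)}{q(s)}\,ds \end{eqnarray}\]

Since the KL divergence is non-negative, \(\mathcal{L}(q,\theta)\) is a lower bound on \(\log p(m|\theta)\).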

Expectation step

  • parameters are kept fixed

  • find a good approximation \(q(s)\): maximize lower bound \(\mathcal{L}(q,\theta)\) with respect to \(q(s)\)

  • (we have already implemented this: the Kalman filter + smoother)

Maximization step

  • keep distribution \(q(s)\) fixed

  • change parameters to maximize the lower bound \(\mathcal{L}(q,\theta)\)

As mentioned, we have already effectively solved for the E-Step with our Kalman filter and smoother. The M-step requires further derivation, which is covered in the Appendix. Rather than having you implement the M-Step yourselves, let’s instead turn to using a library that has already implemented EM for exploring some experimental data from cognitive neuroscience.

The M-step for an LDS

(See Bishop, Chapter 13.3.2, Learning in LDS.) In the M-step we update the parameters of the probability distribution.

For the updates in the M-step we will need the following posterior marginals, obtained from the Kalman smoothing results \(\hat{\mu}_t^{\rm smooth}, \hat{\Sigma}_t^{\rm smooth}\):

(399)\[\begin{eqnarray} E(s_t) &=& \hat{\mu}_t \\ E(s_ts_{t-1}^T) &=& \hat{\Sigma}_tJ_{t-1}^T+\hat{\mu}_t\hat{\mu}_{t-1}^T\\ E(s_ts_{t}^T) &=& \hat{\Sigma}_t+\hat{\mu}_t\hat{\mu}_{t}^T \end{eqnarray}\]

Update parameters

Initial parameters

(400)\[\begin{eqnarray} \mu_0^{\rm new}&=& E(s_0)\\ \Sigma_0^{\rm new} &=& E(s_0s_0^T)-E(s_0)E(s_0^T) \\ \end{eqnarray}\]

Hidden (latent) state parameters

(401)\[\begin{eqnarray} D^{\rm new} &=& \left(\sum_{t=2}^N E(s_ts_{t-1}^T)\right)\left(\sum_{t=2}^N E(s_{t-1}s_{t-1}^T)\right)^{-1} \\ Q^{\rm new} &=& \frac{1}{N-1} \sum_{t=2}^N \Big( E\big(s_ts_t^T\big) - D^{\rm new}E\big(s_{t-1}s_{t}^T\big) - E\big(s_ts_{t-1}^T\big)\big(D^{\rm new}\big)^{T}+D^{\rm new}E\big(s_{t-1}s_{t-1}^T\big)\big(D^{\rm new}\big)^{T}\Big)\\ \end{eqnarray}\]

Observable (measured) space parameters

(402)\[\begin{eqnarray} H^{\rm new} &=& \left(\sum_{t=1}^N y_t E(s_t^T)\right)\left(\sum_{t=1}^N E(s_t s_t^T)\right)^{-1}\\ R^{\rm new} &=& \frac{1}{N}\sum_{t=1}^N\Big(y_ty_t^T-H^{\rm new}E(s_t)y_t^T-y_tE(s_t^T)\big(H^{\rm new}\big)^{T}+H^{\rm new}E(s_ts_t^T)\big(H^{\rm new}\big)^{T}\Big) \end{eqnarray}\]
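As an illustration only, suppose the posterior expectations from (399) were collected into arrays: Es of shape (N, d) for \(E(s_t)\), Ess of shape (N, d, d) for \(E(s_ts_t^T)\), and Ess_prev of shape (N, d, d) for \(E(s_ts_{t-1}^T)\) (entry 0 unused), together with observations Y of shape (N, n_dim_obs). None of these names exist in the tutorial code; they are hypothetical stand-ins. The \(D\) and \(H\) updates then reduce to simple matrix solves:

# Hypothetical M-step updates built from assumed sufficient-statistic arrays
# Es, Ess, Ess_prev and observations Y (stand-ins for the expectations in (399)).
D_new = Ess_prev[1:].sum(axis=0) @ np.linalg.inv(Ess[:-1].sum(axis=0))
H_new = (Y.T @ Es) @ np.linalg.inv(Ess.sum(axis=0))
# Q_new and R_new follow the same pattern, using the bracketed sums in
# equations (401) and (402).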