

Overview

Today, we’ll learn all about dimensionality reduction, a machine learning technique that we use all the time in neuroscience! Dimensionality reduction means transforming data from a high-dimensional space into a lower-dimensional space. In the intro, Byron Yu will introduce you to several ways of performing dimensionality reduction and discuss how they’re used to investigate possible underlying low-dimensional structure in neural population activity. Tutorial 1 sets up the foundational knowledge for the main type of dimensionality reduction we’ll cover: Principal Component Analysis (PCA). In particular, it reviews key linear algebra concepts such as orthonormal bases, changes of basis, and correlation. Tutorial 2 then covers the specific math behind PCA: how we compute it and project data onto the principal components. Tutorial 3 covers how we assess how many dimensions (or principal components) we need to accurately represent the data. Finally, Tutorial 4 briefly introduces a second, non-linear dimensionality reduction method: t-SNE. You’ll hear from Byron Yu again in the outro, where he will connect dimensionality reduction to brain-computer interfaces.
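To make these steps concrete before you start, here is a minimal NumPy sketch of the PCA pipeline the tutorials build up: computing the principal components, projecting data onto them (Tutorial 2), and measuring how much variance each component explains (Tutorial 3). The synthetic data and variable names are illustrative, not the tutorials’ code.

```python
import numpy as np

# Toy data: 1000 samples in 5 dimensions with low-dimensional structure
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 5))
X += 0.1 * rng.normal(size=X.shape)

# 1) Center the data (PCA is defined on mean-centered data)
X_centered = X - X.mean(axis=0)

# 2) Eigendecomposition of the covariance matrix
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # returned in ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

# 3) Project the data onto the principal components (an orthonormal basis)
scores = X_centered @ eigvecs

# 4) Fraction of variance explained by each component: Tutorial 3's question
#    of how many components we need to accurately represent the data
variance_explained = eigvals / eigvals.sum()
print(np.cumsum(variance_explained))
```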

Dimensionality reduction is a core machine learning technique that neuroscientists use constantly, for a variety of reasons. It serves as a simple data analysis tool to compress data into a smaller number of dimensions that is better suited for further analysis and modeling. For example, decoding every pixel of an image from neural data is difficult, since an image has hundreds to thousands of dimensions. We could instead compress the images to a lower-dimensional representation and decode that using the methods and models we’ve already learned about (such as linear regression in Model Fitting). Lower-dimensional data is also easier to visualize: you’ll use PCA to visualize the internal representations of a deep network and of the mouse brain in Deep Learning. Finally, we can use dimensionality reduction to think about and investigate low-dimensional structure in the brain: whether it exists, how it is formed, and so on. You’ll gain more insight into this in the bonus day materials on Autoencoders, which will appear shortly in the Jupyter Book.
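As an illustration of the compress-then-visualize workflow described above, the sketch below uses scikit-learn’s built-in digits dataset (8×8 images, so 64 dimensions) as a stand-in for high-dimensional data; it is a sketch for intuition, not course code.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Each digit image is a 64-dimensional vector (8x8 pixels)
X, y = load_digits(return_X_y=True)

# Compress from 64 dimensions down to 2 for visualization
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

# Plot the data in the low-dimensional space, colored by digit label
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, s=5, cmap='tab10')
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.title('Digits projected onto the first two principal components')
plt.show()
```

The same two components could then be fed into a downstream model (for instance, the linear regression from Model Fitting) in place of the raw 64 pixel values.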
