# Intro

## Overview
Today, we’ll learn all about dimensionality reduction, a machine learning technique that we use all the time in neuroscience! Dimensionality reduction means transforming data from a high-dimensional space into a lower-dimensional space. In the intro, Byron Yu will introduce you to several ways to perform dimensionality reduction and discuss how they’re used to investigate the possible underlying low-dimensional structure of neural population activity. Tutorial 1 sets up the foundational knowledge for the main type of dimensionality reduction we’ll cover: Principal Components Analysis (PCA). In particular, it will review key linear algebra concepts such as orthonormal bases, changing bases, and correlation. Tutorial 2 then covers the specific math behind PCA: how we compute it and project data onto the principal components. Tutorial 3 covers how we assess how many dimensions (or principal components) we need to represent the data accurately. Finally, in Tutorial 4, you will briefly be introduced to a second, non-linear dimensionality reduction method called t-SNE. You’ll hear from Byron Yu again in the outro, where he will connect dimensionality reduction to brain-computer interfaces.
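To give a concrete preview of the pipeline Tutorials 1–3 build up, here is a minimal NumPy sketch of PCA on synthetic data: center the data, eigendecompose its covariance matrix, sort the components by variance explained, and project (change basis) onto them. The synthetic data and variable names are illustrative choices for this sketch, not taken from the tutorials.

```python
import numpy as np

# Synthetic data: 500 samples of correlated 2-D activity (illustrative only)
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[3, 2], [2, 2]], size=500)

# Center the data, then eigendecompose its covariance matrix
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues

# Sort principal components by descending variance explained
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project the centered data onto the principal components (a change of basis)
scores = X_centered @ eigvecs
print("Fraction of variance explained by each PC:", eigvals / eigvals.sum())
```

The variance fractions printed at the end preview Tutorial 3’s question: how many components do we actually need to represent the data well?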
Dimensionality reduction is a core machine learning technique used constantly in neuroscience, for several reasons. Neuroscientists use it as a simple data analysis tool to compress data into a smaller number of dimensions better suited for further analysis and modeling. For example, decoding every pixel of an image from neural data is difficult because an image has hundreds to thousands of pixel dimensions. We could instead compress to a lower-dimensional version of those images and decode that using the methods and models we’ve already learned about (such as linear regression in Model Fitting). We can also visualize data better in lower dimensions: you’ll use PCA to visualize the internal representations of a deep network and of the mouse brain in Deep Learning. And we can use dimensionality reduction to think about and investigate low-dimensional structure in the brain: whether it exists, how it forms, and so on. You’ll gain more insight into this if you look through the bonus day materials on Autoencoders.
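As a small sketch of this compress-then-analyze workflow, the example below uses scikit-learn’s `PCA` (assuming scikit-learn is installed) on its built-in 8×8 digits dataset, which stands in for the high-dimensional image data discussed above; the 90% variance threshold is an arbitrary illustrative choice, not a rule from the tutorials.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# 8x8 digit images flattened to 64 pixel dimensions; these stand in for
# the high-dimensional images we might want to decode from neural data
X = load_digits().data

# Keep however many principal components explain 90% of the variance
pca = PCA(n_components=0.90)
X_low = pca.fit_transform(X)            # compressed representation to decode
X_recon = pca.inverse_transform(X_low)  # map back to pixel space

print(f"{X.shape[1]} pixels compressed to {X_low.shape[1]} components")
print(f"Mean squared reconstruction error: {np.mean((X - X_recon) ** 2):.3f}")
```

A decoder (e.g., the linear regression from Model Fitting) would then be trained to predict `X_low` rather than the raw pixels, and `inverse_transform` maps its predictions back to image space.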
## Install and import feedback gadget
```python
# @title Install and import feedback gadget
!pip3 install vibecheck datatops --quiet

from vibecheck import DatatopsContentReviewContainer


def content_review(notebook_section: str):
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_cn",
            "user_key": "y1x3mpx5",
        },
    ).render()


feedback_prefix = "W1D4_Intro"
```
## Video

## Slides
## Submit your feedback
```python
# @title Submit your feedback
content_review(f"{feedback_prefix}_Intro")
```