Intro

Overview

This day introduces you to some of the applications of deep learning in neuroscience. In the intro, Aude Oliva covers the basics of convolutional neural networks trained to do image recognition and how to compare these artificial neural networks to neural activity in the brain. The three tutorials then apply deep learning in three key ways it is used in neuroscience: decoding models, encoding models, and representational similarity analysis. All three tutorials use the same neural activity, recorded from the visual cortex of awake mice while the mice were presented with oriented grating stimuli.

In Tutorial 1, we start with simpler networks than the convolutional ones from the intro: networks consisting of fully connected linear layers. We introduce non-linear activation functions and show how to optimize these deep networks with backpropagation using PyTorch. We optimize the network to decode the presented visual stimulus from the neural activity recorded in visual cortex. Next, in Tutorial 2, we introduce convolutional layers, the building blocks of networks for visual tasks; the bonus in that tutorial is to fit an encoding model from visual stimuli to neural activity. Finally, in Tutorial 3, we optimize a convolutional neural network to perform an orientation discrimination task and compare the internal representations of the artificial network to neural activity using a technique called representational similarity analysis.

In the outro, the caveats of treating neural activity like a deep convolutional neural network are introduced and explored, including approaches to make deep networks more biologically plausible. In the second, optional outro, deep learning is used to perform pose estimation of infants, which in turn is used to make clinical judgments.
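As a preview of Tutorial 1, here is a minimal sketch of such a fully connected decoder in PyTorch. The array shapes, layer sizes, and hyperparameters are hypothetical stand-ins for illustration, not the tutorial's actual data or settings:

```python
import torch
from torch import nn

# Hypothetical shapes: `resp` stands in for (n_trials, n_neurons) recorded
# activity, `stims` for the (n_trials,) grating orientations in radians.
n_trials, n_neurons = 512, 100
resp = torch.randn(n_trials, n_neurons)
stims = torch.rand(n_trials) * torch.pi

# Fully connected decoder: linear layer -> non-linear activation -> readout.
decoder = nn.Sequential(
    nn.Linear(n_neurons, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.SGD(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    pred = decoder(resp).squeeze(-1)  # predicted orientation per trial
    loss = loss_fn(pred, stims)
    loss.backward()                   # backpropagation computes the gradients
    optimizer.step()                  # gradient step updates the weights
```

The call to `loss.backward()` is where backpropagation happens: PyTorch traverses the computation graph to compute the gradient of the loss with respect to every weight, and `optimizer.step()` then nudges the weights downhill.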

There is a growing need for data analysis tools as neuroscientists gain the ability to record larger neural populations during more complex behaviors. Deep neural networks can approximate a wide range of non-linear functions and can be easily fit, making them flexible model architectures for building decoding and encoding models of large-scale data. Generalized linear models were used as decoding and encoding models in W1D4 Machine Learning. A model that decodes a variable from neural activity can tell us how much information a brain area contains about that variable. An encoding model maps in the opposite direction, from an input variable, such as a visual stimulus, to neural activity. The encoding model is meant to approximate the transformation that the brain performs on its inputs and can therefore help us understand how the brain represents information.
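The distinction is just the direction of the mapping. As an illustration (with hypothetical dimensions), the two models below differ only in what they take as input and what they predict:

```python
from torch import nn

# Hypothetical dimensions for illustration.
n_neurons, n_pixels = 100, 24 * 24

# Decoding model: neural activity -> stimulus variable (here one scalar,
# e.g. grating orientation). Tells us how much stimulus information the
# recorded population carries.
decoder = nn.Linear(n_neurons, 1)

# Encoding model: stimulus (flattened image) -> neural activity.
# Approximates the transformation the brain applies to its inputs.
encoder = nn.Linear(n_pixels, n_neurons)
```

In W1D4 these maps were generalized linear models; the tutorials replace them with deep networks, which can capture non-linear transformations.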

The final application of deep neural networks, in Tutorial 3, is currently the most common one in neuroscience and involves comparing the activity of artificial neural networks to brain activity. Since deep convolutional neural networks are currently the only class of models that can match human accuracy on visual tasks like object recognition, they are often a natural starting point for comparison with neural data. This comparison can be done at a variety of scales: at the population level, as in the tutorial, or at the level of single neurons in the brain and single units in the deep network. This type of research can help answer questions such as what types of datasets and tasks produce neural networks that best approximate the brain (e.g. neural taskonomy), and what that means for the architecture and learning rules of the brain. Deep networks are also trained on other complex tasks, such as learning to explore environments and determine which stimuli are rewarding; stay tuned for more of this in W3D4 Reinforcement Learning.
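Representational similarity analysis itself is straightforward to sketch: summarize each system by how dissimilar its responses to different stimuli are, then compare those dissimilarity structures. Below is a minimal NumPy/SciPy version using correlation distance and Spearman rank correlation; the arrays are random placeholders, and the tutorial's exact distance and comparison choices may differ:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix (condensed form):
    1 - Pearson correlation between the response patterns to each
    pair of stimuli. `responses` has shape (n_stimuli, n_units)."""
    return pdist(responses, metric="correlation")

# Placeholder data: responses of recorded neurons and of one network
# layer to the same 40 stimuli.
neural = np.random.rand(40, 100)   # (n_stimuli, n_neurons)
layer = np.random.rand(40, 256)    # (n_stimuli, n_units)

# RSA score: rank correlation between the two RDMs. Because the
# comparison is between pairwise stimulus dissimilarities, the two
# systems need not have the same number of neurons or units.
score, _ = spearmanr(rdm(neural), rdm(layer))
print(f"RSA similarity: {score:.3f}")
```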

Video

Slides