[Colloquium] TODAY, 4:30: Data Science/Stats Candidate Talk - Matthew Golub (Stanford)

Rob Mitchum rmitchum at uchicago.edu
Tue Mar 1 11:39:01 CST 2022


*Data Science Institute/Statistics Candidate Seminar*


*Matthew Golub*
*Postdoctoral Fellow*
*Stanford University*

*Tuesday, March 1st*
*4:30 p.m. - 5:30 p.m.*
*In Person: John Crerar Library, Room 390*
*Remote: Live Stream <http://live.cs.uchicago.edu/mattgolub/> or Zoom
<https://uchicago.zoom.us/j/98886944120?pwd=eGtpNTZCSTdQQWJrUUJnSXd4TkZwUT09>
(details below)*


*Reverse Engineering Computation in the Brain*

Behavior and cognition are driven by the coordinated activity of
populations of neurons in the brain. A major challenge in systems
neuroscience is to infer the computational principles underlying the
activity of these neural populations. What are the algorithms implemented
by neural populations? How can we design experiments and analyses with
hypotheses about computation in mind? My research is aimed at developing
the theory, modeling, and machine learning techniques needed to realize
this vision of reverse engineering computation in the brain. Progress in
this research could lead to new treatments for neurological injuries and
disorders, new paradigms for optimizing our behavior and cognition, and new
approaches to generating artificial intelligence.

In this talk, I will present lines of previous, ongoing, and proposed
research that highlight the potential of this vision. First, I will present
a line of brain-computer interface experiments and modeling that revealed
principles guiding neural populations as they reorganize during learning.
Here, we discovered constraints faced by neural populations, which
predicted empirically observed limits on behavioral improvement during
learning. Second, I will present a recurrent neural network (RNN) modeling
framework for identifying the computations performed by a population
of recorded neurons. Here, we train RNN-based sequential variational
autoencoders to recapitulate single-trial neural population activity, and
then we decompose the RNN into a collection of interpretable dynamical
motifs that reveal the computation performed by the neural population.
Finally, I will outline plans for leveraging these tools and
insights toward i) dissecting the interplay between attention and decision
making, ii) optimizing stimulation of neural population dynamics, and iii)
accelerating learning in the brain.
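For readers unfamiliar with the RNN-based sequential variational autoencoder
approach mentioned above, the sketch below gives a rough, illustrative picture
of the idea (it is not the speaker's code; the LFADS-style initial-condition
encoder, layer sizes, and Poisson spike likelihood are all assumptions made
for illustration). It encodes a single-trial spike-count sequence into a
latent initial condition, unrolls an RNN generator from that condition, and
reads out firing rates:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialVAE(nn.Module):
    """Minimal LFADS-style sequential VAE sketch (illustrative only)."""
    def __init__(self, n_neurons=50, enc_dim=64, latent_dim=16, gen_dim=64):
        super().__init__()
        # Bidirectional RNN encoder summarizes the whole trial.
        self.encoder = nn.GRU(n_neurons, enc_dim, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * enc_dim, latent_dim)      # posterior mean of initial condition
        self.to_logvar = nn.Linear(2 * enc_dim, latent_dim)  # posterior log-variance
        self.latent_to_gen = nn.Linear(latent_dim, gen_dim)  # map latent sample to generator state
        # Generator RNN unrolls autonomous dynamics (driven by a dummy zero input).
        self.generator = nn.GRU(1, gen_dim, batch_first=True)
        self.readout = nn.Linear(gen_dim, n_neurons)         # log firing rate per neuron

    def forward(self, spikes):
        # spikes: (trials, time_bins, neurons) binned spike counts
        _, h = self.encoder(spikes)                           # h: (2, trials, enc_dim)
        h = torch.cat([h[0], h[1]], dim=-1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        g0 = torch.tanh(self.latent_to_gen(z)).unsqueeze(0)      # initial generator state
        dummy = spikes.new_zeros(spikes.shape[0], spikes.shape[1], 1)
        g, _ = self.generator(dummy, g0)                      # latent dynamics over time
        return self.readout(g), mu, logvar                    # log rates, posterior params

def elbo_loss(spikes, log_rates, mu, logvar):
    # Poisson reconstruction term plus KL of the initial-condition posterior vs. N(0, I).
    nll = F.poisson_nll_loss(log_rates, spikes, log_input=True, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return nll + kl

# Toy usage on simulated spike counts: 8 trials, 100 time bins, 50 neurons.
model = SequentialVAE()
spikes = torch.poisson(torch.rand(8, 100, 50))
log_rates, mu, logvar = model(spikes)
elbo_loss(spikes, log_rates, mu, logvar).backward()

In this spirit, the trained generator could then be probed for fixed points
and other interpretable dynamical motifs, which is the decomposition step the
abstract refers to.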

*Bio*: Matthew Golub <https://web.stanford.edu/~mgolub/> is a Postdoctoral
Fellow in Electrical Engineering at Stanford University. His research
interests are at the intersection of machine learning, neuroengineering,
and basic systems neuroscience. His current projects focus on interpreting
nonlinear dynamical systems models of neural population activity underlying
decision making processes in the brain. This work has been recognized by a
Pathway to Independence Award from the National Institutes of Health.
Previously, Matthew received his PhD in Electrical and Computer Engineering
(ECE) from Carnegie Mellon University, where he developed brain-computer
interfaces as a scientific paradigm for investigating the neural bases of
learning and feedback motor control. His thesis received the ECE
Department’s Best Thesis Award.

*Host*: Brent Doiron

*Zoom Info:*
https://uchicago.zoom.us/j/98886944120?pwd=eGtpNTZCSTdQQWJrUUJnSXd4TkZwUT09
ID: 988 8694 4120
Password: ds2022


-- 
*Rob Mitchum*

*Associate Director of Communications for Data Science and Computing*
*University of Chicago*
*rmitchum at uchicago.edu*
*773-484-9890*