[Colloquium] TODAY: Stefanie Jegelka (MIT) – Understanding Generalization and Improving Invariances in Graph Representation Learning

Rob Mitchum rmitchum at uchicago.edu
Fri Oct 28 09:43:18 CDT 2022


*NOTE ON LUNCH: Due to another event in 390, we will serve lunch outside of
the room. Please take a box from the left side (facing the doors of 390;
the table with the DSI sign on it), and don't enter 390 until 12:20. Thank
you!*

*Friday, October 28th*
*12:00pm - 1:30pm (12:00 lunch, 12:30 talk)*
*In Person: John Crerar Library 390*
*Zoom: *
*https://uchicagogroup.zoom.us/j/95352431675?pwd=L2x1ek40K1dmTVBtQ3c3NU8xVUk3QT09*

Meeting ID: 953 5243 1675
Passcode: 406911

*Stefanie Jegelka*
*X-Consortium Career Development Associate Professor, EECS*
*Massachusetts Institute of Technology*

*Understanding Generalization and Improving Invariances in Graph
Representation Learning*

*Abstract*: Graph representation learning is a recurring task in
applications such as computational chemistry, recommendation, reasoning, and
learning for combinatorial optimization. Across these applications,
understanding the generalization, invariances, and out-of-distribution
robustness of graph neural networks is an important challenge.

First, we consider out-of-distribution generalization in widely used
message passing graph neural networks (MPGNNs). We aim to understand the
conditions under which such generalization is possible. Defining such data
shifts also requires an appropriate metric on graphs; we show that a
pseudometric combining trees and optimal transport correlates well with the
stability of MPGNNs.
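
As a loose illustration of the optimal-transport ingredient only (not the
pseudometric studied in the talk), the Python sketch below compares two toy
graphs via the 1-Wasserstein distance between their node-degree
distributions; the function name degree_ot_distance and the example graphs
are hypothetical.

import numpy as np
from scipy.stats import wasserstein_distance

# Illustration only: compare two graphs by the 1-Wasserstein (optimal
# transport) distance between their node-degree distributions. This is a
# much cruder pseudometric than the tree-based one in the talk, but it
# shows the optimal-transport ingredient.
def degree_ot_distance(A1, A2):
    deg1 = A1.sum(axis=1)   # degree of each node in graph 1
    deg2 = A2.sum(axis=1)   # degree of each node in graph 2
    return wasserstein_distance(deg1, deg2)

# Two small graphs given as symmetric adjacency matrices.
path3 = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)   # path on 3 nodes
tri3 = np.array([[0, 1, 1],
                 [1, 0, 1],
                 [1, 1, 0]], dtype=float)    # triangle on 3 nodes

print(degree_ot_distance(path3, tri3))   # > 0: the degree profiles differ
print(degree_ot_distance(path3, path3))  # 0 for identical graphs

Identical graphs get distance zero and the value is invariant to node
relabeling, the minimal properties any such graph pseudometric needs.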

Second, many approaches to graph representation learning exploit spectral
information. However, processing eigenvectors and eigenspaces in a
consistent way demands specific model invariances, such as invariance to
sign flips and to the choice of basis within an eigenspace. We propose a new
architecture that encodes these invariances, can be combined with MPGNNs,
transformers, and other set architectures, and theoretically and empirically
goes beyond existing models.
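
To make the sign-invariance requirement concrete, here is a minimal numpy
sketch (an assumption-laden illustration, not the architecture proposed in
the talk): a hypothetical encoder phi is applied to both v and -v and the
results are summed, so the output cannot depend on the arbitrary sign of an
eigenvector.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoder phi: any per-eigenvector feature map.
def phi(v, W):
    return np.tanh(W @ v)

def sign_invariant_encode(v, W):
    # phi(v) + phi(-v) is unchanged under v -> -v, the sign ambiguity
    # of eigenvectors that a consistent model must respect.
    return phi(v, W) + phi(-v, W)

# Toy symmetric matrix standing in for a graph Laplacian.
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2
_, vecs = np.linalg.eigh(A)
v = vecs[:, 0]                       # one eigenvector; its sign is arbitrary
W = rng.standard_normal((3, 5))      # random weights for the toy encoder

# The encoding agrees for v and -v, while phi alone generally does not.
assert np.allclose(sign_invariant_encode(v, W), sign_invariant_encode(-v, W))
print(sign_invariant_encode(v, W))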

This talk is based on joint work with Ching-Yao Chuang, Joshua Robinson,
Derek Lim, Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi
Kawarabayashi, Lingxiao Zhao, Tess Smidt, Suvrit Sra and Haggai Maron.

*Bio*: Stefanie Jegelka <https://people.csail.mit.edu/stefje/> is an
X-Consortium Career Development Associate Professor in the Department of
EECS at MIT. She is a member of the Computer Science and AI Lab (CSAIL),
the Center for Statistics, and an affiliate of IDSS and ORC. Before joining
MIT, she was a postdoctoral researcher at UC Berkeley and obtained her PhD
from ETH Zurich and the Max Planck Institute for Intelligent Systems.
Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a
DARPA Young Faculty Award, Google research awards, a Two Sigma faculty
research award, the German Pattern Recognition Award, and a Best Paper Award
at the International Conference on Machine Learning (ICML). She was also an
invited sectional lecturer at ICM 2022. She has served as an Area Chair for
NeurIPS and ICML, as an Action Editor for JMLR, and as Program Chair for
ICML 2022. Her research interests span the theory and practice of
algorithmic machine learning.


-- 
*Rob Mitchum*

*Associate Director of Communications for Data Science and Computing*
*Department of Computer Science*
*Data Science Institute*
*University of Chicago*
*rmitchum at uchicago.edu*
*773-484-9890*