[Theory] REMINDER: 5/10 TTIC Distinguished Lecture Series: Jason Eisner, Johns Hopkins University

Brandie Jones bjones at ttic.edu
Fri May 5 15:00:00 CDT 2023


*When:    * Wednesday, May 10th at *11:30 AM CT*



*Where:    *Talk will be given *live, in-person* at

                     TTIC, 6045 S. Kenwood Avenue

                     5th Floor, Room 530


*Virtually:  *via Panopto (Livestream
<https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6355939c-49d5-496a-9d9e-af8e0117f336>)



*Who:         *Jason Eisner, Johns Hopkins University

*Title:        *Putting Planning and Reasoning Inside Language Models

*Abstract:   *Large autoregressive language models have been amazingly
successful.  Nonetheless, should they be integrated with older AI
techniques such as explicit knowledge representation, planning, and
inference?  I'll discuss three possible reasons to do so:

1. Capacity: Current autoregressive models lack the computational capacity
to attack combinatorially hard problems.
2. Modularization: Results could be improved by consulting up-to-date
domain knowledge, domain-specific theories, and systematic reasoning.
3. Interpretability: Ideally, generated answers should be able to discuss
the underlying reasoning and the certainty of their conclusions.


As possible directions, I'll outline some costly but interesting extensions
to the standard autoregressive language models -- neural FSTs, lookahead
models, and nested latent-variable models.  Much of this work is still in
progress, so the focus will be on designs rather than results.
Collaborators include Chu-Cheng Lin, Weiting (Steven) Tan, Li (Leo) Du,
Zhichu (Brian) Lu, and Hongyuan Mei.

Bio:    Jason Eisner is a Professor of Computer Science at Johns Hopkins
University, as well as Director of Research at Microsoft Semantic Machines.
He is a Fellow of the Association for Computational Linguistics. At Johns
Hopkins, he is also affiliated with the Center for Language and Speech
Processing, the Mathematical Institute for Data Science, and the Cognitive
Science Department. His goal is to develop the probabilistic modeling,
inference, and learning techniques needed for a unified model of all kinds
of linguistic structure. His 150+ papers have presented various algorithms
for parsing, machine translation, and weighted finite-state machines;
formalizations, algorithms, theorems, and empirical results in
computational phonology; and unsupervised or semi-supervised learning
methods for syntax, morphology, and word-sense disambiguation. He is also
the lead designer of Dyna, a declarative programming language that provides
an infrastructure for AI algorithms. He has received two school-wide awards
for excellence in teaching, as well as recent Best Paper Awards at ACL
2017, EMNLP 2019, and NAACL 2021 and an Outstanding Paper Award at ACL 2022.

*Host:* Karen Livescu <klivescu at ttic.edu>

--
*Brandie Jones *
*Executive **Administrative Assistant*
Toyota Technological Institute
6045 S. Kenwood Avenue
Chicago, IL  60637
www.ttic.edu
Working Remotely on Tuesdays

