[Theory] REMINDER: 2/7 Talks at TTIC: Xiang Lorraine Li, UMass Amherst

Mary Marre mmarre at ttic.edu
Mon Feb 7 10:50:48 CST 2022


*When:        *Monday, February 7th at *11:30am CT*

*Where:*       Zoom Virtual Talk (*register in advance here
<https://uchicagogroup.zoom.us/webinar/register/WN_2OEpDElMRPmwXx6tyxneKA>*)

*Who: *         Xiang Lorraine Li, UMass Amherst


*Title: *         Probabilistic Commonsense Knowledge in NLP

*Abstract: *Commonsense knowledge is critical to achieving artificial
general intelligence. This shared background knowledge is implicit in all
human communication, facilitating efficient information exchange and
understanding. But commonsense research is hampered by the immense quantity
of such knowledge, which defies explicit categorization. Furthermore,
common sense yields probable assumptions rather than definitive answers: a
plumber could repair a sink in a kitchen or a bathroom. To align with these
fundamental properties of common sense, we want not only to model but also
to evaluate such knowledge the way humans do, using abstractions and
probabilistic principles.

Traditional combinatorial probabilistic models, e.g., probabilistic
graphical model (PGM) approaches, have difficulty modeling large-scale
probability distributions over thousands or even millions of commonsense
events. Embedding-based representation learning, on the other hand,
generalizes well to large combinations of events but struggles to produce
consistent probabilities under different styles of queries. Combining the
benefits of both, we introduce probabilistic box embeddings, which
represent joint probability distributions in a learned latent space of
geometric embeddings. Box embeddings make it possible to handle queries
involving intersections, unions, and negations in a manner similar to
Venn-diagram reasoning, which remains difficult even for large language
models.
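To make the geometric intuition concrete, here is a minimal sketch of the
box-embedding idea (an illustrative toy, not the speaker's implementation):
each event is an axis-aligned box inside a unit universe, its probability
is the box's volume, and joint, union, and negation queries reduce to
simple geometry. The specific events and coordinates are invented for
illustration.

```python
import numpy as np

def volume(lo, hi):
    """Volume of an axis-aligned box; zero if it is empty in any dimension."""
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def intersect(box_a, box_b):
    """The intersection of two axis-aligned boxes is itself a box (possibly empty)."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    return lo, hi

# Two hypothetical events embedded as boxes inside the unit square [0,1]^2.
A = (np.array([0.0, 0.0]), np.array([0.6, 0.8]))  # P(A) = 0.6 * 0.8 = 0.48
B = (np.array([0.3, 0.2]), np.array([1.0, 1.0]))  # P(B) = 0.7 * 0.8 = 0.56

p_a = volume(*A)
p_b = volume(*B)
p_a_and_b = volume(*intersect(A, B))      # P(A ∩ B): volume of the overlap
p_a_or_b = p_a + p_b - p_a_and_b          # P(A ∪ B) by inclusion-exclusion
p_not_a = 1.0 - p_a                       # P(¬A) relative to the unit universe
p_b_given_a = p_a_and_b / p_a             # conditional probability P(B | A)

print(p_a, p_b, p_a_and_b, p_a_or_b)
```

In a trained model, the box corners would be learned parameters fit so that
the resulting volumes match observed event probabilities; the point here is
only that intersection, union, and negation queries all stay closed-form.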

Meanwhile, existing evaluations do not reflect the probabilistic nature of
commonsense knowledge. The popular multiple-choice evaluation style often
misleads us into believing that common sense is solved. To fill this gap,
we propose a method for eliciting commonsense-related question-answer
distributions from human annotators, as well as a novel method of
generative evaluation. We apply these approaches in two new commonsense
datasets (ProtoQA and Commonsense Frame Completion).

*Host*: *Karen Livescu* <klivescu at ttic.edu>


Mary C. Marre
Faculty Administrative Support
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Chicago, IL  60637*
*mmarre at ttic.edu <mmarre at ttic.edu>*



