<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><div class="gmail_default"><div dir="ltr"><div class="gmail_default"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b> </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit"> Wednesday, February 24th at<b> 11:10 am CT</b></font></font><br></font></div></div><div dir="ltr"><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b> </font></font><font color="#000000">Zoom Virtual Talk (</font><b><font color="#0000ff"><a href="https://uchicagogroup.zoom.us/webinar/register/WN_qJjjpBu2QTeaVuNFFUuANA" target="_blank">register in advance here</a></font></b><font color="#000000">)</font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p></div><div class="gmail_default"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Who: </b> </font></font></font>Hongyuan Mei, Johns Hopkins University</div><br></div><div class="gmail_default"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b><br></b></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Title: </b> </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;white-space:pre-wrap">Probabilistic Modeling for Event Sequences</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Abstract:</b> </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;white-space:pre-wrap">Suppose we are monitoring discrete events in real time. Can we predict what events will occur in the future, and when? For example, can we probabilistically predict a patient's prognosis, eventual diagnosis, and treatment cost based on their symptoms and treatments so far? What will an online customer buy in the future? What will a social media user share, like, or comment on? What workload will a computer system receive over the next 5 minutes?</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">This talk will present the neural Hawkes process (NHP), a flexible probabilistic model that supports such reasoning. 
I will sketch methods for estimating its parameters (via MLE and NCE), sampling predictions of the future (via rejection sampling), and imputing past events that we have missed (via particle smoothing). </span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">I'll then show how to scale the NHP (or neural sequential models in general) to real-world domains that involve many event types. We begin with a temporal deductive database that tracks how relevant facts, including the possible event types, change over time. We take the system state to be a collection of vector-space embeddings of these facts, and derive a deep recurrent dynamic neural architecture from the temporal Datalog program that specifies the temporal database. We call this method "neural Datalog through time."</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">I'll also sketch a few future research directions, including embedding the NHP model within a reinforcement learner to discover causal structure and learn intervention policies that can improve future outcomes.</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">This work was done with Jason Eisner and other collaborators, including Guanghui Qin, Tom Wan, and Minjie Xu. 
</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-family:Arial;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><b>Bio:</b> </span><span style="background-color:transparent;color:rgb(0,0,0);font-family:Arial;white-space:pre-wrap">Hongyuan Mei is a Ph.D. student in the Department of Computer Science at the Johns Hopkins University (JHU), affiliated with the Center for Language and Speech Processing (CLSP). He is a Bloomberg Data Science Ph.D. Fellow and the 2020 recipient of the Frederick Jelinek Fellowship. He develops machine learning methods for real-world problems, especially probabilistic models for both structured and unstructured data, along with efficient algorithms for training and inference, with applications in event sequence modeling and natural language processing. His papers have appeared at NeurIPS, ICML, NAACL, and AAAI. </span></p></div><div class="gmail_default"><br></div><div class="gmail_default"><br></div><div class="gmail_default"><b>Host: </b><a href="mailto:mwalter@ttic.edu" target="_blank">Matthew Walter</a></div><div class="gmail_default"><br></div><div class="gmail_default"><br></div></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><font face="arial, helvetica, sans-serif">Mary C. Marre</font><div><font face="arial, helvetica, sans-serif">Faculty Administrative Support</font></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6"><b>Toyota Technological Institute</b></font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6">6045 S. 
Kenwood Avenue</font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Room 517</font></i></div><div><i><font face="arial, helvetica, sans-serif" color="#3d85c6">Chicago, IL 60637</font></i></div><div><i><font face="arial, helvetica, sans-serif">p: (773) 834-1757</font></i></div><div><i><font face="arial, helvetica, sans-serif">f: (773) 357-6970</font></i></div><div><b><i><a href="mailto:mmarre@ttic.edu" target="_blank"><font face="arial, helvetica, sans-serif">mmarre@ttic.edu</font></a></i></b></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div></div>