<div dir="ltr"><font face="arial, sans-serif"><span id="gmail-m_3101035450978274288gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"><p style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;line-height:normal;margin:0px"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>When:</b>    </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">  Monday, January 27th at 11:00am</font></font></p></span><br></font><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Where:</b>     </font></font><font style="vertical-align:inherit"><font style="vertical-align:inherit">TTIC, 6045 S. 
Kenwood Avenue, 5th Floor, Room 526</font></font></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"> </font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;text-align:justify;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><font style="vertical-align:inherit"><font style="vertical-align:inherit"><b>Who: </b>       </font></font>Subhransu Maji, University of Massachusetts Amherst</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;text-align:justify;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="arial, sans-serif"><br></font></p><p style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><font face="arial, sans-serif"><span style="color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Title:</span><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">        Task-specific Recognition by Modeling Visual Tasks and their Relations</span></font></p><span id="gmail-m_3101035450978274288gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"><font face="arial, sans-serif"><br><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span 
style="color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Abstract: </span><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The AI revolution, powered in part by advances in deep learning, has led to many successes in the last decade. Among others, it has enabled us to study various ecological phenomena by analyzing data from weather radar networks at an unprecedented scale (e.g., [1, 2]). Yet the vast majority of important applications remain beyond the scope of current AI systems. One of the barriers is that existing algorithms lack the ability to learn from limited training data. This is a fundamental challenge because most real-world data is heavy-tailed and supervision is hard to acquire. I argue that a principled framework for reasoning about AI problems can enable modular and data-efficient solutions. Towards this end, I will describe our framework for modeling computer vision tasks and the relations between them (e.g., [3, 4]). Our approach, called "Task2Vec", computes a vector representation of a task using the Fisher information of a generic “probe” network. We show that the distance between these vectors correlates with natural metrics between the domains and labels of tasks. It is also predictive of transfer, i.e., how much training a deep network on one task benefits another, and can be used for model recommendation. On a portfolio of hundreds of vision tasks, the recommended network outperforms the current gold standard of fine-tuning an ImageNet pre-trained network. 
I’ll conclude with some of the life-cycle challenges that we need to address to make AI systems widely applicable.</span></p></font></span><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><span><font face="arial, sans-serif"><p style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">References:</span></p></font></span></blockquote><span><font face="arial, sans-serif"><ol style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="margin-left:15px;list-style-type:decimal;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Kyle G. Horton, Frank A. La Sorte, Daniel Sheldon, Tsung-Yu Lin, Kevin Winner, Garrett Bernstein, Subhransu Maji, Wesley M. Hochachka, and Andrew Farnsworth. 
</span><a href="https://www.nature.com/articles/s41558-019-0648-9" target="_blank" style="text-decoration-line:none"><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"> </span><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline">Phenology of nocturnal avian migration has shifted at the continental scale</span></a><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">, Nature Climate Change, Dec 2019</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:decimal;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Tsung‐Yu Lin, Kevin Winner, Garrett Bernstein, Abhay Mittal, Adriaan M. Dokter, Kyle G. Horton, Cecilia Nilsson, Benjamin M. Van Doren, Andrew Farnsworth, Frank A. 
La Sorte, Subhransu Maji, and Daniel Sheldon.</span><a href="https://besjournals.onlinelibrary.wiley.com/doi/abs/10.1111/2041-210X.13280" target="_blank" style="text-decoration-line:none"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline"> MistNet: Measuring historical bird migration in the US using archived weather radar data and convolutional neural networks</span></a><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">, Methods in Ecology and Evolution 10(11):1908-1922, Aug 2019</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:decimal;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, and Pietro Perona.</span><a href="https://arxiv.org/abs/1902.03545" target="_blank" style="text-decoration-line:none"><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"> </span><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline">Task2Vec: Task Embedding for Meta-Learning</span></a><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">, ICCV 2019</span></p></li><li dir="ltr" 
style="margin-left:15px;list-style-type:decimal;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Jong-Chyi Su, Subhransu Maji, and Bharath Hariharan.</span><a href="https://arxiv.org/abs/1910.03560" target="_blank" style="text-decoration-line:none"><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"> </span><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline">When Does Self-supervision Improve Few-shot Learning?</span></a><span style="background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline"> arXiv:1910.03560, Oct 2019</span></p></li></ol><br></font></span><div><span id="gmail-m_3101035450978274288gmail-m_-840910721687754024gmail-m_-4770585944477743m_-947167383461397716gmail-m_9168911489282912842m_7082043948795332671gmail-m_3918681043343225718m_7776154040874388940gmail-docs-internal-guid-d34d626a-7fff-fa18-31b7-15dad8158c9a"><font face="arial, sans-serif"><span style="color:rgb(0,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Bio:</span><span style="color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> I am an Assistant Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst, where I co-direct the Computer Vision Lab. I am also affiliated with the Center for Data Science and AWS AI. 
Prior to this, I spent three years as a Research Assistant Professor at TTI Chicago, a philanthropically endowed academic institute on the University of Chicago campus. I obtained my Ph.D. in Computer Science from the University of California at Berkeley in 2011 and a B.Tech. in Computer Science and Engineering from IIT Kanpur in 2006. For my work, I have received a Google graduate fellowship, an NSF CAREER Award (2018), and a best paper honorable mention at CVPR 2018. I also serve on the editorial board of the International Journal of Computer Vision (IJCV). </span></font></span></div><div><br></div><div><b style="color:rgb(0,0,0)"><font face="arial, sans-serif">Host: </font></b><font face="arial, sans-serif" style="color:rgb(0,0,0)"><a href="mailto:greg@ttic.edu" target="_blank">Greg Shakhnarovich</a></font> </div><div><br></div><div> <br></div>-- <br><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><b><font color="#0b5394">Alicia McClarin</font></b><div><div><font color="#0b5394"><i>Toyota Technological Institute at Chicago</i></font></div><div><div><font color="#0b5394"><i>6045 S. Kenwood Ave., </i></font><i style="color:rgb(11,83,148)">Office 504</i></div><div><i style="color:rgb(11,83,148)">Chicago, IL 60637</i><br></div></div><div><i style="color:rgb(11,83,148)">773-834-3321</i><i style="color:rgb(11,83,148)"><br></i></div><div><a href="http://www.ttic.edu/" target="_blank"><font color="#0b5394"><i>www.ttic.edu</i></font></a></div></div></div></div></div></div></div></div></div></div></div></div>