[Theory] REMINDER 2pm talk : 10/25 Talks at TTIC: Malcolm Slaney, Google Machine Hearing Research
Mary Marre
mmarre at ttic.edu
Fri Oct 25 13:49:43 CDT 2019
*When:* Friday, October 25th at 2:00 pm
*Where:* TTIC, 6045 S. Kenwood Avenue, 5th Floor, Room 526
*Who:* Malcolm Slaney, Google Machine Hearing Research
*Title:* Signal Processing and Machine Learning for Attention
*Abstract:* Our devices work best when they understand what we are doing or
trying to do. A large part of this problem is understanding what we are
attending to. I’d like to talk about how we can do this in the visual (easy)
and auditory (much harder and more interesting) domains. Eye tracking is a
good but imperfect signal. Auditory attention is buried in the brain, and
recent EEG (and ECoG and MEG) work gives us insight into it. These signals
can be used to improve the user interface for speech recognition and the
auditory environment. I’ll talk about using eye tracking to improve speech
recognition (yes!) and how we can use attention decoding to emphasize the
most important audio signals and to get insight into the cognitive load
that our users are experiencing. Long term, I’ll argue that listening
effort is an important new metric for improving our interfaces. Listening
effort is often measured by evaluating performance on a dual-task
experiment, which involves divided attention.
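
For readers unfamiliar with attention decoding: a common approach in the
EEG literature (not necessarily the one used in this talk) is linear
stimulus reconstruction, where a ridge-regularized linear map from
time-lagged EEG channels to the speech envelope is trained on attended
speech, and attention is then decoded by asking which candidate talker's
envelope the reconstruction correlates with best. Below is a minimal
sketch of that idea; the lag count, ridge parameter, sampling rate, and
array shapes are illustrative assumptions.

    import numpy as np

    def build_lagged(eeg, n_lags):
        """Stack time-lagged copies of each channel; eeg is (samples, channels)."""
        n_samples, n_channels = eeg.shape
        X = np.zeros((n_samples, n_channels * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
        return X

    def train_decoder(eeg, attended_env, n_lags=32, ridge=1e3):
        """Fit a linear map from lagged EEG to the attended speech envelope.

        Solves the ridge-regularized normal equations
        w = (X'X + aI)^{-1} X'y.
        """
        X = build_lagged(eeg, n_lags)
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ attended_env)

    def decode_attention(eeg, env_a, env_b, w, n_lags=32):
        """Reconstruct the envelope from EEG; the talker whose envelope
        correlates better with the reconstruction is labeled attended."""
        recon = build_lagged(eeg, n_lags) @ w
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return "A" if r_a > r_b else "B"

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        eeg = rng.standard_normal((6400, 64))   # ~100 s of 64-ch EEG at 64 Hz (made up)
        env_a = rng.standard_normal(6400)       # envelopes of two candidate talkers
        env_b = rng.standard_normal(6400)
        w = train_decoder(eeg, env_a)           # pretend talker A was attended
        print(decode_attention(eeg, env_a, env_b, w))

The same correlation machinery extends naturally to the dual-task setting
mentioned above, where a drop in secondary-task performance serves as a
behavioral proxy for listening effort.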
*Bio:* BSEE, MSEE, and Ph.D., Purdue University. Dr. Malcolm Slaney is a
research scientist in the AI Machine Hearing Group at Google. He is an
Adjunct Professor at Stanford CCRMA, where he has led the Hearing Seminar
for more than 20 years, and an Affiliate Faculty member in the Electrical
Engineering Department at the University of Washington. He has served as an
Associate Editor of IEEE Transactions on Audio, Speech and Signal
Processing and of IEEE Multimedia Magazine. He has given successful tutorials
at ICASSP 1996 and 2009 on “Applications of Psychoacoustics to Signal
Processing,” on “Multimedia Information Retrieval” at SIGIR and ICASSP, on
“Web-Scale Multimedia Data” at ACM Multimedia 2010, and on “Sketching Tools
for Big Data Signal Processing” at ICASSP 2019. He is a coauthor, with A.
C. Kak, of the IEEE book “Principles of Computerized Tomographic Imaging,”
which was republished by SIAM in their “Classics in Applied Mathematics”
series. He is coeditor, with Steven Greenberg, of the book
“Computational Models of Auditory Function.” Before joining Google, Dr.
Slaney worked at Bell Laboratories, Schlumberger Palo Alto Research,
Apple Computer, Interval Research, IBM’s Almaden Research Center, Yahoo!
Research, and Microsoft Research. For many years he has led the auditory
group at the Telluride Neuromorphic (Cognition) Workshop. Dr. Slaney’s
recent work is on understanding attention and general audio perception. He
is a Senior Member of the ACM and a Fellow of the IEEE.
*Host:* Karen Livescu <klivescu at ttic.edu>
Mary C. Marre
Administrative Assistant
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL 60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu*