[Theory] Re: REMINDER: 3/5 Talks at TTIC: Hongyang Zhang, Carnegie Mellon

Mary Marre mmarre at ttic.edu
Tue Mar 5 10:12:38 CST 2019


When:     Tuesday, March 5th at *11:00 am*

Where:    TTIC, 6045 S Kenwood Avenue, 5th Floor, Room 526

Who:       Hongyang Zhang, Carnegie Mellon


*Title:*      Non-Convex Learning: Optimization and Robustness

*Abstract:* Non-convex learning has received significant attention in
recent years due to its wide applicability across many problems. Despite a
large body of work on non-convex learning, two fundamental questions
remain unresolved: one computational, the other concerning security.

From the computational side, a long-standing question is how to design
efficient algorithms that reach the global optimum of a non-convex problem
in polynomial time. In this talk, we analyze the loss landscape of two
classic non-convex problems: matrix factorization and deep learning. We
show that both problems enjoy a small duality gap, a natural measure of a
problem's non-convexity. The analysis thus bridges non-convex learning with
its convex counterpart and explains why matrix factorization and deep
learning are intrinsically not hard to optimize.
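For readers unfamiliar with the term: the duality gap is the difference
between the optimal value of a minimization problem and the optimal value
of its Lagrangian dual. The statement below uses only standard textbook
definitions and is not specific to the talk's results:

```latex
% Primal and Lagrangian dual optimal values for a problem
% with Lagrangian L(x, \lambda):
p^{*} = \min_{x} \; \sup_{\lambda \ge 0} L(x, \lambda), \qquad
d^{*} = \sup_{\lambda \ge 0} \; \min_{x} L(x, \lambda).
% Weak duality always gives d^{*} \le p^{*}, so the duality gap
\mathrm{gap} = p^{*} - d^{*} \ge 0
% is nonnegative; it vanishes for well-behaved convex problems
% (e.g. under Slater's condition). A small gap thus certifies
% that a problem is "nearly convex".
```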

From the security side, non-convex models, especially deep neural
networks, are not robust to adversarial examples. In this talk, we identify
a trade-off between robustness and accuracy that serves as a guiding
principle in the design of defenses against adversarial examples. We
quantify the trade-off as the gap between the risk on adversarial examples
and the risk on non-adversarial examples, and give an optimal upper bound
on this quantity in terms of a classification-calibrated loss. Inspired by
this analysis, we design a new defense method, TRADES, that trades
adversarial robustness off against accuracy. The algorithm is the
foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge,
where we won first place out of 1,995 submissions, surpassing the
runner-up by 11.41%.
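The robustness-accuracy trade-off behind TRADES can be sketched as a
surrogate loss with two terms: cross-entropy on the clean input (accuracy)
plus a beta-weighted KL divergence between the model's predictions on the
clean and the perturbed input (robustness). The sketch below is an
illustrative reconstruction of that objective for a single example, not
the authors' implementation; the function names and the scalar logits are
hypothetical, and in the published method the perturbed input is found by
an inner maximization over a small norm ball, which is omitted here.

```python
import math

def softmax(z):
    # Numerically stable softmax over a list of logits.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl(p, q):
    # KL divergence between two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def trades_loss(logits_nat, logits_adv, y, beta=6.0):
    # Accuracy term: cross-entropy of the clean prediction at label y.
    ce = -math.log(softmax(logits_nat)[y])
    # Robustness term: divergence between clean and perturbed predictions;
    # it is zero when the perturbation does not change the prediction.
    rob = kl(softmax(logits_nat), softmax(logits_adv))
    return ce + beta * rob
```

Larger beta pushes the model toward invariance under perturbation at the
cost of clean accuracy, which is exactly the trade-off the abstract
describes.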

Joint work with Nina Balcan, Laurent El Ghaoui, Jiantao Jiao, Michael I.
Jordan, Yingyu Liang, David P. Woodruff, Eric P. Xing, and Yaodong Yu


Host: Avrim Blum <avrim at ttic.edu>



Mary C. Marre
Administrative Assistant
*Toyota Technological Institute*
*6045 S. Kenwood Avenue*
*Room 517*
*Chicago, IL  60637*
*p: (773) 834-1757*
*f: (773) 357-6970*
*mmarre at ttic.edu <mmarre at ttic.edu>*

