
12th Annual Machine Learning Symposium

Available via Livestream

Friday, March 9, 2018

The New York Academy of Sciences, 7 World Trade Center, 250 Greenwich St Fl 40, New York, USA

Presented By

Machine Learning Discussion Group

The New York Academy of Sciences

Machine Learning, a subfield of computer science, develops mathematical algorithms that discover patterns in data sets and iteratively "learn" from the data to make predictions. Today, Machine Learning has a wide range of applications, including natural language processing, search engine optimization, medical diagnosis and treatment, financial fraud detection, and stock market analysis.
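
To make the "learn iteratively, then predict" loop concrete, here is a minimal, self-contained Python sketch (not tied to any talk at the symposium) that fits a one-variable linear model by gradient descent; the toy data and hyperparameters are invented for illustration.

```python
# Minimal sketch of the iterative "learn from data, then predict" loop
# described above; the toy data and hyperparameters are invented.

# Toy data: y is roughly 2*x + 1 with a little noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

w, b = 0.0, 0.0           # model parameters, to be learned from the data
lr = 0.01                 # learning rate (step size)

for step in range(2000):  # iterative learning: repeated small corrections
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = (w * x + b) - y            # prediction error on one example
        grad_w += 2 * err * x / len(xs)
        grad_b += 2 * err / len(xs)
    w -= lr * grad_w                     # nudge parameters downhill in error
    b -= lr * grad_b

print(f"learned model: y ~ {w:.2f}*x + {b:.2f}")
print(f"prediction for x = 5: {w * 5 + b:.2f}")
```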

This symposium, the twelfth in an ongoing series presented by the Machine Learning Discussion Group at the New York Academy of Sciences, will feature keynote presentations from leading scientists in both applied and theoretical Machine Learning, along with Spotlight Talks: a series of short presentations by early-career investigators on topics at the frontier of Machine Learning.

Livestream

Two of the keynote addresses for this meeting are available via Livestream. For full details and to view the streams, use the link below:

https://livestream.com/newyorkacademyofsciences

Registration

Member: $60
Nonmember Academia, Faculty, etc.: $105
Nonmember Corporate, Other: $160
Nonmember Not for Profit: $105
Nonmember Student, Undergrad, Grad, Fellow: $70
Member Student, Post-Doc, Fellow: $25

Scientific Organizing Committee

Corinna Cortes, PhD, Google Research
Elad Hazan, PhD, Princeton University
Tony Jebara, PhD, Columbia University
John Langford, PhD, Microsoft
Naoki Abe, PhD, IBM Research
Patrick Haffner, PhD, Interactions Corporation
Alexander Rakhlin, PhD, University of Pennsylvania
Jennifer L. Costley, PhD, The New York Academy of Sciences
Mehryar Mohri, PhD, Courant Institute of Mathematical Sciences, New York University
Robert Schapire, PhD, Microsoft Research

Keynote Speakers

Zeyuan Allen-Zhu, ScD, Microsoft
Constantinos Daskalakis, PhD, Massachusetts Institute of Technology
Sergey Levine, PhD, UC Berkeley
Meredith Whittaker, AI Now Institute and Google Open Research

Friday, March 9, 2018

Symposium Agenda

9:00 AM

Registration, Continental Breakfast, and Poster Set-up

10:00 AM

Welcome Remarks

10:10 AM

Improving Generative Adversarial Networks Using Game Theory and Statistics

Speaker

Constantinos Daskalakis, PhD (Keynote Speaker)
Massachusetts Institute of Technology
10:50 AM

Audience Q&A

11:05 AM

Communication-Efficient and Differentially-Private Distributed Gradient Descent

Speaker

Naman Agarwal, MS
Princeton University
11:10 AM

Training GANs with Optimism

Speaker

Andrew Ilyas
MIT
11:15 AM

Noise-Based Regularizers for Recurrent Neural Networks

Speaker

Adji B. Dieng, MPhil
Columbia University
11:20 AM

Spectrally-Normalized Margin Bounds for Neural Networks

Speaker

Dylan Foster
Cornell University
11:25 AM

Computational Challenges of Sample-Efficient Exploration in Reinforcement Learning with Function Approximation

Speaker

Nan Jiang, PhD
Microsoft Research
11:30 AM

ZigZag: A New Approach to Adaptive Online Learning

Speaker

Dylan Foster
Cornell University
11:35 AM

Networking Break and Poster Viewing

12:20 PM

Data Genesis: Examining and Accounting for the Data that Trains AI Systems

Speaker

Meredith Whittaker (Keynote Speaker)
AI Now Institute and Google Open Research
1:00 PM

Audience Q&A

1:15 PM

Networking Lunch and Poster Viewing

2:30 PM

Leverage Score Sampling for Faster Accelerated Regression and ERM

Speaker

Naman Agarwal, MS
Princeton University
2:35 PM

Parameter-Free Online Learning via Model Selection

Speaker

Dylan Foster
Cornell University
2:40 PM

PAC Reinforcement Learning with an Imperfect Model

Speaker

Nan Jiang, PhD
Microsoft Research
2:45 PM

Learning to Predict and Control Linear Dynamical Systems via Spectral Filtering

Speaker

Karan Singh
Princeton University and Google Brain
2:50 PM

Multiple-Source Adaptation with Cross-Entropy Loss

Speaker

Ningshan Zhang
New York University
2:55 PM

How to Swing By Saddle Points: Faster Non-Convex Optimization Than SGD

Speaker

Zeyuan Allen-Zhu, ScD (Keynote Speaker)
Microsoft

The diverse world of deep learning research has given rise to thousands of neural network architectures. To date, however, the underlying training algorithms remain stochastic gradient descent (SGD) and its heuristic variants.

In this talk, we present a new stochastic algorithm that trains any smooth neural network to ε-approximate local minima using O(ε^{−3.25}) backpropagations; before this work, the best provable bound, achieved by SGD, was O(ε^{−4}). More broadly, the algorithm finds ε-approximate local minima of any smooth nonconvex function using O(ε^{−3.25}) stochastic gradient computations.
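
The abstract does not spell out the algorithm itself, but the core idea of "swinging by" saddle points can be illustrated generically. The hedged Python sketch below is not the speaker's method: it shows how injecting a little noise into gradient steps lets an iterate escape the saddle point of the toy function f(x, y) = x^2 − y^2. The function, step size, and noise scale are all invented.

```python
import numpy as np

# Toy illustration (NOT the speaker's algorithm): f(x, y) = x^2 - y^2 has a
# saddle point at the origin. Plain gradient steps started exactly at the
# saddle barely move, but adding a little isotropic noise to each step lets
# the iterate slide off along the descending y-direction.

def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])  # gradient of x^2 - y^2

rng = np.random.default_rng(0)
p = np.array([1e-8, 1e-8])            # start (almost) at the saddle point
lr, noise = 0.05, 0.01                # invented step size and noise scale

for step in range(100):
    g = grad(p) + noise * rng.standard_normal(2)  # noisy stochastic gradient
    p = p - lr * g                                # descent step

print("final point:", p)  # |y| has grown: the iterate escaped the saddle
```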

3:35 PM

Audience Q&A

3:50 PM

Networking Break

4:05 PM

Machines that Learn by Doing

Speaker

Sergey Levine, PhD (Keynote Speaker)
UC Berkeley

Advances in machine learning have made it possible to build algorithms that can make complex and accurate inferences for open-world perception problems, such as recognizing objects in images or recognizing words in human speech. These advances have been enabled by improvements in models and algorithms, such as deep neural networks, by increases in the amount of available computation, and, crucially, by the availability of large amounts of manually labeled data. However, when we consider how we might build intelligent machines that can act, rather than just perceive, the requirement for massive amounts of human-labeled data becomes onerous and, in many cases, prohibitive. In this talk, I will discuss research in my group that aims to make learning fully autonomous by enabling robots to improve continuously from experience they collect on their own, either by attempting tasks in the real world or simply by watching humans acting in their natural environment.
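
As a generic illustration of learning from self-collected experience (not the speaker's actual research), the hedged Python sketch below runs tabular Q-learning in an invented five-state corridor: the agent acts, observes rewards, and distills its own experience into a policy, with no human labels involved.

```python
import random

# Generic sketch of "learning by doing" (NOT the speaker's specific method):
# tabular Q-learning on a toy 5-state corridor. The agent collects its own
# experience by acting and improves its value estimates from that experience.
# Environment, rewards, and hyperparameters are all invented for illustration.

N_STATES, GOAL = 5, 4
MOVES = [-1, +1]                        # action 0: step left, action 1: step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma = 0.5, 0.9                 # learning rate and discount factor

for episode in range(300):
    s = 0
    while s != GOAL:
        a = random.randrange(2)         # explore by acting (random behavior)
        s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # reward only upon reaching the goal
        # improve the value estimate from self-collected experience
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After learning, the greedy policy points right (action 1) in every
# non-terminal state, even though no step was ever labeled by a human.
print("greedy actions:", [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```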

4:45 PM

Audience Q&A

5:00 PM

Closing Remarks and Awards

5:10 PM

Networking Reception

6:00 PM

Symposium Adjourns