14th Annual Machine Learning Symposium

Friday, March 13, 2020

The New York Academy of Sciences, 7 World Trade Center, 250 Greenwich St, 40th Floor, New York, NY

Presented By

The New York Academy of Sciences

Machine Learning, a subfield of computer science, involves the development of mathematical algorithms that discover knowledge in specific data sets and then "learn" from that data iteratively, allowing predictions to be made on new examples. Today, Machine Learning has a wide range of applications, including natural language processing, search engine optimization, medical diagnosis and treatment, financial fraud detection, and stock market analysis.
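
For readers who want a concrete picture of that fit-then-predict loop, here is a minimal sketch (our own illustration, not part of the symposium program; it assumes the scikit-learn library is installed):

    # Minimal "learn from data, then predict" sketch (assumes scikit-learn).
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Split a small image data set into examples the model learns from
    # and held-out examples it has never seen.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit the model to the training data, then predict on new inputs.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))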

This symposium, the fourteenth in an ongoing series presented by the Machine Learning Discussion Group at the New York Academy of Sciences, will feature Keynote Presentations from leading researchers in both applied and theoretical Machine Learning, as well as Spotlight Talks: a series of short presentations by early-career investigators on a variety of topics at the frontier of Machine Learning.

Notice to Abstract Submitters:

The call for abstracts is now closed.

  • Abstract acceptance notifications will be made on February 13–14, 2020
  • Accepted abstract submitters will be allowed to register at the Early Bird Rate
  • Due to the large number of submissions, written feedback on abstracts will not be provided. Thank you for your understanding.

Registration

Category                                       By 02/10/2020   After 02/10/2020
Member                                         $75             $105
Nonmember (Academia, Faculty, etc.)            $150            $210
Nonmember (Corporate, Other)                   $220            $300
Nonmember (Not for Profit)                     $150            $210
Nonmember (Student, Undergrad, Grad, Fellow)   $80             $115
Member (Student, Post-Doc, Fellow)             $40             $55

Keynote Speakers

Joseph Gonzalez, PhD, UC Berkeley
Mikhail Belkin, PhD, Ohio State University
Eva Tardos, PhD, Cornell University
Emily Pitler, PhD, Google

Scientific Organizing Committee

Jennifer L. Costley, PhD
The New York Academy of Sciences

Patrick Haffner, PhD
Interactions Corporation

Elad Hazan, PhD
Princeton University

Tony Jebara, PhD
Spotify and Columbia University

John Langford, PhD
Microsoft Research

Mehryar Mohri, PhD
NYU Courant Institute

Robert Schapire, PhD
Microsoft Research

Presenting Partners

Friday

March 13, 2020

9:00 AM

Registration, Continental Breakfast, and Poster Set-up

10:00 AM

Welcome Remarks

Keynote Address 1

10:10 AM

Advances in AI Systems

Speaker

Joseph Gonzalez, PhD
UC Berkeley
10:50 AM

Audience Q&A

Spotlight Talks: Session 1

11:05 AM

Spotlight Talk 1

11:10 AM

Spotlight Talk 2

11:15 AM

Spotlight Talk 3

11:20 AM

Spotlight Talk 4

11:25 AM

Spotlight Talk 5

11:30 AM

Networking Break and Poster Viewing

Keynote Address 2

12:20 PM

Learning from Deep Learning

Speaker

Mikhail Belkin, PhD
Ohio State University

Recent empirical successes of deep learning have exposed significant gaps in our fundamental understanding of learning mechanisms. Modern best practices for model selection directly contradict the methodologies suggested by classical analyses. Similarly, the efficiency of local methods such as SGD, widely used in training modern models, has been difficult to explain with standard optimization analyses and intuitions.

In this talk I will discuss the emerging understanding of some remarkable statistical and mathematical phenomena uncovered and highlighted by the practice of deep learning, and their implications for inference and learning in general.
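
As a rough illustration of the tension the abstract describes (a sketch of our own, assuming only NumPy; it is not drawn from the talk): a minimum-norm "ridgeless" fit on random Fourier features drives the training error to zero once the number of features exceeds the number of training points, and the held-out error can then be measured directly rather than assumed to diverge, as classical capacity-based reasoning might suggest.

    # Sketch only: sweep model capacity (number of random Fourier features)
    # for a minimum-norm least-squares fit and report train/test error.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d = 100, 1000, 5

    def target(X):
        # Simple nonlinear ground-truth function.
        return np.sin(X @ np.ones(d))

    X_tr = rng.normal(size=(n_train, d)); y_tr = target(X_tr)
    X_te = rng.normal(size=(n_test, d));  y_te = target(X_te)

    def features(X, W, b):
        # Random Fourier feature map.
        return np.cos(X @ W + b)

    for width in [10, 50, 100, 200, 1000, 5000]:
        W = rng.normal(size=(d, width))
        b = rng.uniform(0.0, 2.0 * np.pi, size=width)
        Phi_tr, Phi_te = features(X_tr, W, b), features(X_te, W, b)
        # np.linalg.lstsq returns the minimum-norm solution, so for
        # width >= n_train the model interpolates the training data.
        coef, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
        train_mse = np.mean((Phi_tr @ coef - y_tr) ** 2)
        test_mse = np.mean((Phi_te @ coef - y_te) ** 2)
        print(f"width={width:5d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")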

1:00 PM

Audience Q&A

1:15 PM

Networking Lunch and Poster Viewing

Spotlight Talks: Session 2

2:30 PM

Spotlight Talk 6

2:35 PM

Spotlight Talk 7

2:40 PM

Spotlight Talk 8

2:45 PM

Spotlight Talk 9

2:50 PM

Spotlight Talk 10

Keynote Address 3

2:55 PM

Learning in Repeated Games

Speaker

Eva Tardos, PhD
Cornell University
3:35 PM

Audience Q&A

3:50 PM

Networking Break

Keynote Address 4

4:05 PM

Pre-trained Language Representations and Augmentations

Speaker

Emily Pitler, PhD
Google

Recent advances in language representations pre-trained on large amounts of unlabeled text have dramatically improved the accuracy and sample efficiency of machine-learned models on a range of natural language understanding benchmarks. This talk will give an overview of one such model, BERT (Bidirectional Encoder Representations from Transformers), and experiments indicating which linguistic and cross-lingual phenomena it does (and does not) encode. We’ll build on the cross-lingual findings to show that multilingual BERT can be used to train state-of-the-art yet efficient multilingual models. Open questions remain around how to move beyond the setting of single self-contained passages when using these models. We’ll highlight two augmentations that add inductive biases: a specialized pre-training strategy that allows joint retrieval and question answering, and access to shallow executable modules that give the model the ability to do rudimentary numerical reasoning.
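
For orientation (a sketch of our own, not material from the talk; it assumes the Hugging Face transformers and PyTorch packages and the publicly released bert-base-multilingual-cased checkpoint), obtaining contextual representations from a pre-trained multilingual BERT looks roughly like this:

    # Sketch only: load pre-trained multilingual BERT and encode two sentences.
    import torch
    from transformers import AutoModel, AutoTokenizer

    name = "bert-base-multilingual-cased"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    sentences = ["Machine learning is fun.",
                 "El aprendizaje automático es divertido."]
    batch = tokenizer(sentences, padding=True, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**batch)

    # One contextual vector per token; the first ([CLS]) position is commonly
    # used as a sentence-level representation for downstream fine-tuning.
    print(outputs.last_hidden_state.shape)  # e.g. (2, num_tokens, 768)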

4:45 PM

Audience Q&A

Closing Remarks and Awards

5:00 PM

Best Poster Presentation

5:05 PM

Spotlight Talk Award Presentation

5:10 PM

Networking Reception

6:00 PM

Symposium Adjourns