14th Annual Machine Learning Symposium
Friday, March 13, 2020
The New York Academy of Sciences, 7 World Trade Center, 250 Greenwich St, 40th Fl, New York, NY
Machine Learning, a subfield of computer science, involves the development of mathematical algorithms that discover knowledge in data sets and then "learn" from the data iteratively, allowing predictions to be made on new inputs. Today, Machine Learning has a wide range of applications, including natural language processing, search engine optimization, medical diagnosis and treatment, financial fraud detection, and stock market analysis.
This symposium, the fourteenth in an ongoing series presented by the Machine Learning Discussion Group at the New York Academy of Sciences, will feature Keynote Presentations from leading researchers in both applied and theoretical Machine Learning, as well as Spotlight Talks: short presentations by early-career investigators on a variety of topics at the frontier of Machine Learning.
Notice to Abstract Submitters:
The call for abstracts is now closed.
- Abstract acceptance notifications will be made on February 13–14, 2020
- Accepted abstract submitters will be allowed to register at the Early Bird Rate
- Due to the large number of submissions, written feedback on abstracts will not be provided. Thank you for your understanding.
Scientific Organizing Committee
The New York Academy of Sciences
Spotify and Columbia University
NYU Courant Institute
March 13, 2020
Registration, Continental Breakfast, and Poster Set-up
Keynote Address 1
Advances in AI Systems
Spotlight Talks: Session 1
Spotlight Talk 1
Spotlight Talk 2
Spotlight Talk 3
Spotlight Talk 4
Spotlight Talk 5
Networking Break and Poster Viewing
Keynote Address 2
Learning from Deep Learning
Recent empirical successes of deep learning have exposed significant gaps in our fundamental understanding of its learning mechanisms. Modern best practices for model selection directly contradict the methodologies suggested by classical analyses. Similarly, the efficiency of local methods such as SGD, widely used in training modern models, has proved difficult to explain with standard optimization analyses and intuitions.
In this talk, I will discuss the emerging understanding of some remarkable statistical and mathematical phenomena uncovered and highlighted by the practice of deep learning, and their implications for inference and learning in general.
Networking Lunch and Poster Viewing
Spotlight Talks: Session 2
Spotlight Talk 6
Spotlight Talk 7
Spotlight Talk 8
Spotlight Talk 9
Spotlight Talk 10
Keynote Address 3
Learning in Repeated Games
Keynote Address 4
Pre-trained Language Representations and Augmentations
Recent advances in language representations pre-trained on large amounts of unlabeled text have dramatically improved the accuracy and sample efficiency of machine-learned models on a range of natural language understanding benchmarks. This talk will give an overview of one such model, BERT (Bidirectional Encoder Representations from Transformers), and of experiments indicating which linguistic and cross-lingual phenomena it does (and does not) encode. We'll build on the cross-lingual findings to show that multilingual BERT can be used to train state-of-the-art yet efficient multilingual models. Open questions remain about how to move beyond the setting of single self-contained passages when using these models. We'll highlight two augmentations that add inductive biases: a specialized pre-training strategy that allows joint retrieval and question answering, and access to shallow executable modules that give the model the ability to do rudimentary numerical reasoning.