6th Annual Machine Learning Symposium


Friday, October 21, 2011

The New York Academy of Sciences


Machine learning, the study of computer algorithms that improve automatically through experience, has a wide spectrum of applications, including natural language processing, search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, and stock market analysis.

The Machine Learning Discussion Group at the New York Academy of Sciences holds an annual symposium each fall to discuss advanced research related to such topics. The aim of this series is to continue to build a community of leading scientists in machine learning from the New York City area's academic, government, and industrial institutions by convening and promoting the exchange of ideas in a neutral setting. Top scientists in both applied and theoretical machine learning are invited to present their research.

In addition, several submitted abstracts will be selected for short oral presentations as well as for the poster session. Based on these "Spotlight" talks, a "best student paper" will be chosen, and the student winner will be announced at the end of the day-long symposium.

The symposium will be followed by a series of short presentations by tech startups, sponsored by hackNY, an organization that connects math and computer science students with emerging enterprises. Attendance is open to all, but space is limited.

Registration Pricing

Member: $20
Student / Postdoc / Fellow Member: $5
Student / Postdoc / Fellow Nonmember: $15
Nonmember Academic: $35
Nonmember Not for Profit: $35
Nonmember Corporate: $55

 

Past Machine Learning Symposia

Agenda

* Presentation times are subject to change.


Friday, October 21, 2011

9:30 AM

Breakfast & Poster Set-up

10:00 AM

Opening Remarks

10:10 AM

Keynote Talk
Stochastic Algorithms for One-Pass Learning
Léon Bottou, PhD, Microsoft adCenter

10:55 AM

Spotlight Talks

Opportunistic Approachability
Andrey Bernstein, Columbia University

Large-Scale Sparse Kernel Logistic Regression with a Comparative Study on Optimization Algorithms
Shyam S. Chandramouli, Columbia University

Online Clustering with Experts
Anna Choromanska, PhD, Columbia University

Efficient Learning of Word Embeddings via Canonical Correlation Analysis
Paramveer Dhillon, University of Pennsylvania

A Reliable, Effective Terascale Linear Learning System
Miroslav Dudik, PhD, Yahoo! Research

11:20 AM

Networking and Poster Session

12:05 PM

Keynote Talk
Online Learning without a Learning Rate Parameter
Yoav Freund, PhD, University of California, San Diego

1:00 PM

Networking Lunch

2:30 PM

Spotlight Talks

Large-scale Collection Threading using Structured k-DPPs
Jennifer Gillenwater, University of Pennsylvania

Online Learning for Mixed Membership Network Models
Prem Gopalan, Princeton University

Planning in Reward Rich Domains via PAC Bandits
Sergiu Goschin, Rutgers University

The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo
Matthew D. Hoffman, PhD, Columbia University

Place Recommendation with Implicit Spatial Feedback
Berk Kapicioglu, Princeton University and Sense Networks

3:00 PM

Keynote Talk
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
Stephen Boyd, PhD, Stanford University

3:45 PM

Spotlight Talks

Hierarchically Supervised Latent Dirichlet Allocation
Adler Perotte, MD, Columbia University

Image Super-Resolution via Dictionary Learning
Gungor Polatkan, Princeton University

MirroRank: Convex Aggregation and Online Ranking with the Mirror Descent
Benoit Rostykus, Ecole Centrale Paris

Preserving Proximity Relations and Minimizing Edge-crossings in Graph Embeddings
Amina Shabbeer, Rensselaer Polytechnic Institute

A Reinforcement Learning Approach to Variational Inference
David Wingate, PhD, Massachusetts Institute of Technology

4:10 PM

Networking and Poster Session

5:00 PM

Student Award Winner Announcement & Closing Remarks

5:15 PM

End of Program

5:30 PM

hackNY Presentations for Students
Foursquare, Hunch, Intent Media, Etsy, Media6Degrees, Flurry

Speakers

Organizers

Naoki Abe, PhD

IBM Research

Corinna Cortes, PhD

Google

Patrick Haffner, PhD

AT&T Research

Tony Jebara, PhD

Columbia University

John Langford, PhD

Yahoo! Research

Mehryar Mohri, PhD

Courant Institute of Mathematical Sciences, NYU

Robert Schapire, PhD

Princeton University

Speakers

Léon Bottou, PhD

Microsoft adCenter

Léon Bottou received the Diplôme d'Ingénieur de l'École Polytechnique (X84) in 1987, the Magistère de Mathématiques Fondamentales et Appliquées et d'Informatique from the École Normale Supérieure in 1988, the Diplôme d'Études Approfondies in Computer Science in 1988, and a PhD in Computer Science from LRI, Université de Paris-Sud in 1991.

After his PhD, Bottou worked at AT&T Bell Laboratories from 1991 to 1992. He then became chairman of Neuristique, a small company pioneering machine learning for data-mining applications. He returned to AT&T Labs from 1995 to 2002 and worked at NEC Labs America in Princeton from 2002 to March 2010. He joined the Science Team of Microsoft's Online Services Division in April 2010.

Bottou's primary research interest is machine learning. His contributions to the field cover both theory and applications, with a particular interest in large-scale learning. His secondary research interest is data compression and coding; his best-known contribution in this area is the DjVu document compression technology. Bottou has published over 80 papers and won the 2007 New York Academy of Sciences Blavatnik Award for Young Scientists. He serves or has served on the boards of the Journal of Machine Learning Research and IEEE Transactions on Pattern Analysis and Machine Intelligence.

Stephen P. Boyd, PhD

Stanford University

Stephen P. Boyd is the Samsung Professor of Engineering and Professor of Electrical Engineering in the Information Systems Laboratory at Stanford University. He received an A.B. in Mathematics from Harvard University in 1980 and a Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley, in 1985, after which he joined the Stanford faculty. His current research focuses on applications of convex optimization in control, signal processing, and circuit design.

Yoav Freund, PhD

University of California, San Diego

Yoav Freund is a professor of Computer Science and Engineering at UC San Diego. His work is in the areas of machine learning, computational statistics, information theory, and their applications. He is best known for his joint work with Dr. Robert Schapire on the AdaBoost algorithm, for which they were awarded the 2003 Gödel Prize in theoretical computer science as well as the Kanellakis Prize in 2004.

Sponsors

For sponsorship opportunities, contact Brooke Grindlinger at brindlinger@nyas.org or call 212.298.8625.

Gold Sponsor

Academy Friends

Google

Yahoo!


Abstracts

Stochastic Algorithms for One-Pass Learning
Léon Bottou, Microsoft adCenter

The goal of the presentation is to describe practical stochastic gradient algorithms that process each training example only once, yet asymptotically match the performance of the true optimum. This statement needs, of course, to be made more precise. To achieve this, we'll review the works of Nevel'son and Has'minskij (1972), Fabian (1973, 1978), Murata & Amari (1998), Bottou & LeCun (2004), Polyak & Juditsky (1992), Wei Xu (2010), and Bach & Moulines (2011). We will then show how these ideas lead to practical algorithms that not only represent a new state of the art but are also arguably optimal.
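The common thread in the works cited above is iterate averaging on top of a single stochastic-gradient pass. As a rough illustration, and not the speaker's exact method, the following Python sketch runs averaged SGD in the spirit of Polyak & Juditsky (1992) on a synthetic least-squares problem; the step-size schedule, the synthetic data, and the function names are illustrative assumptions.

```python
import numpy as np

def averaged_sgd(X, y, lr0=0.1, decay=0.75):
    """One pass of averaged SGD (Polyak-Ruppert style) for least squares.

    Each example (x_t, y_t) is visited exactly once; the returned estimate
    is the running average of the iterates, which is the quantity the
    asymptotic-optimality results cited in the abstract concern.
    """
    n, d = X.shape
    w = np.zeros(d)       # current iterate
    w_avg = np.zeros(d)   # running average of iterates
    for t in range(n):
        x_t, y_t = X[t], y[t]
        grad = (x_t @ w - y_t) * x_t       # gradient of 0.5*(x'w - y)^2
        eta = lr0 / (1.0 + t) ** decay     # slowly decaying step size
        w -= eta * grad
        w_avg += (w - w_avg) / (t + 1)     # incremental average
    return w_avg

# Illustrative use on synthetic data (assumed, not from the talk)
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.normal(size=10_000)
print(averaged_sgd(X, y))   # should be close to [1, 2, 3, 4, 5]
```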

Online Learning without a Learning Rate Parameter
Yoav Freund, PhD, University of California, San Diego

Online learning is an approach to statistical inference based on the idea of playing a repeated game. A "master" algorithm receives the predictions of N experts before making its own prediction. Then the outcome is revealed, and both the experts and the master suffer a loss.

Algorithms have been developed for which the regret, the difference between the cumulative loss of the master and the cumulative loss of the best expert, is bounded uniformly over all sequences of expert predictions and outcomes.

The most successful algorithms of this type are the exponential weights algorithms, discovered by Littlestone and Warmuth and refined by many others. The exponential weights algorithm has a parameter, the learning rate, which has to be tuned appropriately to achieve the best bounds. This tuning typically depends on the number of experts and on the cumulative loss of the best expert. We describe a new algorithm, NormalHedge, which has no parameters and achieves bounds comparable to those of the tuned exponential weights algorithm.

As the algorithm does not depend on the number of experts, it can be used effectively when the set of experts grows as a function of time and when the set of experts is uncountably infinite.

In addition, the algorithm has a natural extension to continuous time and admits a very tight analysis when the cumulative loss is described by an Itô process.

This is joint work with Kamalika Chaudhuri and Daniel Hsu.
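For context, the baseline that NormalHedge improves on is the exponential weights (Hedge) master, whose learning rate must be tuned using the horizon and the number of experts. The Python sketch below implements only that classical baseline, not NormalHedge itself; the synthetic losses and the particular tuning of eta are illustrative assumptions.

```python
import numpy as np

def hedge(expert_losses, eta):
    """Classical exponential-weights ("Hedge") master over N experts.

    expert_losses: array of shape (T, N); losses in [0, 1] revealed each round.
    eta: the learning-rate parameter whose tuning NormalHedge avoids.
    Returns the master's cumulative loss and the best expert's cumulative loss.
    """
    T, N = expert_losses.shape
    log_w = np.zeros(N)                  # log-weights for numerical stability
    master_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                     # master's distribution over experts
        master_loss += p @ expert_losses[t]
        log_w -= eta * expert_losses[t]  # exponential-weights update
    best_expert_loss = expert_losses.sum(axis=0).min()
    return master_loss, best_expert_loss

# Illustrative run: with a well-tuned eta, regret grows roughly like sqrt(T log N)
rng = np.random.default_rng(1)
losses = rng.uniform(size=(1000, 10))
losses[:, 3] -= 0.2            # make expert 3 slightly better on average
losses = losses.clip(0, 1)
m, b = hedge(losses, eta=np.sqrt(8 * np.log(10) / 1000))
print(f"master loss {m:.1f}, best expert loss {b:.1f}, regret {m - b:.1f}")
```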

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
Stephen P. Boyd, Stanford University

Problems in areas such as machine learning and dynamic optimization on a large network lead to extremely large convex optimization problems, with problem data stored in a decentralized way, and processing elements distributed across a network. We argue that the alternating direction method of multipliers is well suited to such problems. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to statistical and machine learning problems such as the lasso and support vector machines, and to dynamic energy management problems arising in the smart grid.
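As a concrete illustration of the alternating structure the abstract describes, here is a minimal Python sketch of ADMM applied to the lasso, using the standard splitting f(x) + g(z) subject to x = z with an x-minimization, a soft-thresholding z-update, and a dual update. The penalty parameter rho, the fixed iteration count, and the synthetic data are illustrative assumptions, not details from the talk.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding, the proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1.

    Alternates an x-minimization (a ridge-like linear solve), a
    z-minimization (soft-thresholding), and a dual (u) update.
    """
    m, n = A.shape
    x = z = u = np.zeros(n)
    # Factor the x-update system once and reuse it every iteration.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# Illustrative sparse-recovery example (assumed data, not from the talk)
rng = np.random.default_rng(2)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.05 * rng.normal(size=100)
print(np.round(admm_lasso(A, b, lam=1.0), 2))
```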

 

Travel & Lodging

Our Location

The New York Academy of Sciences

7 World Trade Center
250 Greenwich Street, 40th floor
New York, NY 10007-2157
212.298.8600

Directions to the Academy

Hotels Near 7 World Trade Center

Recommended partner hotel

Club Quarters, World Trade Center
140 Washington Street
New York, NY 10006
Phone: 212.577.1133

The New York Academy of Sciences is a member of the Club Quarters network, which offers significant savings on hotel reservations to member organizations. Located opposite Memorial Plaza on the south side of the World Trade Center, Club Quarters, World Trade Center is just a short walk to the Academy.

Use Club Quarters Reservation Password NYAS to reserve your discounted accommodations online.

Other nearby hotels

Millenium Hilton

212.693.2001

Marriott Financial Center

212.385.4900

Club Quarters, Wall Street

212.269.6400

Eurostars Wall Street Hotel

212.742.0003

Gild Hall, Financial District

212.232.7700

Wall Street Inn

212.747.1500

Ritz-Carlton New York, Battery Park

212.344.0800