
7th Annual Machine Learning Symposium
Friday, October 19, 2012
Machine learning, the study of computer algorithms that improve automatically through experience, has a wide spectrum of applications, including natural language processing, search engines, medical diagnosis, bioinformatics and cheminformatics, credit card fraud detection, and stock market analysis.
The Machine Learning Discussion Group at the New York Academy of Sciences holds a symposium each fall to discuss advanced research on such topics. The series aims to continue building a community of leading machine learning scientists from the New York City area's academic, government, and industrial institutions by convening them and promoting the exchange of ideas in a neutral setting. Top scientists in both applied and theoretical machine learning are invited to present their research.
In addition, several submitted abstracts will be selected for oral presentation as well as for the poster session. Based on these "Spotlight" talks, a "best student paper" will be chosen; the student winner will be announced at the end of the day-long symposium.
Registration Pricing
Member | $25
Student/Postdoc Member | $10
Nonmember (Academia) | $60
Nonmember (Corporate) | $80
Nonmember (Non-profit) | $60
Nonmember (Student/Postdoc/Resident/Fellow) | $40
Agenda
* Presentation times are subject to change.
Friday, October 19, 2012
9:30 AM | Breakfast & Poster Set-up
10:00 AM | Opening Remarks & Tribute to David L. Waltz
10:10 AM | Keynote Talk: Problem of Empirical Inference in Machine Learning and Philosophy of Science
11:05 AM | Spotlight Talks:
- Majorization for Conditional Random Fields and Latent Likelihoods
- Realtime Online Spatiotemporal Topics for Navigation Summaries
- Scaling Up Mixed-Membership Stochastic Blockmodels to Massive Networks
- Place Models for Sparse Location Prediction
- Efficient Time Series Classification with Multivariate Similarity Kernels
11:30 AM | Networking and Poster Session
12:20 PM | Keynote Talk: Large-scale Model Selection Problems and Computational Oracle Inequalities
1:10 PM | Networking Lunch
2:30 PM | Spotlight Talks:
- Simulation, Learning and Optimization Techniques in Watson's Jeopardy! Game Strategies
- Compact Hyperplane Hashing with Bilinear Functions
- Collaborative Denoising of Multi-Subject fMRI Data
- MAP Inference in Chains using Column Generation
- Sparse Reinforcement Learning via Efficient First-order Optimization Methods
3:00 PM | Keynote Talk: Learning Matrix Decomposition Structures
3:45 PM | Spotlight Talks:
- Capturing Lexical Variation in Topic Models with Inverse Regression
- Improving Training Speed of Deep Belief Networks for Large Speech Tasks
- Adaptive Learning Rates for Stochastic Gradients
- Tradeoffs in Improved Screening of Lasso Problems
- Online Learning with Pairwise Loss Functions
4:10 PM | Networking and Poster Session
4:50 PM | Student Award Winner Announcement & Closing Remarks
5:00 PM | End of Machine Learning Symposium
5:15 PM | Machine Learning Careers in NYC Startups
7:00 PM | End of Program
Speakers
Keynote Speakers
Peter Bartlett, PhD
University of California, Berkeley
Peter Bartlett is a professor in the Computer Science Division and the Department of Statistics at the University of California, Berkeley, and a professor in Mathematical Sciences at the Queensland University of Technology. He has been a professor in the Research School of Information Sciences and Engineering at the Australian National University, a Miller Institute Visiting Research Professor in Statistics and Computer Science at U.C. Berkeley, and an honorary professor at the University of Queensland. He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year in Australia in 2001, was an IMS Medallion Lecturer in 2008, and became an IMS Fellow and an Australian Laureate Fellow in 2011. His research interests include machine learning, statistical learning theory, and adaptive control.
William Freeman, PhD
Massachusetts Institute of Technology
William Freeman is a Professor of Computer Science at the Massachusetts Institute of Technology and Associate Head of the Department of Electrical Engineering and Computer Science. His research interests include machine learning applied to computer vision and graphics, and computational photography. He worked at Polaroid, a company that made "film" cameras, developing image processing algorithms for electronic cameras and printers. In 1987–88, he was a Foreign Expert at the Taiyuan University of Technology, China. For nine years he worked at Mitsubishi Electric Research Labs (MERL) in Cambridge, MA, as Senior Research Scientist and Associate Director. He holds 30 patents and is an IEEE Fellow. A hobby is flying cameras on kites. Dr. Freeman was program co-chair for the International Conference on Computer Vision (ICCV) in 2005 and will be program co-chair for Computer Vision and Pattern Recognition (CVPR) in 2013.
Vladimir Vapnik, PhD
Columbia University and NEC-labs
Vladimir Naumovich Vapnik is one of the main developers of Vapnik–Chervonenkis theory. Born in the Soviet Union, he received his master's degree in mathematics from Uzbek State University, Samarkand, Uzbek SSR, in 1958 and his PhD in statistics from the Institute of Control Sciences, Moscow, in 1964. He worked at that institute from 1961 to 1990, becoming Head of the Computer Science Research Department. At the end of 1990, he moved to the USA and joined the Adaptive Systems Research Department at AT&T Bell Labs in Holmdel, New Jersey. The group later became the Image Processing Research Department of AT&T Laboratories when AT&T spun off Lucent Technologies in 1996. Vapnik left AT&T in 2002 and joined NEC Laboratories in Princeton, New Jersey, where he currently works in the Machine Learning group. He has also been Professor of Computer Science and Statistics at Royal Holloway, University of London, since 1995, and Professor of Computer Science at Columbia University, New York City, since 2003. He was elected to the U.S. National Academy of Engineering in 2006. He received the 2005 Gabor Award, the 2008 Paris Kanellakis Award, the 2010 Neural Networks Pioneer Award, the 2012 IEEE Frank Rosenblatt Award, and the 2012 Benjamin Franklin Medal in Computer and Cognitive Science.
While at AT&T, Vapnik and his colleagues developed the theory of the support vector machine. They demonstrated its performance on a number of problems of interest to the machine learning community, including handwriting recognition.
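For readers curious what such a classifier looks like in practice, here is a minimal sketch of a soft-margin SVM applied to a small handwritten-digits dataset, using scikit-learn; this is a modern illustration only, not a reconstruction of the original AT&T experiments, and the hyperparameter values are illustrative.

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load scikit-learn's bundled 8x8 grayscale digit images and flatten
# each image into a 64-dimensional feature vector.
digits = datasets.load_digits()
X = digits.images.reshape(len(digits.images), -1)
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.25, random_state=0)

# Soft-margin SVM with an RBF kernel; C and gamma are illustrative choices.
clf = svm.SVC(kernel="rbf", C=10, gamma=0.001)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))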
Organizers
Corinna Cortes, PhD
Google Research
Naoki Abe, PhD
IBM Research
Mehryar Mohri, PhD
Courant Institute of Mathematical Sciences, NYU
Michael Kearns, PhD
University of Pennsylvania
Patrick Haffner, PhD
AT&T Labs-Research
Tony Jebara, PhD
Columbia University
Robert Schapire, PhD
Princeton University
John Langford, PhD
Microsoft Research
Abstracts
Large-scale Model Selection Problems and Computational Oracle Inequalities
Peter Bartlett, PhD, University of California, Berkeley
In many large-scale, high-dimensional prediction problems, performance is limited by computational resources rather than sample size. In this setting, we consider the problem of model selection under computational constraints: given a particular computational budget, is it better to gather more data and estimate a simpler model, or gather less data and estimate a more complex model? The talk will first review classical results for the performance of model selection methods based on complexity penalization. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. We evaluate these methods via oracle inequalities, which show that the predictive accuracy is almost as good as the best bound that would have been achieved by any model in the hierarchy. We then focus on model selection with computational constraints, motivated by large scale problems. We introduce general model selection methods. In contrast to classical oracle inequalities, which show a near-optimal trade-off between approximation error and estimation error for a given sample size, we give computational oracle inequalities, which show that our methods give a near-optimal trade-off for a given amount of computation, that is, devoting all of our computational budget to the best model would not have led to a significant performance improvement.
Joint work with Alekh Agarwal and John Duchi.
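As a rough sketch of the classical setup the talk builds on (the notation here is illustrative and not taken from the talk): given a nested hierarchy of model classes $F_1 \subseteq F_2 \subseteq \cdots$ and the empirical risk $\hat{R}_n$ on $n$ samples, the complexity-penalized estimator is
\[
\hat{f} \;=\; \arg\min_{k \ge 1} \, \min_{f \in F_k} \Big( \hat{R}_n(f) + \mathrm{pen}_n(k) \Big),
\]
and a classical oracle inequality states that, with high probability,
\[
R(\hat{f}) \;\le\; \min_{k \ge 1} \Big( \inf_{f \in F_k} R(f) + C \, \mathrm{pen}_n(k) \Big)
\]
for the true risk $R$ and some constant $C$. The computational oracle inequalities of the talk play the same role with the sample size $n$ replaced, roughly speaking, by the available computational budget.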
Learning Matrix Decomposition Structures
Bill Freeman, PhD, Massachusetts Institute of Technology
This work was first-authored by Roger Grosse, in collaboration with Ruslan Salakhutdinov, Josh Tenenbaum, and myself.
Problem of Empirical Inference in Machine Learning and Philosophy of Science
Vladimir Vapnik, PhD, Columbia University and NEC-labs
Travel & Lodging
Our Location
The New York Academy of Sciences
7 World Trade Center
250 Greenwich Street, 40th floor
New York, NY 10007-2157
212.298.8600
Hotels Near 7 World Trade Center
Recommended partner hotel
Club Quarters, World Trade Center
140 Washington Street
New York, NY 10006
Phone: 212.577.1133
The New York Academy of Sciences is a member of the Club Quarters network, which offers significant savings on hotel reservations to member organizations. Located opposite Memorial Plaza on the south side of the World Trade Center, Club Quarters, World Trade Center is just a short walk to the Academy.
Use Club Quarters Reservation Password NYAS to reserve your discounted accommodations online.
Other nearby hotels
Contact numbers: 212.693.2001, 212.385.4900, 212.269.6400, 212.742.0003, 212.232.7700, 212.747.1500, 212.344.0800