
Machine Learning Symposium

FREE for Members


Friday, October 27, 2006

The New York Academy of Sciences


Join us for the Launch Event for the Machine Learning Group

Steering Committee

  • Corinna Cortes, PhD, Google, Inc.
  • Haym Hirsh, PhD, Rutgers University
  • Tony Jebara, PhD, Columbia University
  • Michael L. Littman, PhD, Rutgers University
  • Mehryar Mohri, PhD, Courant Institute of Mathematical Sciences
  • David Waltz, PhD, Columbia University

This is the first Machine Learning symposium at the New York Academy of Sciences. In 2006 the Academy launched a new initiative in the physical sciences and engineering, and the Machine Learning Symposium is a key event in that initiative.

The primary goal of the symposium is to build a community of machine learning scientists drawn from the New York area's academic, government, and industrial institutions by bringing them together and promoting the exchange of ideas in a neutral setting.

Program

9:30 am
Poster set-up & Continental Breakfast

10:00 am
Opening Remarks

10:15 am
Manfred Warmuth, University of California, Santa Cruz

11:00 am
Lise Getoor, University of Maryland

11:45 am
Spotlight Graduate Student Talks

12:00 pm
Poster Session & Luncheon

1:30 pm
Robert Schapire, Princeton University

2:15 pm
Michael Kearns, University of Pennsylvania

3:00 pm
Vladimir Vapnik, NEC-Columbia

3:45 pm
Student Award Winner Announcement & Closing Remarks

We gratefully acknowledge the support of Google, Inc. for the Student Poster Awards.

Abstracts

Leaving the Span
Manfred K. Warmuth, University of California, Santa Cruz

When linear models are too simple, the following "kernel trick" is commonly used: expand the instances into a high-dimensional feature space and use any algorithm whose linear weight vector in feature space is a linear combination of the expanded instances. Linear models in feature space are typically non-linear in the original space and seemingly more powerful, and dot products can still be computed efficiently via a kernel function. However, we discuss a simple sparse linear problem that is hard to learn with any algorithm that uses a linear combination of the embedded training instances as its weight vector, no matter what embedding is used. We show that these algorithms are inherently limited by the fact that after seeing k instances only a weight space of dimension k can be spanned. Surprisingly, the same problem can be learned efficiently using the exponentiated gradient (EG) algorithm: now the component-wise logarithms of the weights are essentially a linear combination of the training instances. The EG algorithm enforces additional constraints on the weights (all must be non-negative and sum to one), and in some cases these constraints alone force the rank of the weight space to grow as fast as 2^k. (Joint work with S.V.N. Vishwanathan)
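For readers who want to see the contrast concretely, the following is a minimal sketch of the exponentiated-gradient update for squared loss, in the style of Kivinen and Warmuth's EG algorithm. The learning rate, dimension, and toy sparse target below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def eg_update(w, x, y, eta=0.1):
    """One exponentiated-gradient (EG) step for squared loss.

    The weights stay non-negative and sum to one, so log w is
    (up to normalization) a linear combination of the instances
    seen so far -- the property the abstract highlights.
    """
    y_hat = w @ x                    # linear prediction
    grad = 2.0 * (y_hat - y) * x     # gradient of (y_hat - y)**2 w.r.t. w
    w = w * np.exp(-eta * grad)      # multiplicative update
    return w / w.sum()               # renormalize onto the simplex

# Toy run on a sparse target: y depends on a single coordinate.
rng = np.random.default_rng(0)
d = 8
w = np.full(d, 1.0 / d)             # uniform starting weights
for _ in range(500):
    x = rng.normal(size=d)
    w = eg_update(w, x, y=x[0])     # sparse linear target y = x[0]
print(np.round(w, 3))               # mass concentrates on coordinate 0
```

A kernel-based learner, by contrast, is restricted to weight vectors in the span of the embedded instances it has seen, which is exactly the limitation the abstract's lower bound exploits.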

SRL: Statistical Relational Learning
Lise Getoor, University of Maryland

A key challenge for machine learning is mining richly structured datasets that describe objects, their properties, and the links among those objects. We would like to learn models that capture both the underlying uncertainty and the logical relationships in the domain. Links among objects often exhibit patterns that are helpful for many practical inference tasks yet hard to capture with traditional statistical models. Recently there has been a surge of interest in this area, fueled largely by interest
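To make the point about link patterns concrete, here is a minimal sketch, with hypothetical data and names not taken from the talk, of a simple relational feature in the spirit of link-based classification: aggregating the labels of an object's neighbors, a signal that a model treating objects as independent would miss.

```python
# A toy relational dataset: papers with word attributes and citation
# links. All data and names here are hypothetical, for illustration.
from collections import defaultdict

papers = {
    "p1": {"topic": "ML", "words": {"kernel", "margin"}},
    "p2": {"topic": "ML", "words": {"boosting"}},
    "p3": {"topic": "DB", "words": {"query", "index"}},
    "p4": {"topic": None, "words": {"margin"}},   # unlabeled
}
cites = [("p4", "p1"), ("p4", "p2"), ("p3", "p1")]

# Build an adjacency list over the (undirected) citation graph.
neighbors = defaultdict(set)
for a, b in cites:
    neighbors[a].add(b)
    neighbors[b].add(a)

def neighbor_topic_counts(pid):
    """Relational feature: label distribution among linked papers.
    A model using word features alone cannot see this signal."""
    counts = defaultdict(int)
    for n in neighbors[pid]:
        if papers[n]["topic"] is not None:
            counts[papers[n]["topic"]] += 1
    return dict(counts)

print(neighbor_topic_counts("p4"))  # {'ML': 2} -> suggests topic ML
```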