Date: November 06, 2013, 1:00 pm to 2:00 pm EST
Location: 303 Mudd Building, 500 West 120th Street
Contact: For further information regarding this event, please contact Jonathan Stark by sending email to email@example.com or by calling 212-854-6370.
As the big data paradigm gains momentum, learning algorithms trained with fast stochastic gradient descent (SGD) methods are becoming the de facto standard in industry. Still, even these simple procedures cannot be used completely "off-the-shelf": parameters such as the learning rate have to be tuned to the particular problem to achieve fast convergence.
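The learning-rate sensitivity mentioned above can be seen in a minimal sketch (not from the talk): SGD on a toy 1-D least-squares problem, where the choice of the step size `eta` is a hypothetical illustration and the only point is how strongly it affects convergence speed.

```python
import random

def sgd(eta, steps=2000, seed=0):
    """Plain SGD on f(w) = E[(w*x - y)^2] with data y = 2*x + noise.

    The optimum is w = 2; eta is the learning rate whose tuning the
    abstract refers to.
    """
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        y = 2.0 * x + rng.gauss(0.0, 0.1)
        grad = 2.0 * (w * x - y) * x  # gradient of the squared loss at (x, y)
        w -= eta * grad
    return w

# A well-tuned rate gets close to the optimum w = 2 in 2000 steps,
# while a rate that is too small is still far from it.
w_tuned = sgd(eta=0.1)
w_small = sgd(eta=0.001)
```

With the same data and step budget, only the tuned rate reaches the optimum, which is exactly the kind of per-problem tuning the parameter-free algorithms discussed in the talk aim to avoid.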
The online learning framework is a powerful tool for designing fast learning algorithms that work in both the stochastic and the adversarial setting. In this talk I will introduce new advances in the time-varying regularization framework for online learning, which make it possible to derive almost parameter-free adaptive algorithms. In particular, I will focus on a new algorithm based on a dimension-free exponentiated gradient. Unlike existing online algorithms, it achieves an optimal regret bound, up to logarithmic terms, without any parameters or any prior knowledge about the optimal solution.
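As background for the exponentiated-gradient family the talk builds on, here is a sketch of the classical EG update on the probability simplex (Kivinen and Warmuth); this is not the dimension-free variant presented in the talk, and note that it still carries the learning rate `eta` that the parameter-free approach is designed to remove.

```python
import math

def eg_step(w, grad, eta):
    """One classical exponentiated-gradient step.

    Multiplies each weight by exp(-eta * gradient) and renormalizes,
    so w always stays on the probability simplex.
    """
    w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(w)
    return [wi / z for wi in w]

# Toy run with linear losses <w, g_t>, where the (hypothetical) gradient
# consistently favors the first coordinate.
w = [1.0 / 3.0] * 3
for _ in range(50):
    w = eg_step(w, grad=[0.0, 1.0, 1.0], eta=0.5)
# The weight vector concentrates on the best coordinate.
```

The multiplicative form of the update is what makes EG well suited to simplex-constrained problems; the dimension-free algorithm in the talk removes the need to tune `eta` while keeping an optimal regret bound up to logarithmic terms.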
Francesco Orabona is a Research Assistant Professor at the Toyota Technological Institute at Chicago. His research interests are in the areas of online learning, active learning, and transfer learning. He received his PhD in Electrical Engineering from the University of Genoa. Before joining TTIC he did a post-doc at the Idiap Research Institute, Switzerland, with Prof. Barbara Caputo, and one at the University of Milan, Italy, with Prof. Nicolò Cesa-Bianchi.