ML@GT Blog

Welcome to All the New Machine Learning Faculty at Georgia Tech

We are pleased to have these new faculty join us starting this year.


Siva Maguluri CoE / ISyE
Devi Parikh CoC / IC
Jacob Abernethy CoC / SCS
Dhruv Batra CoC / IC
Rachel Cummings CoE / ISyE
Eva Dyer CoE / BME
Wenjing Liao CoS / Math
Thomas Ploetz CoC / IC
Tuo Zhao CoE / ISyE

They all gave short talks today as part of the ML@GT Fall 2017 Symposium. Welcome, all!

Saturday, September 30, 2017 - 04:27



ML/Statistics Seminar by Sham Kakade on “Faster least squares and faster eigenvector computation: Improved algorithms for both optimization and statistics in the big data regime”

ML/Statistics Seminar Series

Date/Time: Thursday, Sep 28, 2017, 11:00 am – 12:00 pm
Location: Advisory Boardroom, #402 Groseclose
Speaker: Sham Kakade; Department of Statistics and Computer Science, University of Washington

Title: Faster least squares and faster eigenvector computation: Improved algorithms for both optimization and statistics in the big data regime


Abstract: Least squares and eigenvector computations are the two workhorse tools in both computer science and statistics. In the case of finding general purpose linear system solvers and for finding general purpose algorithms for eigenvector computation, there have been relatively few advancements in recent years. However, in most large-scale (statistical) applications, we often have many sampled points (say in form of (x,y) pairs for regression or in the form of a set of vectors for covariance estimation). The question is how can we speed up our algorithms to achieve faster runtimes applicable to this regime. We discuss recent advancements in this area, providing a provably efficient accelerated stochastic gradient algorithm for faster least squares regression and a provably efficient streaming (Oja’s) algorithm for faster eigenvector computations. We hope these characterizations provide insights towards the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex problems. We will also provide some preliminary experimental results extending these algorithms to the non-convex setting.
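To give a flavor of one of the tools mentioned in the abstract, here is a minimal sketch of Oja's streaming rule for estimating the top eigenvector of a covariance matrix from a stream of sample vectors. This is an illustrative toy implementation, not the provably efficient variant discussed in the talk; the step size `eta` and the synthetic data are assumptions for demonstration.

```python
import numpy as np

def oja_top_eigenvector(samples, eta=0.01):
    """Streaming estimate of the top eigenvector of E[x x^T].

    Oja's rule: after each sample x, take a gradient-like step
    w <- w + eta * x * (x^T w), then renormalize w to unit length.
    """
    dim = samples.shape[1]
    w = np.ones(dim) / np.sqrt(dim)  # arbitrary unit-norm start
    for x in samples:
        w += eta * x * (x @ w)
        w /= np.linalg.norm(w)
    return w

# Synthetic stream whose covariance has one dominant direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
X[:, 0] *= 5.0  # first coordinate has much larger variance
true_v = np.array([1.0, 0.0, 0.0])

w = oja_top_eigenvector(X)
print(abs(w @ true_v))  # alignment with the true top eigenvector
```

Each sample is touched once, so the method runs in a single pass over the data, which is what makes it attractive in the large-sample regime the abstract describes.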


Bio: Sham Kakade is a Washington Research Foundation Data Science Chair, with a joint appointment in both the Computer Science & Engineering and Statistics departments at the University of Washington. He completed his Ph.D. at the Gatsby Computational Neuroscience Unit at University College London, advised by Peter Dayan, and earned his B.S. in physics at Caltech. Before joining the University of Washington, Dr. Kakade was a principal research scientist at Microsoft Research, New England. Prior to this, Dr. Kakade was an associate professor in the Department of Statistics at Wharton, University of Pennsylvania (2010-2012) and an assistant professor at the Toyota Technological Institute at Chicago (2005-2009). Dr. Kakade completed a postdoc in the Computer and Information Science Department at the University of Pennsylvania under the supervision of Michael Kearns.

Friday, September 29, 2017 - 03:49