Jul 18, 2022
Georgia Tech researchers are presenting new research this week at the International Conference on Machine Learning (ICML), one of the leading international academic conferences in machine learning, the field of computer science that gives computer systems the ability to learn from data. ICML 2022 runs through Saturday in Baltimore, Maryland.
The research venue is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning, which is used in closely related areas such as artificial intelligence, statistics, and data science, as well as in important application areas such as machine vision, computational biology, speech recognition, and robotics.
Georgia Tech researchers are featured throughout the technical program for new contributions to ML methods and applications, with 15 papers from Tech in the main program and workshops.
ICML’s top research tier, oral papers, includes two works from Georgia Tech’s H. Milton Stewart School of Industrial and Systems Engineering.
Tech researchers are presenting in the following sessions:
- Adaptive Experimental Design and Active Learning in the Real World (ReALML)
- Applications
- Beyond Bayes: Paths Towards Universal Reasoning Systems
- Deep Learning: SSL/GNN
- Deep Learning/Optimization
- Deep Learning: Theory
- MISC: General Machine Learning Techniques
- New Frontiers in Adversarial Machine Learning
- Optimization: Convex
- PM: Monte Carlo and Sampling Methods
- PM: Variational Inference/Bayesian Models and Methods
- T: Online Learning and Bandits
- Theory/Social Aspects
- Topology, Algebra, and Geometry in Machine Learning
Details about the ICML research from Georgia Tech are listed below. To learn more about the Machine Learning Center at Georgia Tech, visit https://ml.gatech.edu.
Georgia Tech at ICML 2022
ORALS
MISC: General Machine Learning Techniques
Stable Conformal Prediction Sets
Eugene Ndiaye
Theory/Social Aspects
Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling
Sajad Khodadadian, Pranay Sharma, Gauri Joshi, Siva Theja Maguluri
PAPERS
Deep Learning/Optimization
NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks
Mustafa Burak Gurbuz, Constantine Dovrolis
SPOTLIGHTS
Applications
PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, Tuo Zhao
Deep Learning: SSL/GNN
Variational Wasserstein gradient flow
Jiaojiao Fan, Qinsheng Zhang, Amirhossein Taghvaei, Yongxin Chen
Deep Learning: Theory
Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint
Hao Liu, Minshuo Chen, Siawpeng Er, Wenjing Liao, Tong Zhang, Tuo Zhao
Optimization: Convex
Active Sampling for Min-Max Fairness
Jacob Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang
PM: Monte Carlo and Sampling Methods
Hessian-Free High-Resolution Nesterov Acceleration For Sampling
Ruilin Li, Hongyuan Zha, Molei Tao
PM: Variational Inference/Bayesian Models and Methods
Variational Sparse Coding with Learned Thresholding
Kion Fallah, Christopher J. Rozell
T: Online Learning and Bandits
Universal and data-adaptive algorithms for model selection in linear contextual bandits
Vidya Muthukumar, Akshay Krishnamurthy
Theory
ActiveHedge: Hedge meets Active Learning
Bhuvesh Kumar, Jacob Abernethy, Venkatesh Saligrama
WORKSHOPS
Adaptive Experimental Design and Active Learning in the Real World (ReALML)
DECAL: DEployable Clinical Active Learning
Yash-yee Logan, Mohit Prabhushankar, Ghassan AlRegib
Beyond Bayes: Paths Towards Universal Reasoning Systems
Explanatory Paradigms in Neural Networks
Ghassan AlRegib, Mohit Prabhushankar
New Frontiers in Adversarial Machine Learning
Gradient-Based Adversarial and Out-of-Distribution Detection
Jinsol Lee, Mohit Prabhushankar, Ghassan AlRegib
Topology, Algebra, and Geometry in Machine Learning
Zeroth-Order Topological Insights into Iterative Magnitude Pruning
Aishwarya Balwani, Jakob Krzyston