Training

Deep Learning at Scale Tutorial

This tutorial has been run since SC18 in collaboration with Cray, Intel (in previous years), NVIDIA, and OLCF (last year). In 2021 we held our first training event powered by Perlmutter, with hands-on material for optimized distributed training at large scale on GPUs.
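
As a rough illustration of the kind of data-parallel GPU training covered in such hands-on material (this sketch is not taken from the tutorial itself; the model, data, and hyperparameters are placeholders), a minimal PyTorch DistributedDataParallel loop launched with torchrun might look like:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data; a real job would use a Dataset
    # with a DistributedSampler so each rank sees a distinct shard.
    model = DDP(nn.Linear(1024, 10).cuda(), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):
        x = torch.randn(32, 1024, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()      # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched, for example, with `torchrun --nproc_per_node=4 train.py` on a single node with four GPUs.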

Deep Learning for Science School 2020 (Webinar Series)

A Shallow Introduction to Deep Learning with PyTorch - Evann Courdier (Idiap, EPFL)

A Modern Guide to Hyperparameter Optimization - Richard Liaw (AnyScale, UC Berkeley)

Deep Generative Models - Aditya Grover (Stanford University)

Reproducibility in Deep Learning - Koustuv Sinha (McGill University)

Uncertainty and Out-of-Distribution Robustness in Deep Learning - Balaji Lakshminarayanan, Dustin Tran and Jasper Snoek (Google Brain)

How to Evaluate Efficient Deep Neural Network Approaches - Vivienne Sze (MIT)

Symmetry and Equivariance in Neural Networks - Tess Smidt (Berkeley Lab)

Distributed Large Batch Training - Swetha Mandava (NVIDIA)

Attention & Language - Rami Al-Rfou (Google Research)

Hidden Physics Models - Maziar Raissi (University of Colorado Boulder)

Qualitative Choices in Representations for Molecules, Materials, and Surfaces - Zachary Ulissi (Carnegie Mellon University)

Deep Learning for Science School 2019