
For self-supervised learning, Rationality implies generalization, provably (Under Review)
Yamini Bansal*, Gal Kaplun*, Boaz Barak
Blog · Talk

Distributional Generalization: A New Kind of Generalization (Under Review)
Preetum Nakkiran*, Yamini Bansal*
Talk

Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling (Under Review)
Akash Srivastava*, Yamini Bansal*, Yukun Ding*, Cole Hurwitz*, Kai Xu, Prasanna Sattigeri, Bernhard Egger, David D. Cox, Josh Tenenbaum, Dan Gutfreund

Initialization trades off between training speed and generalization in deep neural networks (Manuscript)
Yamini Bansal, Madhu Advani, David Cox, Andrew Saxe

Deep Double Descent: Where Bigger Models and More Data Hurt (ICLR 2020)
Preetum Nakkiran, Gal Kaplun*, Yamini Bansal*, Tristan Yang, Boaz Barak, Ilya Sutskever
Blog · Shorter Blog

Minnorm training: an algorithm for training over-parameterized deep neural networks (Manuscript)
Yamini Bansal, Madhu Advani, David Cox, Andrew Saxe

On the information bottleneck theory of deep learning (ICLR 2018)
Andrew Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Tracey, David Cox


* denotes equal contribution