Full program for the Vector Endless Summer School on Fairness and Privacy, April 18, 2018.
Abstract
Learning models that generalize to novel data is the ultimate goal of machine learning. The adversarial examples phenomenon challenges the notion that current deep neural networks generalize as well as previously thought, or that they are aware of what they don't know. I'll introduce various threat models in the adversarial setting, demonstrate practical attacks, and explore the limitations of using threat models to characterize robustness. I believe advances in privacy, interpretability, and generalization are complementary goals, and I will summarize our recent work bridging these areas.
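As an aside for readers unfamiliar with the attacks mentioned above, the sketch below shows one classic practical attack, the fast gradient sign method (FGSM) of Goodfellow et al. (2015), written in PyTorch. The model, labels, and epsilon budget here are illustrative assumptions, not material from the talk itself.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x so as to increase the classifier's loss on the
    # true labels y, staying within an L-infinity ball of radius epsilon.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step in the direction of the sign of the loss gradient.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0, 1).detach()

Here epsilon encodes a simple L-infinity threat model: the attacker may change each input coordinate by at most epsilon, which is one common way robustness is characterized.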