University of Guelph
I aim to understand the principles of deep neural network generalization and robustness to distribution shift, so that models can be deployed with relevant performance guarantees. I am particularly interested in applying information theory to deep learning in the service of engineering standards and best practices.
My research interests are motivated by the “adversarial examples” phenomenon, whereby models can be fooled by seemingly unimportant input manipulations. This phenomenon poses a challenge to traditional interpretations of features extracted by deep neural networks, and limits their use in the sciences and performance-critical settings.
Toward trustworthy machine learning and artificial intelligence, I believe that more systematic ethical oversight is required to maintain the privilege of self-regulation and trust in the field, and ultimately to ensure that the public interest remains at the forefront of our research.
| Apr 2, 2020 | Presented *Image Analysis Using Artificial Intelligence to Quantify the Number and Density of Mussels in Lake Erie and Ontario* to Environment Canada, Water Science & Technology Directorate researchers and staff. |
| Nov 14, 2019 | Led breakout sessions on robustness to distribution shift and information-theoretic approaches to deep learning at the Pan-Canadian Self-Organizing Conference on Machine Learning. |
| Nov 14, 2019 | Presented work on batch normalization and model robustness at the Toronto Machine Learning Summit. |
| Sep 3, 2019 | Received a top 400 reviewer award for NeurIPS 2019. |
| May 22, 2019 | New work *Batch Normalization is a Cause of Adversarial Vulnerability* appeared in the Workshop on Identifying and Understanding Deep Learning Phenomena at ICML 2019. |