University of Guelph
My research aims to identify properties of deep neural networks (DNNs) that confer generalization and robustness to distribution shift. I believe that understanding the information-theoretic limitations of deep learning is a key step to this end, and may help establish predictive performance guarantees for safety-critical settings.
My research interests are motivated by the “adversarial examples” phenomenon, whereby artificial intelligence (AI) models may be fooled by input manipulations that most humans deem imperceptible or irrelevant. Adversarial examples highlight confirmation biases in common characterizations of representation learning in DNNs. For example, the contribution of visually salient natural image features to categorical predictions may be overestimated.
As part of my broader interest in trustworthy AI systems, I believe that more systematic ethical oversight of AI research is required to maintain trust in the field, and to ensure that respect for (often digitized) persons remains at the forefront. Beyond my primary research objectives, I maintain a keen passion for applied AI projects that have the potential for positive social and/or environmental impacts.
| Mar 7, 2022 | New article Predicting dreissenid mussel abundance in nearshore waters using underwater imagery and deep learning published in Limnology and Oceanography: Methods. To be presented at SOLE’22. |
| Dec 12, 2020 | Program committee member at NeurIPS 2020 Workshop on Navigating the Broader Impacts of AI Research. |
| Apr 2, 2020 | Presented Image Analysis Using Artificial Intelligence to Quantify the Number and Density of Mussels in Lake Erie and Ontario to Environment and Climate Change Canada (ECCC), Water Science & Technology Directorate. |
| Nov 14, 2019 | Led breakout sessions on robustness to distribution shift and information-theoretic approaches to deep learning at the Pan-Canadian Self-Organizing Conference on Machine Learning. |