University of Guelph

Vector Institute

Ontario, Canada

My research aims to identify properties of deep neural networks (DNNs) that confer generalization and robustness to distribution shift. I believe that understanding the information-theoretic limitations of deep learning is a key step toward this end, and may help establish predictive performance guarantees for safety-critical settings.

My research interests are motivated by the “adversarial examples” phenomenon, whereby artificial intelligence (AI) models may be fooled by input manipulations that many humans deem imperceptible or irrelevant. Adversarial examples highlight confirmation biases in common characterizations of representation learning in DNNs; for example, the contribution of visually salient natural image features to categorical predictions may be overestimated.

As part of my broader interest in trustworthy AI systems, I believe that more systematic ethical oversight of AI research is required to maintain trust in the field and to ensure that respect for (often digitized) persons remains at the forefront. Beyond my primary research objectives, I maintain a keen interest in applied AI projects with the potential for positive social or environmental impact.