About

I'm a Researcher in the AI for Science group at Microsoft Research Amsterdam, where I work on developing ML methods for PDE modeling, with a focus on weather and climate. My research interests broadly include geometric deep learning, explainable ML, and AI for scientific/social impact.

Previously, I did my PhD in explainable ML at the University of Amsterdam. I was also a Research Fellow at the Partnership on AI. I completed my BSc and MSc, both in mathematics, at McMaster University in Hamilton, Ontario. I grew up in Canada but was born in the former Yugoslavia.

News

    • October 2022: I started my new role at Microsoft Research AI for Science in Amsterdam.
    • September 2022: I defended my PhD thesis! A PDF can be found here.
    • June 2022: Our paper on saliency map explanations for electrocardiograms was accepted to the ICML 2022 Workshop on Interpretable ML in Healthcare.
    • June 2022: Our paper on XAI Toolsheets was accepted to the IJCAI 2022 Workshop on XAI.
    • April 2022: Our FACT-AI course had 21 student papers accepted to the ML Reproducibility Challenge, including the Best Paper Award winner and 2 Outstanding Paper Award winners. This work will also be published in ReScience.
    • April 2022: Our tutorial on reproducibility in information retrieval was accepted to SIGIR 2022. Website coming soon!
    • January 2022: Our paper on counterfactual explanations for GNNs was accepted to AISTATS 2022. Code is available here.
    • December 2021: Our tutorial on reproducibility in NLP was accepted to ACL 2022.
    • November 2021: Our paper on counterfactual explanations for tree ensembles was accepted to AAAI 2022. Code is available here.
    • November 2021: Our paper about the FACT-AI course at the University of Amsterdam was accepted to the AAAI 2022 Symposium on Educational Advances in AI.
    • July 2021: Our paper on counterfactual explanations for tree ensembles was accepted to the ICML 2021 Workshop on Socially Responsible Machine Learning. Code is available here.
    • July 2021: Our paper on counterfactual explanations for GNNs was accepted to two workshops at ICML 2021: Human in the Loop Learning (HILL) and the Workshop on Algorithmic Recourse.
    • July 2021: Our paper on trust scores for regression predictions was accepted to the ICML 2021 Workshop on Human in the Loop Learning (HILL).
    • June 2021: Our paper on counterfactual explanations for GNNs was accepted to the KDD 2021 Workshop on Deep Learning on Graphs. Code is available here.
    • June 2021: Our paper on comparing correlations between XAI methods and attention-based mechanisms was accepted to the ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trends of XAI. This work is an extension of a student project from the FACT-AI course.
    • May 2021: I wrote a blog post for Papers with Code detailing how we incorporated the ML Reproducibility Challenge in our FACT-AI course.
    • April 2021: Our FACT-AI course had 9 student papers accepted to the ML Reproducibility Challenge. This work will also be published in ReScience.
    • March 2021: Our paper on multistakeholder evaluation of AI transparency mechanisms was accepted to the CHI 2021 Workshop on Human-Centered XAI.
    • January 2021: I joined the Partnership on AI as a research fellow, focusing on explainable machine learning.
    • April 2020: I gave a talk at Randstad NL about fairness in machine learning.
    • March 2020: We have released a repository with implementations and analyses of existing FACT-AI algorithms from top AI venues, developed by students during the FACT-AI course at the University of Amsterdam.
    • January 2020: Our paper on contrastive explanations for retail forecasting won the Best (CS) Student Paper Award at FAccT 2020! Slides and code are available.
    • January 2020: I helped create and teach a new course for graduate students on Fairness, Accountability, Confidentiality and Transparency (FACT) in AI at the University of Amsterdam. We wrote a blog post about the course, and the slides from my lecture on transparency are available.
    • August 2019: Our paper on contrastive explanations for retail forecasting was accepted to the IJCAI 2019 Workshop on XAI.
    • July 2019: Our paper on explaining predictions from tree-based boosting ensembles was accepted to the SIGIR 2019 Workshop on FACTS-IR.

Publications