CMSC 491/691: Robust Machine Learning

Instructor: Tejas Gokhale (OH: WED 1430--1530 or by appointment; ITE 214);
Teaching Assistant: TBD;
Time: MON and WED 1600--1715
Location: Math & Psych 106

Course Description | Schedule | Grading | Syllabus

Course description

Models that learn from data are being widely and rapidly deployed for real-world use, but they suffer from unforeseen failures. This course explores the reasons for these failures and state-of-the-art mitigation techniques.


Reference Books


Schedule is tentative and subject to change.

No.  Topic  Resources  Papers
0/1 Introduction and ML Refresher [slides] -
2 Domain Adaptation [slides]
  • Ganin, Lempitsky. "Unsupervised Domain Adaptation by Backpropagation". ICML 2015. [pdf]
  • Tzeng, Hoffman, Saenko, Darrell. "Adversarial discriminative domain adaptation". CVPR 2017. [pdf]
  • Hoffman, Tzeng, Park, Zhu, Isola, Saenko, Efros, Darrell. "CyCADA: Cycle-Consistent Adversarial Domain Adaptation". ICML 2018. [pdf]
3 Domain Generalization [slides]
  • Volpi, Namkoong, Sener, Duchi, Murino, Savarese. "Generalizing to Unseen Domains via Adversarial Data Augmentation". NeurIPS 2018. [pdf]
  • Krueger, Caballero, Jacobsen, Zhang, Binas, Zhang, Le Priol, Courville. "Out-of-Distribution Generalization via Risk Extrapolation". ICML 2021. [pdf]
  • Gulrajani, Lopez-Paz. "In Search of Lost Domain Generalization". ICLR 2021. [pdf]
4 OOD Detection [slides]
  • Hendrycks, Gimpel. "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks". ICLR 2017. [pdf]
  • Huang, Geng, Li. "On the Importance of Gradients for Detecting Distributional Shifts in the Wild". NeurIPS 2021. [pdf]
  • Yang, Wang, Zou, Zhou, Ding, Peng, Wang, Chen, Li, Sun, Du, Zhou, Zhang, Hendrycks, Li, Liu. "OpenOOD: Benchmarking Generalized Out-of-Distribution Detection". NeurIPS 2022. [pdf]
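The maximum-softmax-probability (MSP) baseline from Hendrycks & Gimpel is simple enough to sketch in a few lines. The snippet below is an illustrative sketch with made-up toy logits, not code from any of the papers above:

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher score = more in-distribution."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# A confident (peaked) logit vector scores higher than a near-uniform one.
in_dist = np.array([[8.0, 0.5, 0.2]])
ood     = np.array([[1.1, 1.0, 0.9]])
print(msp_score(in_dist)[0] > msp_score(ood)[0])
```

Thresholding this score gives the detection baseline; the OpenOOD benchmark above compares it against many stronger alternatives.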
5 Adversarial Attacks, Backdoor Attacks [slides]
  • Goodfellow, Shlens, Szegedy. "Explaining and Harnessing Adversarial Examples". ICLR 2015. [pdf]
  • Madry, Makelov, Schmidt, Tsipras, Vladu. "Towards Deep Learning Models Resistant to Adversarial Attacks". ICLR 2018. [pdf]
  • Gu, Dolan-Gavitt, Garg. "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain". MLSec @ NeurIPS 2017. [pdf]
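The fast gradient sign method (FGSM) from Goodfellow et al. perturbs an input in the direction of the sign of the loss gradient. As an illustrative sketch (not code from the paper), here it is applied to a toy binary logistic model, where the input gradient has a closed form and no autodiff framework is needed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step against p(y=+1|x) = sigmoid(w.x), labels y in {-1, +1}."""
    grad_x = -y * sigmoid(-y * np.dot(w, x)) * w   # d(neg. log-likelihood)/dx
    return x + eps * np.sign(grad_x)               # step uphill on the loss

w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])               # correctly classified: w.x = 1.5 > 0
x_adv = fgsm(x, y=1, w=w, eps=0.9)
print(np.dot(w, x), np.dot(w, x_adv))  # the margin shrinks after the attack
```

Madry et al.'s PGD attack (second paper above) iterates small FGSM-style steps with projection back onto the allowed perturbation set.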
6 Uncertainty and Calibration [slides]
  • Guo, Pleiss, Sun, Weinberger. "On Calibration of Modern Neural Networks". ICML 2017. [pdf]
  • Angelopoulos, Bates, Malik, Jordan. "Uncertainty sets for image classifiers using conformal prediction". ICLR 2021. [pdf]
  • Thiagarajan, Anirudh, Narayanaswamy, Bremer. "Single Model Uncertainty Estimation via Stochastic Data Centering". NeurIPS 2022. [pdf]
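Temperature scaling, the recalibration method studied by Guo et al., fits a single scalar T that divides the logits before the softmax, chosen to minimize validation negative log-likelihood. The sketch below uses toy logits and a grid search in place of the paper's optimizer:

```python
import numpy as np

def nll(logits, labels, T):
    """Average negative log-likelihood of the true labels at temperature T."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                     # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.1, 5.0, 50)):
    """Pick the scalar T minimizing validation NLL (simple grid search)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Overconfident toy logits where half the predictions are wrong:
# the fitted temperature exceeds 1, softening the predicted probabilities.
logits = np.array([[10.0, 0.0], [0.0, 10.0], [9.0, 0.0], [0.0, 9.0]])
labels = np.array([0, 1, 1, 0])
T = fit_temperature(logits, labels)
print(T > 1.0)
```

Because T rescales all logits uniformly, the argmax prediction (and hence accuracy) is unchanged; only the confidence values move.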
7 Online/Continual Learning [slides]
  • Chen, Shrivastava, Gupta. "NEIL: Extracting Visual Knowledge from Web Data". ICCV 2013. [pdf]
  • Lopez-Paz, Ranzato. "Gradient Episodic Memory for Continual Learning". NeurIPS 2017. [pdf]
  • Farquhar, Gal. "Towards Robust Evaluations of Continual Learning". Preprint 2019. [pdf]
8 Unsupervised/Self-Supervised/Contrastive Learning [slides]
  • Doersch, Gupta, Efros. "Unsupervised Visual Representation Learning by Context Prediction". ICCV 2015. [pdf]
  • Chen, Kornblith, Norouzi, Hinton. "A Simple Framework for Contrastive Learning of Visual Representations". ICML 2020. [pdf]
  • Radford, Kim, Hallacy, Ramesh, Goh, Agarwal, Sastry, Askell, Mishkin, Clark, Krueger, Sutskever. "Learning Transferable Visual Models From Natural Language Supervision". ICML 2021. [pdf]
9 Test-Time Learning, Adaptation [slides]
  • Sun, Wang, Liu, Miller, Efros, Hardt. "Test-Time Training with Self-Supervision for Generalization under Distribution Shifts". ICML 2020. [pdf]
  • Wang, Shelhamer, Liu, Olshausen, Darrell. "Tent: Fully Test-Time Adaptation by Entropy Minimization". ICLR 2021. [pdf]
  • Zhang, Levine, Finn. "MEMO: Test Time Robustness via Adaptation and Augmentation". NeurIPS 2022. [pdf]
10 Machine Unlearning, Model Editing [slides]
  • Bourtoule, Chandrasekaran, Choquette-Choo, Jia, Travers, Zhang, Lie, Papernot. "Machine Unlearning". IEEE S&P 2021. [pdf]
  • Abadi, Chu, Goodfellow, McMahan, Mironov, Talwar, Zhang. "Deep Learning with Differential Privacy". CCS 2016. [pdf]
  • Sinitsin, Plokhotnyuk, Pyrkin, Popov, Babenko. "Editable Neural Networks". ICLR 2020. [pdf]
11 Interpretability, Explainability, Compositionality [slides]
  • Selvaraju, Cogswell, Das, Vedantam, Parikh, Batra. "Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization". ICCV 2017. [pdf]
  • Ribeiro, Singh, Guestrin. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier". KDD 2016. [pdf]
  • Purushwalkam, Nickel, Gupta, Ranzato. "Task-Driven Modular Networks for Zero-Shot Compositional Learning". ICCV 2019. [pdf]
12 Logic, Semantics, and "Commonsense" [slides]
  • Gokhale, Banerjee, Baral, Yang. "VQA-LOL: Visual Question Answering under the Lens of Logic", ECCV 2020. [pdf]
  • Thrush, Jiang, Bartolo, Singh, Williams, Kiela, Ross. "Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality". CVPR 2022. [pdf]
  • Zellers, Bisk, Farhadi, Choi. "From Recognition to Cognition: Visual Commonsense Reasoning". CVPR 2019. [pdf]
13 Robustness Tradeoffs [slides]
  • Gokhale, Mishra, Luo, Sachdeva, Baral. "Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness". ACL 2022. [pdf]
  • Moayeri, Banihashem, Feizi. "Explicit Tradeoffs between Adversarial and Natural Distributional Robustness". NeurIPS 2022. [pdf]
  • Teney, Lin, Oh, Abbasnejad. "ID and OOD Performance Are Sometimes Inversely Correlated on Real-World Datasets". NeurIPS 2023. [pdf]
14 Invited Talk / "The Last Lecture" [slides]


Please consult the syllabus for details. The class has a mix of PhD, MS, and BS students. We believe that anyone with the above prerequisites and a will to learn will do well. Work hard, engage and participate in class, learn how to read, write, and present research articles, be creative in your projects, and seek help when needed!


Projects will be judged on the basis of relative growth (from where you start to where you end). Graduate projects should have an original and unique research hypothesis with the potential for publication. Undergraduate students may also propose original research hypotheses, but are allowed to work on an idea provided by the instructors (i.e., you get to skip "ideation"), on innovative applications, or on combinations of existing work.

Late Submissions

Academic Integrity

Please read UMBC's policy on Academic Integrity. I take academic integrity seriously, and I hope that we will never have to deal with violations; they are never pleasant for anyone involved. Please read the policies stated in the Syllabus.