Nirupam Gupta

C.V.
Google Scholar
LinkedIn

People
Ongoing Projects
Publications
Book

निरुपम गुप्ता / Nirupam Gupta

I am a Tenure-Track Assistant Professor in the ML Section of the Department of Computer Science at the University of Copenhagen (DIKU). Before joining DIKU, I was a Postdoctoral Researcher in the School of Computer Science at EPFL (Switzerland) and in the Department of Computer Science at Georgetown University (USA). I obtained my PhD from the University of Maryland, College Park (USA) and my Bachelor's degree from the Indian Institute of Technology (IIT) Delhi (India).

Research. I work primarily on distributed machine learning (a.k.a. federated learning) algorithms, focusing on the challenges of robustness and privacy. To learn the basics of robustness in the context of distributed learning, check out my latest book, Robust Machine Learning: Distributed Methods for Safe AI. I have several ongoing projects on these topics, some of which are highlighted below.

Teaching. I teach Privacy in Machine Learning (PriMaL) and Machine Learning B (MLB) at DIKU. PriMaL is offered in the Fall semester, while MLB is offered in the Spring semester. Both courses are offered in a hybrid format and support fully remote participation. For more details, such as course registration, check out this website for ML courses at DIKU.

!!Open PhD call!! I am currently looking for a motivated PhD student to work in collaboration with me, Rafael Pinot, and Aurélien Bellet on the topic of robustness in machine learning. Details of the PhD call can be found here. The position will tentatively start in April 2026 as a research internship and continue as a PhD position from September 2026. Please contact us (details in the call) if you are interested.

"Nothing is harder, yet nothing is more necessary, than to speak of certain things whose existence is neither demonstrable nor probable. ..."
- Hermann Hesse (The Glass Bead Game)


Ongoing Projects

I'm looking for motivated PhD students and Postdocs to work with me on the following topics. If you are interested in any of these projects, please feel free to reach out to me (nigu[at]di.ku.dk).

  1. Robust and Private Federated Learning over Heterogeneous Data. While traditional federated learning approaches focus on model accuracy, this project aims to enhance both robustness and privacy in federated settings, particularly when dealing with heterogeneous data distributions. Challenges include addressing robustness against data and model poisoning in the presence of data heterogeneity, ensuring privacy constraints (such as differential privacy), and optimizing communication efficiency. This project will explore novel algorithms and frameworks to tackle these challenges effectively. The main focus of the project is theory and algorithm design. Applications include healthcare, finance, and other domains where data privacy is crucial.
    As part of this project, I have co-authored the book Robust Machine Learning: Distributed Methods for Safe AI, which provides a pedagogical overview of robust machine learning. Additionally, I have contributed a chapter to the (open access) book Large Language Models in Cybersecurity, highlighting the difficulty of achieving provable robustness and privacy in LLMs.
    Call for collaboration: If you are interested in collaborating, please reach out! I am currently collaborating with researchers from EPFL, INRIA (Montpellier & Sophia-Antipolis), and Sorbonne University on various aspects of this project.
  2. Robustness in Decentralized Learning over Sparse Communication Graphs. In a peer-to-peer (P2P) setting, this project investigates the robustness of decentralized learning algorithms when communication is limited to a sparse graph structure. Key challenges include ensuring reliable model updates despite potential adversarial nodes and optimizing the trade-off between communication efficiency and learning accuracy. Existing approaches to robustness often rely on either centralized architectures or strong assumptions about the communication graph, which may not be feasible in decentralized settings. Applications include distributed sensor networks, edge computing, and other scenarios, such as drone networks, where robustness against communication/node failures is essential.
  3. Robust Federated Inference. This project focuses on enhancing the robustness of federated inference mechanisms, ensuring reliable and accurate predictions even in the presence of adversarial clients or unreliable communication channels.
  4. Fairness in Robust Distributed Learning. This project aims to address fairness concerns in distributed learning settings, ensuring that models are not only robust but also equitable across different user groups and data distributions.
  5. Robustness in Large Language Models (LLMs). This project investigates the unique challenges of ensuring robustness in LLMs, particularly in the context of adversarial attacks and data poisoning.

Recent Publications (last 5 years)

A complete list of my publications can be found on my DBLP profile and Google Scholar profile. Here are some of my selected publications from the last 5 years, with authors listed in alphabetical order:
  1. Adaptive Gradient Clipping for Robust Federated Learning. Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Ahmed Jellouli, Geovani Rizk, and John Stephan. International Conference on Learning Representations (ICLR), 2025 [Spotlight, acceptance rate of 5%].
  2. Revisiting Ensembling in One-Shot Federated Learning. Youssef Allouah, Akash Dhasade, Rachid Guerraoui, Nirupam Gupta, Anne-Marie Kermarrec, Rafael Pinot, Rafael Pires, Rishi Sharma. In the 38th Conference on Neural Information Processing Systems (NeurIPS), 2024.
  3. Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients. Youssef Allouah, Abdellah El Mrini, Rachid Guerraoui, Nirupam Gupta and Rafael Pinot. In the 38th Conference on Neural Information Processing Systems (NeurIPS), 2024.
  4. Tackling Byzantine Clients in Federated Learning. Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, and Sasha Voitovych. In the Proceedings of the 41st International Conference on Machine Learning (ICML), 2024.
  5. Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity. Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, and Geovani Rizk. In the 37th Conference on Neural Information Processing Systems (NeurIPS), 2023 [Spotlight, acceptance rate of 5%].
  6. On the Privacy-Robustness-Utility Trilemma in Distributed Learning. Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, and John Stephan. In the Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
  7. Robust Collaborative Learning with Linear Gradient Overhead. Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, and John Stephan. In the Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
  8. Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity. Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, and John Stephan. In the Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
  9. Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums. Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, and John Stephan. In the Proceedings of the 39th International Conference on Machine Learning (ICML), 2022.
  10. Differential Privacy and Byzantine Resilience in SGD: Do They Add Up? Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, and John Stephan. The ACM Symposium on Principles of Distributed Computing (PODC), 2021.
  11. Approximate Byzantine Fault-Tolerance in Distributed Optimization. Shuo Liu, Nirupam Gupta, and Nitin H. Vaidya. The ACM Symposium on Principles of Distributed Computing (PODC), 2021.