Hello! Thanks for dropping by. I am a postdoc in the Distributed Computing Laboratory (DCL), part of the Computer Science Department at EPFL (Lausanne, Switzerland), supervised by Prof. Rachid Guerraoui. Previously, I was a postdoc in the Computer Science Department at Georgetown University (Washington, DC, USA), sponsored by Prof. Nitin H. Vaidya. I obtained my PhD from the Department of Mechanical Engineering at the University of Maryland, College Park (Maryland, USA). My PhD dissertation, supervised by Prof. Nikhil Chopra, was on privacy in distributed multi-agent collaboration: consensus and optimization.
Research Work: Broadly, I am interested in the design and analysis of algorithms for distributed optimization and machine learning. My recent research focuses on Byzantine fault-tolerance (i.e., robustness to adversarial nodes) in distributed, including decentralized, machine learning. Some of the challenges in this area that I am currently pursuing are data heterogeneity, (differential) privacy constraints, and sparse communication topologies. In the past, I have also worked on preconditioning of the gradient-descent method, applied to the distributed optimization framework. During my PhD, my work primarily focused on robustness (to adversarial actors) and privacy in distributed optimization and average consensus. For an up-to-date list of my publications, please visit my DBLP profile.
Teaching: I have given guest lectures on Byzantine robustness in distributed machine learning in the distributed algorithms course taught by Prof. Rachid Guerraoui at EPFL. During my postdoc at Georgetown University, I offered a seminar course on distributed machine learning.