Some of my recent research work -
Byzantine Fault-Tolerance in (High-Dimensional) Peer-to-Peer Distributed Gradient-Descent (arXiv, Feb'21),
with Nitin Vaidya.
In this paper, we propose the first-ever mechanism to confer provable (Byzantine) fault-tolerance to the peer-to-peer (P2P) distributed gradient-descent algorithm. In contrast to prior work, our algorithm is applicable to higher dimensions and does not rely on Byzantine broadcast. Unlike the more commonly assumed server-based system architecture, we consider the P2P architecture wherein the nodes may only interact amongst themselves and do not have access to a trustworthy server. Ensuring fault-tolerance in a P2P setting is more challenging than in a server-based setting. Common applications of our algorithm include P2P distributed machine learning, hypothesis testing, and state estimation.
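To give a concrete flavor of what robust peer-to-peer aggregation can look like, here is a minimal Python sketch of one round of fault-tolerant P2P gradient descent. The aggregation rule shown (coordinate-wise trimmed mean) and the step size are illustrative assumptions of mine, not the specific mechanism proposed in the paper.

import numpy as np

def trimmed_mean(vectors, f):
    # Coordinate-wise trimmed mean: in each coordinate, drop the f largest
    # and f smallest values, then average the remaining ones.
    X = np.sort(np.stack(vectors), axis=0)
    return X[f:len(vectors) - f].mean(axis=0)

def p2p_gd_round(x_i, neighbor_estimates, local_gradient, f, step_size=0.01):
    # One update for node i: robustly fuse the estimates received from its
    # neighbors (together with its own), then take a local gradient step.
    fused = trimmed_mean(neighbor_estimates + [x_i], f)
    return fused - step_size * local_gradient(fused)

The trimmed mean bounds the influence that any f faulty neighbors can exert on the fused estimate, which is the basic intuition behind many robust aggregation rules.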
Approximate Byzantine Fault-Tolerance in Distributed Optimization (arXiv, Jan'21),
with Shuo Liu and Nitin Vaidya.
An important extension of our prior work on exact Byzantine fault-tolerance, which appeared in the proceedings of PODC'20. In this paper, as the name suggests, we study the problem of approximate Byzantine fault-tolerance, a generalization of exact fault-tolerance. The results presented here have much wider applicability than the results on exact fault-tolerance. We present generic fault-tolerance properties that can be applied directly to contemporary real-world distributed optimization problems, such as federated learning, multi-sensor networks, and network resource allocation. (The utility of our results in the homogeneous distributed machine learning case is demonstrated here.)
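In rough notation of my own (the formal definition is in the paper): an algorithm is approximately fault-tolerant, or (f, eps)-resilient, if despite up to f of the n agents being Byzantine faulty it outputs a point within distance eps of a minimum point of the aggregate cost of the non-faulty agents; exact fault-tolerance corresponds to the special case eps = 0.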
Byzantine Fault-Tolerance in Decentralized Optimization under 2f-Redundancy (arXiv, Sept'20),
with Thinh T. Doan and Nitin Vaidya.
In this work, we extend our prior results on Byzantine fault-tolerant distributed optimization for the server-based system architecture to the decentralized peer-to-peer system architecture. This paper presents the first-ever decentralized optimization algorithm with provable exact Byzantine fault-tolerance for high-dimensional optimization problems. The paper has been accepted to the proceedings of the 2021 American Control Conference.
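Roughly speaking (the formal statement is in the paper), the 2f-redundancy condition requires that for every subset S of the n agents with |S| >= n - 2f, the minimum points of the aggregate cost of the agents in S coincide with the minimum points of the aggregate cost of all n agents; this redundancy is what makes exact fault-tolerance achievable despite up to f faulty agents.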
Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem (arXiv, Aug'20),
with Kushal Chakraborty and Nikhil Chopra.
In this work, we propose the first-ever distributed iterative optimization method with a superlinear rate of convergence. Specifically, we show that the traditional gradient-descent method, when coupled with an iteratively updated pre-conditioner matrix, can achieve a superlinear convergence rate, superior to state-of-the-art accelerated methods, namely Nesterov's accelerated method, the heavy-ball method, and the quasi-Newton BFGS method. (Variants of this work, showing improved robustness to system noise, have been published in the proceedings of the 2020 American Control Conference and in IEEE Control Systems Letters, 2021.)
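Here is a minimal, centralized Python sketch of the general idea for the linear least-squares cost 0.5*||A x - b||^2: run gradient descent while simultaneously refining a pre-conditioner matrix K toward the inverse Hessian (A^T A)^{-1}. The particular pre-conditioner update, initialization, and step sizes below are simplified stand-ins of my own, not the exact algorithm or tuning from the paper.

import numpy as np

def ipg_least_squares(A, b, iters=200):
    m, n = A.shape
    H = A.T @ A                              # Hessian of the quadratic cost
    alpha = 1.0 / np.linalg.norm(H, 2)       # keeps the pre-conditioner update contractive
    x = np.zeros(n)
    K = alpha * np.eye(n)                    # initial pre-conditioner guess
    for _ in range(iters):
        K = K - alpha * (H @ K - np.eye(n))  # refine K toward H^{-1} (Richardson-style)
        grad = A.T @ (A @ x - b)             # gradient of the least-squares cost
        x = x - K @ grad                     # pre-conditioned gradient step
    return x

As K approaches (A^T A)^{-1}, the update approaches a Newton step, which is the intuition behind the accelerated convergence.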
Preserving Statistical Privacy in Distributed Optimization (arXiv, Aug'20), with Shripad Gade, Nikhil Chopra, and Nitin H. Vaidya.
We present a distributed optimization protocol that preserves the statistical privacy of agents' local cost functions against a passive adversary that corrupts some agents in a peer-to-peer network. Unlike the more widely used differential-privacy protocols, our protocol does so without affecting the correctness of the solution. The work has been published in IEEE Control Systems Letters, 2021.
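A generic ingredient of such privacy schemes, shown in the illustrative Python sketch below, is for neighboring agents to exchange correlated perturbations that cancel in the network-wide aggregate, so the sum of the local contributions (and hence the solution) is unchanged while each individual contribution is obscured. This sketch only conveys the flavor of zero-sum masking; it is not the specific protocol of the paper, and the function and variable names are hypothetical.

import numpy as np

def masked_gradients(local_grads, edges, rng, scale=1.0):
    # local_grads: list of per-agent gradient vectors.
    # edges: list of (i, j) neighbor pairs in the communication graph.
    masked = [g.copy() for g in local_grads]
    for i, j in edges:
        r = rng.normal(scale=scale, size=local_grads[i].shape)
        masked[i] += r   # agent i adds the shared random mask
        masked[j] -= r   # agent j subtracts it, so the pairwise masks cancel
    return masked

rng = np.random.default_rng(0)
grads = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([-2.0, 0.3])]
edges = [(0, 1), (1, 2), (0, 2)]
masked = masked_gradients(grads, edges, rng)
assert np.allclose(sum(masked), sum(grads))   # the aggregate is preserved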
Some highlights of my education -
Besides literacy in three languages, namely Hindi (native), English, and Gujarati, I have also managed to obtain a couple of academic degrees: