Distributed primal–dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints


Correspondence to: Shengyuan Xu, School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China.

E-mail: syxu02@yahoo.com.cn


We consider the multi-agent optimization problem in which multiple agents cooperate to minimize the sum of their local convex objective functions, subject to global inequality constraints and a convex constraint set, over a network. By characterizing the primal and dual optimal solutions as the saddle points of the associated Lagrangian function, whose subgradients can only be evaluated with stochastic errors, we propose distributed primal–dual stochastic subgradient algorithms for two cases: (i) the time model is synchronous and (ii) the time model is asynchronous. In the first case, we obtain bounds on the convergence properties of the algorithm for a diminishing step size. In the second case, for a constant step size, we establish error bounds on the algorithm's performance. In particular, we prove that the error bounds scale as [formula omitted] in the number of agents $n$. Copyright © 2012 John Wiley & Sons, Ltd.
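To make the saddle-point iteration concrete, the following is a minimal, centralized sketch of a primal–dual stochastic subgradient update for a single agent. It is an illustrative assumption, not the paper's algorithm: the distributed versions in the paper additionally combine neighbors' iterates over the network and handle synchronous or asynchronous time models. The test problem (minimize $x_1^2 + x_2^2$ subject to $1 - x_1 - x_2 \le 0$ over a box), the noise level, and the diminishing step size $a_k = 0.5/\sqrt{k}$ are all hypothetical choices; the optimal point is $(0.5, 0.5)$.

```python
import math
import random


def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (the convex constraint set X)."""
    return [min(max(xi, lo), hi) for xi in x]


def primal_dual_stochastic_subgradient(T=20000, sigma=0.1, seed=0):
    """Saddle-point iteration with noisy subgradients for
    min f(x) = x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0,  x in [-5, 5]^2.
    Returns the average of the last 1000 primal iterates."""
    rng = random.Random(seed)
    x = [0.0, 0.0]   # primal iterate
    mu = 0.0         # dual iterate (Lagrange multiplier), kept nonnegative
    tail = [0.0, 0.0]
    tail_len = 1000
    for k in range(1, T + 1):
        a = 0.5 / math.sqrt(k)  # diminishing step size (synchronous case)
        # Noisy subgradient of the Lagrangian in x:
        # grad f(x) + mu * grad g(x) + stochastic error
        gx0 = 2 * x[0] - mu + rng.gauss(0, sigma)
        gx1 = 2 * x[1] - mu + rng.gauss(0, sigma)
        x = project_box([x[0] - a * gx0, x[1] - a * gx1], -5.0, 5.0)
        # Noisy dual subgradient is g(x); project mu onto the nonnegative ray.
        g_val = 1.0 - x[0] - x[1] + rng.gauss(0, sigma)
        mu = max(0.0, mu + a * g_val)
        # Accumulate the tail average of the primal iterates.
        if k > T - tail_len:
            tail = [tail[0] + x[0] / tail_len, tail[1] + x[1] / tail_len]
    return tail


if __name__ == "__main__":
    x_avg = primal_dual_stochastic_subgradient()
    print(x_avg)  # close to the optimum (0.5, 0.5)
```

Averaging the iterates is the standard device for extracting an approximately optimal point from a stochastic subgradient scheme; here the tail average smooths out the persistent noise in the individual iterates.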