Improving Rate of Convergence via Gain Adaptation in Multi-Agent Distributed ADMM Framework

Towfiq Rahman, Zhihua Qu, Toru Namerikawa

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


In this paper, the Alternating Direction Method of Multipliers (ADMM) is investigated for distributed optimization problems in a networked multi-agent system. In particular, a new adaptive-gain ADMM algorithm is derived in closed form, under the standard convexity assumption, in order to greatly speed up the convergence of ADMM-based distributed optimization. Using the Lyapunov direct approach, the proposed solution embeds control gains into the weighted network matrix among the agents and uses those weights as adaptive penalty gains in the augmented Lagrangian. It is shown that the proposed closed-loop gain adaptation scheme significantly improves the convergence time of the underlying ADMM optimization. Convergence analysis is provided, and simulation results are included to demonstrate the effectiveness of the proposed scheme.
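To illustrate the idea of adapting the penalty gain in the augmented Lagrangian, the sketch below runs consensus ADMM on a toy averaging problem and adjusts the penalty with the standard residual-balancing heuristic. This heuristic merely stands in for the paper's Lyapunov-based, network-weighted gain adaptation, which is not reproduced here; the cost functions, constants (`mu`, `tau`), and function name are all illustrative assumptions.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100, mu=10.0, tau=2.0):
    """Consensus ADMM for min_x sum_i (1/2)(x - a_i)^2 with an
    adaptive penalty rho (residual-balancing rule, used here only
    as a stand-in for the paper's gain adaptation scheme)."""
    n = len(a)
    x = np.zeros(n)   # local estimates, one per agent
    u = np.zeros(n)   # scaled dual variables
    z = 0.0           # global consensus variable
    for _ in range(iters):
        # Local x-update: closed form for the quadratic local cost
        x = (a + rho * (z - u)) / (1.0 + rho)
        z_old = z
        z = np.mean(x + u)       # consensus (averaging) step
        u = u + x - z            # scaled dual update
        # Adapt rho by balancing primal and dual residuals;
        # scaled duals must be rescaled whenever rho changes.
        r = np.linalg.norm(x - z)                # primal residual
        s = rho * np.sqrt(n) * abs(z - z_old)    # dual residual
        if r > mu * s:
            rho *= tau
            u /= tau
        elif s > mu * r:
            rho /= tau
            u *= tau
    return z

z_star = consensus_admm(np.array([1.0, 2.0, 3.0, 10.0]))
print(z_star)  # converges toward the average, 4.0
```

For this separable quadratic objective the fixed point is the average of the local data, so the run is easy to check; the benefit of adapting `rho` (or, in the paper, the network weights acting as penalty gains) shows up as fewer iterations to reach a given residual tolerance.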

Original language: English
Article number: 9075189
Pages (from-to): 80480-80489
Number of pages: 10
Journal: IEEE Access
Publication status: Published - 2020


Keywords
  • ADMM
  • Distributed optimization
  • Lyapunov direct method
  • gain adaptation
  • rate of convergence

ASJC Scopus subject areas

  • Computer Science(all)
  • Materials Science(all)
  • Engineering(all)


