TY - GEN
T1 - Distributed Sparse Optimization Based on Minimax Concave and Consensus Promoting Penalties
T2 - 30th European Signal Processing Conference, EUSIPCO 2022
AU - Komuro, Kei
AU - Yukawa, Masahiro
AU - Cavalcante, Renato L.G.
N1 - Funding Information:
This work was supported by JST SICORP Grant Number JPMJSC20C6, Japan. The authors also acknowledge the financial support of the Federal Ministry of Education and Research of Germany (BMBF) under grant 01DR21009 and under the program “Souverän. Digital. Vernetzt.” (joint project 6G-RIC, project identification number 16KISK020K). A full version of this work is given in [1].
Publisher Copyright:
© 2022 European Signal Processing Conference, EUSIPCO. All rights reserved.
PY - 2022
Y1 - 2022
AB - We propose a distributed optimization framework that generates accurate sparse estimates while admitting an algorithmic solution with guaranteed convergence to a global minimizer. To this end, the proposed problem formulation combines the minimax concave penalty with an additional penalty, called the consensus promoting penalty (CPP), which induces convexity in the resulting optimization problem. The problem is solved with an exact first-order proximal gradient algorithm, which employs a pair of proximity operators and is referred to as the distributed proximal and debiasing-gradient (DPD) method. Numerical examples show that the CPP not only convexifies the whole cost function but also accelerates convergence in terms of system mismatch.
KW - distributed optimization
KW - Moreau envelope
KW - nonconvex penalty
KW - proximity operator
KW - sparseness
UR - http://www.scopus.com/inward/record.url?scp=85141011645&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85141011645&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85141011645
T3 - European Signal Processing Conference
SP - 1841
EP - 1845
BT - 30th European Signal Processing Conference, EUSIPCO 2022 - Proceedings
PB - European Signal Processing Conference, EUSIPCO
Y2 - 29 August 2022 through 2 September 2022
ER -