Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields

Tatsuya Koyakumaru, Masahiro Yukawa, Eduardo Pavez, Antonio Ortega

Research output: Article › Peer-reviewed

1 Citation (Scopus)

Abstract

This paper presents a convex-analytic framework to learn sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the ℓ1 norm to attain graphs with better interpretability. Specifically, we use the weakly convex minimax concave penalty (the difference between the ℓ1 norm and the Huber function), which is known to yield sparse solutions with lower estimation bias than the ℓ1 norm for regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function to our cost function. The problem can be solved efficiently by the primal-dual splitting method, for which the admissible conditions ensuring provable convergence are presented. Numerical examples show that the proposed method significantly outperforms existing graph learning methods with reasonable computation time.
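For reference, the following is a minimal sketch (in LaTeX) of the minimax concave (MC) penalty written as the ℓ1 norm minus the Huber function, as described in the abstract; the specific parameterization with threshold γ > 0 is an assumed standard form, not taken verbatim from the paper.

% MC penalty as the difference between the l1 norm and the Huber function
% (standard parameterization with threshold gamma > 0; assumed here).
\[
  \rho_{\mathrm{MC}}(\mathbf{w}) \;=\; \|\mathbf{w}\|_{1} \;-\; H_{\gamma}(\mathbf{w}),
  \qquad
  H_{\gamma}(\mathbf{w}) \;=\; \sum_{i} h_{\gamma}(w_{i}),
\]
\[
  h_{\gamma}(t) \;=\;
  \begin{cases}
    \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma, \\[1ex]
    |t| - \dfrac{\gamma}{2}, & |t| > \gamma.
  \end{cases}
\]

Under this form, the penalty flattens to the constant γ/2 for |t| > γ, which is why it incurs less estimation bias than the ℓ1 norm on large coefficients, and its weak convexity is what allows the quadratic term mentioned in the abstract to restore overall convexity of the cost.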

Original language: English
Pages (from-to): 23-34
Number of pages: 12
Journal: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Volume: E106A
Issue number: 1
DOI
Publication status: Published - Jan 2023

ASJC Scopus subject areas

  • Signal Processing
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering
  • Applied Mathematics
