TY - JOUR
T1 - A positive-definiteness-assured block Gibbs sampler for Bayesian graphical models with shrinkage priors
AU - Oya, Sakae
AU - Nakatsuma, Teruo
N1 - Funding Information:
This research is supported by JSPS Grants-in-Aid for Scientific Research Grant Number 19K01592 and the Keio Economic Society.
Publisher Copyright:
© 2022, The Author(s) under exclusive licence to Japanese Federation of Statistical Science Associations.
PY - 2022
Y1 - 2022
AB - Although the block Gibbs sampler for the Bayesian graphical LASSO proposed by Wang (2012) has been widely applied and extended to various shrinkage priors in recent years, it has a less noticeable but potentially severe disadvantage: the positive definiteness of the precision matrix in the Gaussian graphical model is not guaranteed in every cycle of the Gibbs sampler. In particular, when the dimension of the precision matrix exceeds the sample size, positive definiteness is rarely maintained and the Gibbs sampler almost surely fails. In this paper, we propose modifying the original block Gibbs sampler so that the precision matrix never fails to be positive definite, by sampling it exactly from the domain of positive definite matrices. As our Monte Carlo experiments show, this modification not only stabilizes the sampling procedure but also significantly improves the performance of parameter estimation and graphical structure learning. We also apply the proposed algorithm to a graphical model of monthly stock return data in which the number of stocks exceeds the length of the sample period, demonstrating its stability and scalability.
KW - Gibbs sampler
KW - Graphical model
KW - Hit-and-run algorithm
KW - Positive definiteness
KW - Precision matrix
UR - http://www.scopus.com/inward/record.url?scp=85125522570&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125522570&partnerID=8YFLogxK
U2 - 10.1007/s42081-022-00147-1
DO - 10.1007/s42081-022-00147-1
M3 - Article
AN - SCOPUS:85125522570
SN - 2520-8764
JO - Japanese Journal of Statistics and Data Science
JF - Japanese Journal of Statistics and Data Science
ER -