Sparse Precision (Inverse Covariance) Estimation
The goal of spice is to provide classical statistical methods for estimating sparse precision (inverse covariance) matrices for functional connectivity analysis in brain networks, making these methods accessible and easy to use for researchers and practitioners in neuroimaging.
Methods
| Method | Reference |
|---|---|
| Graphical lasso (`method = "lasso"`) | Friedman et al. (2008) |
| Graphical ridge (`method = "ridge"`) | van Wieringen and Peeters (2016) |
| Graphical elastic net (`method = "elnet"`) | Zou and Hastie (2005) |
| CLIME (`method = "clime"`) | Cai et al. (2011) |
| TIGER (`method = "tiger"`) | Liu and Wang (2017) |
| Graphical adaptive lasso (`method = "adapt"`) | Zou (2006); Fan et al. (2009) |
| Arctangent type penalty (`method = "atan"`) | Wang and Zhu (2016) |
| Exponential type penalty (`method = "exp"`) | Wang et al. (2018) |
| MCP (`method = "mcp"`) | Zhang (2010) |
| SCAD (`method = "scad"`) | Fan and Li (2001); Fan et al. (2009) |
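The estimators above are selected through a common `method` argument. A minimal usage sketch is shown below; the main function name `spice()`, its arguments, and the shape of its return value are assumptions for illustration, not the package's documented API.

```r
## Hypothetical usage sketch -- spice() and its arguments are assumed,
## not taken from the package documentation.
library(spice)

## Simulate fMRI-like data: 200 time points over 10 brain regions
set.seed(42)
X <- matrix(rnorm(200 * 10), nrow = 200, ncol = 10)

## Estimate a sparse precision matrix with the graphical lasso;
## other estimators would be chosen via method = "ridge", "clime", etc.
fit <- spice(X, method = "lasso")

## Nonzero off-diagonal entries of the estimated precision matrix
## correspond to edges in the functional connectivity graph.
```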
Installation
You can install the development version of spice from GitHub with:
``` r
# install.packages("devtools")
devtools::install_github("Carol-seven/spice")
```

References
Cai, Tony T., Weidong Liu, and Xi Luo. 2011. “A Constrained \(\ell\)1 Minimization Approach to Sparse Precision Matrix Estimation.” Journal of the American Statistical Association 106 (494): 594–607. https://doi.org/10.1198/jasa.2011.tm10155.
Fan, Jianqing, Yang Feng, and Yichao Wu. 2009. “Network Exploration via the Adaptive LASSO and SCAD Penalties.” The Annals of Applied Statistics 3 (2): 521–41. https://doi.org/10.1214/08-aoas215.
Fan, Jianqing, and Runze Li. 2001. “Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association 96 (456): 1348–60. https://doi.org/10.1198/016214501753382273.
Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. “Sparse Inverse Covariance Estimation with the Graphical Lasso.” Biostatistics 9 (3): 432–41. https://doi.org/10.1093/biostatistics/kxm045.
Liu, Han, and Lie Wang. 2017. “TIGER: A Tuning-Insensitive Approach for Optimally Estimating Gaussian Graphical Models.” Electronic Journal of Statistics 11 (1): 241–94. https://doi.org/10.1214/16-EJS1195.
Wang, Yanxin, Qibin Fan, and Li Zhu. 2018. “Variable Selection and Estimation Using a Continuous Approximation to the \(L_0\) Penalty.” Annals of the Institute of Statistical Mathematics 70 (1): 191–214. https://doi.org/10.1007/s10463-016-0588-3.
Wang, Yanxin, and Li Zhu. 2016. “Variable Selection and Parameter Estimation with the Atan Regularization Method.” Journal of Probability and Statistics 2016: 6495417. https://doi.org/10.1155/2016/6495417.
Wieringen, Wessel N. van, and Carel F. W. Peeters. 2016. “Ridge Estimation of Inverse Covariance Matrices from High-Dimensional Data.” Computational Statistics & Data Analysis 103: 284–303. https://doi.org/10.1016/j.csda.2016.05.012.
Zhang, Cun-Hui. 2010. “Nearly Unbiased Variable Selection Under Minimax Concave Penalty.” The Annals of Statistics 38 (2): 894–942. https://doi.org/10.1214/09-AOS729.
Zou, Hui. 2006. “The Adaptive Lasso and Its Oracle Properties.” Journal of the American Statistical Association 101 (476): 1418–29. https://doi.org/10.1198/016214506000000735.
Zou, Hui, and Trevor Hastie. 2005. “Regularization and Variable Selection via the Elastic Net.” Journal of the Royal Statistical Society Series B: Statistical Methodology 67 (2): 301–20. https://doi.org/10.1111/j.1467-9868.2005.00527.x.
