Robust principal component analysis (PCA) is widely used in many applications, such as image processing, data mining, and bioinformatics. Existing methods for solving robust PCA are mostly based on nuclear norm minimization. These methods minimize all the singular values simultaneously, so the rank cannot be well approximated in practice. We extend the idea of truncated nuclear norm regularization (TNNR) to robust PCA and consider truncated nuclear norm minimization (TNNM) instead of nuclear norm minimization (NNM). This method minimizes only the smallest N - r singular values to preserve the low-rank components, where N is the number of singular values and r is the matrix rank. Moreover, we propose an effective way to determine r via the shrinkage operator. We then develop an effective iterative algorithm based on the alternating direction method to solve this optimization problem. Experimental results demonstrate the efficiency and accuracy of the TNNM method, which is also much more robust in terms of the rank of the reconstructed matrix and the sparsity of the error.
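The truncation idea above, shrinking only the smallest N - r singular values while leaving the largest r untouched, can be sketched as a truncated singular value thresholding operator. This is a minimal illustration, not the paper's algorithm; the function name and the threshold parameter tau are our own assumptions:

```python
import numpy as np

def truncated_svt(X, tau, r):
    """Shrink only the singular values beyond the first r by tau.

    X   : input matrix
    tau : shrinkage threshold (assumed hyperparameter)
    r   : number of leading singular values kept intact
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = s.copy()
    # Soft-threshold only the smallest N - r singular values.
    s_shrunk[r:] = np.maximum(s_shrunk[r:] - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

In a full ADMM scheme this operator would replace the ordinary singular value thresholding step used for nuclear norm minimization, so the dominant r singular values are never penalized.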
The l_(2,1)-norm regularization can efficiently recover group-sparse signals whose non-zero coefficients occur in a few groups. It is well known that l_(2,1)-norm regularization solved with the classic alternating direction method shows strong stability and robustness in many applications. However, the l_(2,1)-norm regularization requires a relatively large number of measurements. To recover group-sparse signals with a better sparsity-measurement tradeoff, we propose truncated l_(2,1)-norm regularization and reweighted l_(2,1)-norm regularization for the recovery of group-sparse signals, based on iterative support detection. The proposed algorithms are tested and compared with the l_(2,1)-norm model on a series of synthetic signals and the Shepp-Logan phantom. Experimental results demonstrate the performance of the proposed algorithms, especially at low sample rates and high sparsity levels.
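The group-sparsity behaviour of the l_(2,1)-norm comes from its proximal operator, which shrinks the l2 norm of each group and zeroes out weak groups entirely. The sketch below shows this standard group soft-thresholding step; the function name, the group encoding as index lists, and the threshold tau are illustrative assumptions, not the paper's specific algorithm:

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximal operator of tau * ||x||_{2,1} for non-overlapping groups.

    x      : signal vector
    groups : list of index lists, one per group (assumed partition of x)
    tau    : shrinkage threshold (assumed hyperparameter)
    """
    out = np.zeros_like(x, dtype=float)
    for idx in groups:
        g = x[idx]
        nrm = np.linalg.norm(g)
        # Groups whose l2 norm is below tau are set to zero;
        # the rest are scaled down uniformly.
        if nrm > tau:
            out[idx] = (1.0 - tau / nrm) * g
    return out
```

A truncated variant in the spirit of the abstract would simply skip the shrinkage for groups flagged by the detected support, analogous to keeping the top r singular values in TNNM.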