Stagewise Matching Pursuit Methods for Sparse Representation
Abstract
In recent years, with the development of various transform-domain processing techniques in signal processing (such as the Fourier transform, the wavelet transform, and the singular value decomposition), the theory of sparse signal representation has attracted growing attention and has been widely applied to signal denoising, data compression, feature extraction, pattern classification, and blind source separation. In high-dimensional information processing in particular, sparse representation theory directly gave rise to compressed sensing, one of the most innovative theoretical achievements in information theory since Shannon's sampling theorem. Research on sparse representation therefore plays a fundamental role in the further development of signal processing theory.
     This thesis focuses on methods for the sparse representation of signals in overcomplete systems. Given a dictionary Φ composed of elementary signals (atoms) and an input signal s that admits a sparse representation in Φ (i.e., s can be written as a linear combination of only a few atoms of Φ), finding the sparsest representation of s in Φ amounts to solving the optimization problem (P_0): min ||x||_0 s.t. Φx = s. (P_0) has been proven NP-hard, but recent results show that when the representation is sufficiently sparse and the coherence parameter of Φ is small, pursuit methods such as Basis Pursuit and Orthogonal Matching Pursuit can approximate, or exactly obtain, the solution of (P_0); this property is called exact recovery. Because most existing pursuit methods are affected by dictionary coherence, their recovery capability is quite limited: they either demand high sparsity of the representation or incur excessive computational complexity. To remedy this, the thesis carries out the following research based on stagewise matching pursuit:
     (1) Exact recovery conditions. The performance of many existing pursuit methods is strongly affected by dictionary coherence because their theoretical foundation is the l_1/l_0 equivalence condition, i.e., the necessary and sufficient condition under which the solution of (P_1): min ||x||_1 s.t. Φx = s coincides with that of (P_0); this condition is usually stated in terms of the dictionary's coherence parameter. To escape this limitation, Chapter 2 uses frame theory to derive an l_2/l_0 equivalence condition, i.e., the necessary and sufficient condition under which the solution of (P_2): min ||x||_2 s.t. Φx = s coincides with that of (P_0). Since the minimum l_2-norm solution depends only on the linear dependence among the atoms of the dictionary, pursuit methods based on the l_2/l_0 equivalence condition are unaffected by dictionary coherence. If the atom set Γ selected across the iterations of a stagewise matching pursuit satisfies the l_2/l_0 equivalence condition, then (P_0) can be solved exactly; such a Γ is called a pseudo-frame depending on the signal.
     (2) Atom selection strategies of existing greedy pursuit methods. Since the construction of a signal-dependent pseudo-frame Γ rests mainly on the atom selection strategy, the quality of that strategy directly determines the performance of a pursuit method. Chapter 3 first analyzes the atom selection strategies of existing pure greedy pursuit methods and their inherent connections, obtaining a new result: under certain conditions, the Basis Pursuit problem can, through a series of transformations, ultimately be solved by matching pursuit. It then analyzes the atom selection strategies of existing stagewise matching pursuit methods, pointing out that although these methods depend only weakly on dictionary coherence, their threshold parameters are difficult to control.
     (3) To overcome the difficulty of setting the threshold parameter of stagewise matching pursuit, Chapter 4 proposes a stagewise ridge pursuit method for signals with isolated singularities and multiscale dictionaries. Its core idea is to extract the ridge of such a signal in the wavelet transform domain, select the atoms lying on the ridge to form Γ, and finally compute the orthogonal projection of the input signal onto Γ. It can be proven that Γ is a pseudo-frame depending on the signal, so by the l_2/l_0 equivalence condition the result of stagewise ridge pursuit is necessarily the l_0-minimal (sparsest) representation. Moreover, because its atom selection is based on the wavelet ridge algorithm, it is faster than existing pursuit methods, as the numerical experiments confirm.
     (4) To further address the difficulty of setting the threshold parameter for general signals and dictionaries, Chapter 5 proposes a stagewise orthogonal matching pursuit based on local competition. Its core idea: the atoms whose inner products with the signal are largest, together with their neighbors in the spatial or frequency domain, may be equally important; each iteration therefore forms two or more dictionary patterns from these atoms and selects the best pattern by local competition, so that atoms with smaller inner products with the signal are not permanently excluded. To avoid introducing redundant atoms, an atom elimination algorithm is also adopted. Although one iteration of this method is relatively expensive, its total number of iterations and overall computational cost are lower than those of typical pursuit methods; numerical experiments show that its sparsity preservation, coherence resistance, and convergence speed all surpass existing methods.
     (5) Since all stagewise matching pursuit methods build on orthogonal matching pursuit, fast solvers for orthogonal matching pursuit deserve attention. Chapter 6 reviews the existing fast iterative methods for solving orthogonal matching pursuit, extends the conjugate gradient pursuit method, and compares the computational resources required per iteration by the various methods. However, when Γ is rank deficient, the existing iterative solvers for orthogonal matching pursuit cannot be applied to the subproblems of stagewise matching pursuit, while the classical iterative methods for rank-deficient systems, such as SYMMLQ, MINRES, and LSQR, are generally not sparsity preserving. How to solve the subproblems of stagewise matching pursuit by fast iteration remains an open problem.
Recently, sparse signal representation has attracted great attention in signal processing, along with the development of various transform techniques such as the Fourier transform, the wavelet transform, and the singular value decomposition. Sparse representation theory has been widely applied in research areas such as noise reduction, data compression, feature extraction, pattern classification, and blind source separation. Furthermore, in high-dimensional information processing, sparse representation directly motivated compressed sensing, which has been recognized as one of the greatest theoretical innovations in information theory since Shannon's sampling theorem. Hence, research on sparse representation plays a fundamental role in the further development of signal processing theories.
     This thesis focuses on the sparse representation of signals in underdetermined (overcomplete) systems. Given a dictionary Φ of elementary signals, called atoms, and an input signal s that admits a sparse representation, i.e., s is a linear combination of only a few atoms, obtaining this sparsest representation amounts to solving the optimization problem (P_0): min ||x||_0 s.t. Φx = s. (P_0) has been proven NP-hard. However, recent research shows that methods such as Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP) can approximate, or exactly solve, (P_0) if the representation is sufficiently sparse and the coherence parameter of Φ is small enough; we call this property exact recovery. The exact recovery capability of most existing methods is limited by the dictionary's coherence, which leads either to strong sparsity requirements on the representation or to high computational complexity. To improve on this, the dissertation offers the following contributions based on stagewise matching pursuits (SMP):
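As an illustration of the greedy pursuit framework discussed above, the following is a minimal NumPy sketch of Orthogonal Matching Pursuit (an illustrative toy, not the thesis's own implementation): each iteration selects the atom most correlated with the current residual, then re-fits all selected atoms by least squares.

```python
import numpy as np

def omp(Phi, s, k, tol=1e-10):
    """Toy Orthogonal Matching Pursuit.

    Phi: (n, m) dictionary whose columns are atoms.
    s:   (n,) input signal.
    k:   maximum number of atoms to select.
    Returns a sparse coefficient vector x with Phi @ x close to s.
    """
    n, m = Phi.shape
    residual = s.copy()
    support = []
    x = np.zeros(m)
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonally project s onto the span of the selected atoms.
        coef, *_ = np.linalg.lstsq(Phi[:, support], s, rcond=None)
        residual = s - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x
```

On an orthonormal dictionary this recovers any k-sparse signal exactly; for overcomplete dictionaries, exact recovery holds under the sparsity and coherence conditions stated above.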
     (1) The first contribution concerns the exact recovery condition. The performance of many existing pursuit methods is limited by the dictionary's coherence because they rest on the l_1/l_0 equivalence, i.e., the necessary and sufficient condition under which the solution of (P_1): min ||x||_1 s.t. Φx = s coincides with that of (P_0): min ||x||_0 s.t. Φx = s; this condition is usually stated in terms of the dictionary's coherence parameter. To break this limitation, Chapter 2 presents an l_2/l_0 equivalence based on frame theory, i.e., the necessary and sufficient condition under which the solution of (P_2): min ||x||_2 s.t. Φx = s coincides with that of (P_0). Since the minimum l_2-norm solution depends only on the linear dependence of the atoms in Φ, the performance of methods based on the l_2/l_0 equivalence is free from the dictionary's coherence. If the active set Γ selected by SMP satisfies the l_2/l_0 equivalence condition, then SMP solves (P_0) exactly; we call such a Γ a pseudo-frame depending on the input signal.
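A small numerical illustration of the l_2/l_0 idea (using an arbitrary toy dictionary, not the thesis's construction): the minimum l_2-norm solution over the full dictionary is generally dense, but the least-squares (minimum l_2-norm) solution restricted to a well-chosen atom set Γ recovers the sparse representation.

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.standard_normal((4, 8))   # toy underdetermined 4x8 dictionary
x_sparse = np.zeros(8)
x_sparse[2] = 1.0                   # a 1-sparse representation
s = Phi @ x_sparse

# Minimum l2-norm solution over the full dictionary: generally dense.
x_l2 = np.linalg.pinv(Phi) @ s

# Least-squares solution restricted to the atom set Gamma = {2}:
# with the right atom set it coincides with the sparse representation.
Gamma = [2]
x_restricted = np.zeros(8)
sol, *_ = np.linalg.lstsq(Phi[:, Gamma], s, rcond=None)
x_restricted[Gamma] = sol
```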
     (2) The second contribution concerns the atom selection strategy (ASS) of existing greedy pursuit methods. Since the active set Γ is built according to the ASS, its quality directly determines the performance of a pursuit method. Chapter 3 first reviews the ASS of existing pure greedy methods and their inherent connections, and draws a new conclusion: under certain conditions, the Basis Pursuit problem can, through a series of transformations, ultimately be solved by matching pursuit. It then analyzes the ASS of existing SMP methods and points out that, although they depend only weakly on the dictionary's coherence, their threshold parameters are difficult to control.
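The threshold-based selection that makes stagewise methods sensitive to their parameter can be sketched as follows (a StOMP-style stage under assumed notation; the factor t is exactly the parameter the chapter argues is hard to tune):

```python
import numpy as np

def stagewise_select(Phi, residual, t=2.0):
    """One stagewise selection step: pick every atom whose correlation
    with the residual exceeds a threshold proportional to the formal
    noise level of the residual. The factor t is the hard-to-tune
    threshold parameter discussed in the text."""
    c = Phi.T @ residual                           # correlations
    n = Phi.shape[0]
    sigma = np.linalg.norm(residual) / np.sqrt(n)  # formal noise level
    return np.flatnonzero(np.abs(c) > t * sigma)
```

Too small a t admits spurious atoms; too large a t misses true ones, which is why a selection rule that avoids this parameter is attractive.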
     (3) To make the threshold parameter of the SMP easier to set, we propose a new method, Stagewise Ridge Pursuit (StRP), for signals with isolated singularities and multi-scale dictionaries in Chapter 4. The main ideas are as follows: first, extract the ridge of the signal in the wavelet transform domain; second, select the atoms located on the ridge into the active set Γ; last, compute the orthogonal projection of the input signal onto Γ. It can be proven that Γ is a pseudo-frame depending on the input signal and that the l_2/l_0 equivalence condition is satisfied, so the result of StRP is necessarily the l_0-minimal (sparsest) representation. Moreover, because its atom selection is based on the wavelet ridge algorithm, StRP is faster than existing pursuit methods; numerical experiments confirm these advantages.
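A crude stand-in for ridge-based atom selection (purely illustrative; the actual method follows wavelet-ridge maxima of an isolated singularity across scales): keep, at each scale, the shifts where the correlation magnitude is a strict local maximum.

```python
import numpy as np

def ridge_atoms(corr):
    """Toy ridge selection. corr[j][u] holds |<s, atom at scale j,
    shift u>|; atoms at strict local maxima in the shift direction are
    kept, a rough analogue of selecting atoms that lie on the ridge."""
    keep = []
    for j, row in enumerate(corr):
        for u in range(1, len(row) - 1):
            if row[u] > row[u - 1] and row[u] > row[u + 1]:
                keep.append((j, u))
    return keep
```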
     (4) To overcome the difficulty of setting the SMP threshold parameter for general signals and dictionaries, we propose another SMP method, Local-competition-based Stagewise Orthogonal Matching Pursuit (LStOMP), in Chapter 5. The core idea rests on the observation that the atoms having the largest inner products with the input signal, together with their neighbors in the spatial or frequency domain, may be equally important. Each iteration therefore forms two or more dictionary patterns from these atoms and selects the optimal pattern through a local competition mechanism, which prevents atoms with smaller inner products with the signal from being permanently excluded. Meanwhile, a backward greedy (atom elimination) algorithm is adopted to remove redundancy from the active set. Although one iteration of LStOMP is relatively expensive, its total number of iterations and overall computational cost are lower than those of many existing methods. Numerical experiments show that its sparsity preservation, coherence resistance, and convergence speed all surpass existing methods.
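The local competition between candidate dictionary patterns can be sketched as follows (a hypothetical helper, assuming each pattern is simply a list of atom indices): the winning pattern is the one whose orthogonal projection leaves the smallest residual.

```python
import numpy as np

def best_pattern(Phi, s, patterns):
    """Local competition between candidate atom patterns. Each pattern
    is a list of column indices of Phi; the winner is the pattern whose
    least-squares projection of s leaves the smallest residual."""
    best, best_err = None, np.inf
    for p in patterns:
        coef, *_ = np.linalg.lstsq(Phi[:, p], s, rcond=None)
        err = np.linalg.norm(s - Phi[:, p] @ coef)
        if err < best_err:
            best, best_err = p, err
    return best, best_err
```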
     (5) Since all SMP methods are based on Orthogonal Matching Pursuit (OMP), we concentrate on iterative methods for solving OMP in Chapter 6. First, we review the existing fast iterative solvers for OMP, extend and improve Conjugate Gradient Pursuit, and compare the computational resources each method requires per iteration. Second, we point out that these iterative OMP solvers cannot be used to solve the subproblems of the SMP when Γ is rank deficient, while the classical iterative methods for rank-deficient systems, such as SYMMLQ, MINRES, and LSQR, are generally not sparsity-preserving. Solving the SMP subproblems by fast iteration remains an open problem.
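The least-squares subproblem solved at each OMP iteration can be attacked with conjugate gradients on the normal equations, in the spirit of the methods reviewed in Chapter 6. The sketch below assumes the selected sub-dictionary has full column rank; the rank-deficient case is precisely the open problem noted above.

```python
import numpy as np

def cg_normal(A, b, iters=50, tol=1e-12):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    i.e., the least-squares fit over the currently selected atoms
    (assumes A has full column rank, so A^T A is positive definite)."""
    G, g = A.T @ A, A.T @ b
    x = np.zeros(A.shape[1])
    r = g - G @ x        # residual of the normal equations
    p = r.copy()         # search direction
    for _ in range(iters):
        Gp = G @ p
        alpha = (r @ r) / (p @ Gp)
        x += alpha * p
        r_new = r - alpha * Gp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x
```

In exact arithmetic CG converges in at most as many steps as the number of selected atoms, which is what makes it attractive inside a pursuit loop.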
