Research on Optimized Projection Matrices in Compressed Sensing
Abstract
Compressed sensing (CS) allows sparse signals to be recovered from samples taken at rates below the Shannon-Nyquist rate. In recent years, compressed sensing has attracted wide attention in the signal processing community and has found broad application in communications, image processing, blind source separation, and pattern recognition. Owing to its important theoretical value and broad application prospects, compressed sensing has remained one of the most active research directions in signal processing.
     In the compressed sensing model, the projection matrix and the dictionary both affect the recovery accuracy of sparse signals; their product is called the equivalent dictionary. Traditionally, a random matrix has been used as the projection matrix, because it is nearly incoherent with almost every orthogonal dictionary, which has been proved optimal in a probabilistic sense. In recent years, researchers have designed optimized projection matrices so that the resulting equivalent dictionary has low coherence, thereby improving the reconstruction performance of compressed sensing. In some newer sparse signal models, dictionaries with special structure impose new requirements on projection matrix design; for example, the high-dimensional sparse error correction model and the distributed compressed sensing (DCS) model require the optimized projection matrix to adapt to the special structure of the dictionary. This thesis studies the design of optimized projection matrices in compressed sensing; the main contributions are as follows.
     1. A particle swarm optimization (PSO)-based algorithm for designing an optimized projection matrix is proposed to improve the accuracy of CS-based high-dimensional sparse error correction. The CS-based cross-and-bouquet (CAB) model, proposed by Wright J. et al. to reduce the complexity of sparse error correction, uses a random Gaussian projection matrix. To improve the correction performance, this thesis takes the minimization of the averaged mutual coherence of the equivalent dictionary as the objective and constructs the optimized projection matrix with a particle swarm algorithm. One averaged coherence measure was proposed by Elad M. but is not suited to the high-dimensional case; another is proposed in this thesis specifically for high-dimensional settings. The proposed design algorithm requires no high-dimensional singular value decomposition and is applicable to the CAB model for high-dimensional sparse error correction. Finally, decoding experiments in high-dimensional settings confirm the effectiveness of the algorithm.
     2. For the general compressed sensing model, an optimized projection matrix design algorithm based on a low-rank Gram (autocorrelation) matrix model is proposed. Previous work suggested making the product of the projection matrix and the dictionary close to an equiangular tight frame (ETF). This idea is realized here by introducing a low-rank Gram matrix model, and an algorithm is derived from the low-rank matrix nearness problem. Experiments on sparse-representation-based image fusion and image denoising show that the proposed algorithm outperforms several existing methods.
     3. DCS theory is built on the assumption that multiple signals, forming a signal ensemble, admit a joint sparse representation (JSR). Three joint sparsity models have been proposed: JSM-1, JSM-2, and JSM-3. In the JSM-1 model, all signals in an ensemble share a common sparse component, and each signal has its own innovation sparse component. Compared with ordinary sparse representation, joint sparse representation has lower computational complexity when processing multiple signals. This thesis presents a dictionary learning algorithm for JSM-1, MODJSR (Method of Optimal Directions for Joint Sparse Representation), whose dictionary update step exploits the JSM-1 structure and needs only one eigenvalue decomposition. The algorithm is essentially the sparse-representation-based method of optimal directions (MOD) extended to joint sparse representation, and it has lower computational complexity than the widely used K-SVD (K-Singular Value Decomposition). To extract image details more effectively, JSR is further generalized to generalized joint sparse representation (GJSR): under JSR the common and innovation components are sparse in the same dictionary, whereas under GJSR each component is sparse in its own dictionary. MODJSR is extended accordingly to MODGJSR (Method of Optimal Directions for Generalized Joint Sparse Representation). For JSR-based image fusion, a new fusion rule is proposed; MODJSR/MODGJSR can perform dictionary learning, denoising, and fusion of noisy source images simultaneously. Image fusion experiments demonstrate the advantages of the proposed GJSR model, the MODJSR/MODGJSR dictionary learning algorithms, and the fusion rule.
     4. Based on GJSR, distributed compressed sensing is extended to generalized distributed compressed sensing (GDCS). Exploiting the structure of the equivalent dictionary in GDCS, the minimization of mutual coherence is cast as the minimization of a non-convex function, and an algorithm for designing the optimized projection matrix in GDCS is proposed for the first time. The algorithm is a gradient descent method whose stepsize is chosen by the Barzilai-Borwein (BB) rule, and it is further extended to the block-sparse model. Its effectiveness is verified on synthetic signals and on fusion experiments with real images.
     In summary, this thesis studies algorithms for designing optimized projection matrices in compressed sensing. First, an optimized projection matrix design algorithm is proposed for the high-dimensional sparse error correction model. Then, for the general compressed sensing model, a design algorithm based on a low-rank matrix model is proposed. In addition, a dictionary learning algorithm under the joint sparse representation model is presented, and joint sparse representation is generalized to generalized joint sparse representation. Finally, an optimized projection matrix design algorithm for generalized distributed compressed sensing is derived. Extensive simulation experiments verify the effectiveness of the proposed algorithms.
Compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than required by the classical Shannon-Nyquist theorem. In recent years, CS has attracted wide attention in the signal processing community and has found broad application in communications, image processing, blind source separation, and pattern recognition. Owing to its important theoretical value and broad applicability, CS remains one of the most active research areas in signal processing.
     In CS, the projection matrix and the dictionary influence the accuracy of sparse recovery. Random projections have traditionally been used because they exhibit small coherence with almost any orthogonal dictionary, which has been proved optimal in a probabilistic sense. Recent research shows that optimizing the projection matrix to decrease this coherence is possible and can improve the reconstruction performance of CS. For newer sparse models in which the dictionary has special structure, the optimization of the projection matrix must be developed accordingly; for example, the high-dimensional sparse error correcting model and distributed compressed sensing (DCS) require the optimized projection matrix to adapt to the special structure of the dictionary. This thesis studies the optimization of the projection matrix in CS; the main contributions are as follows.
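To make the coherence objective concrete, here is a minimal Python sketch (NumPy assumed; the matrix sizes and the random matrices are placeholders, not data from the thesis) that computes the mutual coherence of the equivalent dictionary D = ΦΨ as the largest absolute correlation between distinct normalized columns.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Mutual coherence of the equivalent dictionary D = Phi @ Psi:
    largest absolute inner product between distinct unit-norm columns."""
    D = Phi @ Psi
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # normalize columns
    G = np.abs(D.T @ D)            # column-correlation (Gram) matrix
    np.fill_diagonal(G, 0.0)       # ignore self-correlations
    return G.max()

# Illustrative sizes only: M x N projection, N x K dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 64))
Psi = rng.standard_normal((64, 128))
print(mutual_coherence(Phi, Psi))
```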
     1. To enhance the performance of CS-based high-dimensional sparse error correcting, a particle swarm optimization (PSO)-based algorithm for optimizing the projection matrix is proposed to overcome the computational difficulty of the high-dimensional case. The CS-based cross-and-bouquet (CAB) model was proposed by J. Wright et al. to reduce the complexity of sparse error correcting, using random Gaussian projections. To obtain better CS-based decoding for the CAB model, an algorithm is proposed for constructing a well-designed projection matrix that minimizes averaged measures of mutual coherence: one measure was proposed by M. Elad but is not suitable for high-dimensional cases, and the other is proposed in this thesis for high-dimensional settings. Working with the equivalent dictionary reduces the dimensionality, and high-dimensional singular value decomposition (SVD) is avoided when constructing the projection matrix, so the high-dimensional CAB model of sparse error correcting can be solved without computational difficulty. Finally, the validity of the proposed algorithm is illustrated by decoding experiments in high-dimensional cases.
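The thesis's PSO variant and its high-dimensional coherence measure are not specified in the abstract, so the following is only a generic sketch of how a plain global-best PSO could search over the entries of a projection matrix to reduce an averaged coherence of the equivalent dictionary. The objective `avg_coherence`, the swarm parameters, and all names are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def avg_coherence(phi_vec, m, n, Psi):
    """Mean absolute off-diagonal correlation of D = Phi @ Psi
    (a simple stand-in for an averaged mutual coherence objective)."""
    Phi = phi_vec.reshape(m, n)
    D = Phi @ Psi
    D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
    G = np.abs(D.T @ D)
    k = G.shape[0]
    return (G.sum() - k) / (k * (k - 1))

def pso_projection(Psi, m, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO over the entries of the m x n projection matrix."""
    rng = np.random.default_rng(seed)
    n = Psi.shape[0]
    dim = m * n
    X = rng.standard_normal((swarm, dim))       # particle positions
    V = np.zeros((swarm, dim))                  # particle velocities
    pbest = X.copy()
    pcost = np.array([avg_coherence(x, m, n, Psi) for x in X])
    g = pbest[pcost.argmin()].copy()            # global best position
    gcost = pcost.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        cost = np.array([avg_coherence(x, m, n, Psi) for x in X])
        better = cost < pcost
        pbest[better], pcost[better] = X[better], cost[better]
        if pcost.min() < gcost:
            gcost = pcost.min()
            g = pbest[pcost.argmin()].copy()
    return g.reshape(m, n), gcost
```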
     2. For the common CS model, a low-rank Gram matrix-based algorithm for optimizing the projection matrix is given. Previous works proposed the idea of bringing the product of the projection matrix and the dictionary close to an equiangular tight frame (ETF). Here, a low-rank Gram matrix model is introduced to realize this idea, and an algorithm is derived from the matrix nearness computation for the low-rank case. Simulations on sparse-representation-based image fusion and image denoising show that the proposed algorithm outperforms several existing projection matrix optimization algorithms.
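As a rough illustration of the ETF idea only (not the thesis's algorithm), the sketch below alternates between shrinking the off-diagonal Gram entries of the normalized equivalent dictionary toward the Welch bound and projecting the Gram matrix onto rank-m positive semidefinite matrices, then recovers the projection matrix by least squares. Every function name and parameter here is an assumption.

```python
import numpy as np

def optimize_projection_etf(Psi, m, iters=50, seed=0):
    """Alternating sketch: push the Gram of D = Phi @ Psi toward an
    ETF-like structure while keeping rank(Gram) <= m."""
    rng = np.random.default_rng(seed)
    n, k = Psi.shape
    Phi = rng.standard_normal((m, n))
    mu = np.sqrt((k - m) / (m * (k - 1)))        # Welch bound for k vectors in R^m
    for _ in range(iters):
        D = Phi @ Psi
        D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
        G = D.T @ D
        G = np.clip(G, -mu, mu)                  # shrink off-diagonal correlations
        np.fill_diagonal(G, 1.0)                 # keep unit diagonal
        w, V = np.linalg.eigh(G)                 # project onto rank-m PSD matrices
        idx = np.argsort(w)[::-1][:m]
        w_top = np.clip(w[idx], 0.0, None)
        Dm = np.diag(np.sqrt(w_top)) @ V[:, idx].T   # m x k square-root factor
        Phi = Dm @ np.linalg.pinv(Psi)           # recover Phi from D ~ Phi @ Psi
    return Phi
```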
     3. The DCS theory rests on the concept that multiple signals admit a joint sparse representation (JSR); such signals form a signal ensemble. Three joint sparsity models, JSM-1, JSM-2, and JSM-3, have been presented. In the JSM-1 model, all signals in an ensemble share a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity than ordinary sparse representation (SR) when dealing with multiple signals. This thesis proposes a novel dictionary learning method, MODJSR, whose dictionary updating procedure exploits the JSR structure and requires only one eigenvalue decomposition. MODJSR is essentially the SR-based method of optimal directions (MOD) extended to JSR, and it has lower complexity than the widely used K-SVD algorithm. To capture image details more efficiently, JSR is extended to generalized joint sparse representation (GJSR): JSR models the common component and the innovation component with one dictionary, whereas GJSR uses two dictionaries. MODJSR is extended to MODGJSR accordingly. For JSR-based image fusion, a new fusion rule is given, and MODJSR/MODGJSR can carry out dictionary learning, denoising, and fusion of noisy source images simultaneously. Image fusion experiments demonstrate the validity of the proposed GJSR model, MODJSR/MODGJSR, and the new fusion rule.
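MODJSR itself is not reproduced here, but its classical building block, the MOD least-squares dictionary update D = Y Xᵀ (X Xᵀ)⁻¹, can be sketched as follows (NumPy assumed; the ridge term and the column normalization are illustrative choices, and the shapes are placeholders).

```python
import numpy as np

def mod_update(Y, X, reg=1e-6):
    """MOD dictionary update: least-squares fit of D to Y ~= D @ X,
    i.e. D = Y X^T (X X^T + reg*I)^{-1}, followed by column normalization."""
    k = X.shape[0]
    D = Y @ X.T @ np.linalg.inv(X @ X.T + reg * np.eye(k))
    return D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)

# Shapes only: Y holds training signals column-wise, X their sparse codes.
rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 500))     # n x num_samples training data
X = rng.standard_normal((128, 500))    # k x num_samples coefficient matrix
D = mod_update(Y, X)                   # n x k dictionary with unit-norm columns
```

In an actual MOD (or MODJSR) iteration this update alternates with a sparse coding step that recomputes X for the current dictionary.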
     4. Based on GJSR, DCS is extended to generalized distributed compressed sensing (GDCS) in this thesis. Using the structure of the equivalent dictionary, the minimization of the mutual coherence is cast as the minimization of a non-convex function, and an algorithm to optimize the projection matrix for GDCS is proposed for the first time. The algorithm is a gradient method whose stepsize is chosen as the Barzilai-Borwein stepsize, and it is extended to the block-sparse model. The validity of the proposed algorithm is illustrated by experiments on synthesized signals and real-world image fusion.
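The GDCS coherence objective is not given in the abstract, so the sketch below only illustrates the Barzilai-Borwein stepsize rule α_k = sᵀs / sᵀy (with s = x_k − x_{k−1}, y = ∇f(x_k) − ∇f(x_{k−1})) on a generic smooth objective; the quadratic test problem and all names are assumptions.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=200, step0=1e-2):
    """Gradient descent with the Barzilai-Borwein stepsize."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - step0 * g_prev            # plain first step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        sy = float(s @ y)
        alpha = float(s @ s) / sy if abs(sy) > 1e-12 else step0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Example on f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, np.zeros(2))
print(x_star, np.linalg.solve(A, b))       # should agree closely
```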
     In summary, algorithms for designing optimized projection matrices in CS are studied in this thesis. First, an optimized projection matrix design algorithm is given for the CS-based high-dimensional sparse error correcting model. Second, a low-rank-model-based design algorithm is proposed for the common CS model. Moreover, a dictionary learning method for JSR is given, and JSR is extended to GJSR. Finally, an algorithm to optimize the projection matrix for GDCS is proposed. Numerical experiments illustrate the efficiency of the proposed algorithms.
References
[1]Bruckstein A.M., Donoho D.L., Elad M., From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images [J]. SIAM Review,2009,51(1):34-81.
    [2]Wright J., Error Correction for High-Dimensional Data via Convex Optimization [D]. New Jersey; New Brunswick,2009.
    [3]Olshausen B.A., Field D.J., Emergence of Simple-Cell Receptive Field Properties by Learning A Sparse Code for Natural Images [J]. Nature,1996,381(6583):560-607.
    [4]Boyd S., Vandenberghe L., Convex Optimization [M]. Cambridge University Press, 2004.
    [5]Candes E. J., Tao T., Decoding by Linear Programming [J]. IEEE Trans. Inf. Theory, 2005,51(12):4203-4215.
    [6]Fu Y., Zhang Q., Xie S., Compressed Sensing for Sparse Error Correcting Model [J]. Circuits, Systems, and Signal Process.,2013,32(5):2371-2383.
    [7]Donoho D. L., Elad M., Optimally Sparse Representation in General (nonorthogonal) Dictionaries via l1 Minimization [J]. P Natl Acad Sci USA,2003,100(5):2197-2202.
    [8]Aharon M., Elad M., Bruckstein A. M., On The Uniqueness of Overcomplete Dictionaries, and A Practical Way To Retrieve Them [J]. Linear Algebra and Its Applications,2006,416(1):48-67.
    [9]Fadili M. J., Starck J. L., Bobin J., et al. Image Decomposition and Separation Using Sparse Representations:An Overview [J]. Proceedings of the IEEE,2010, 98(6):983-994.
    [10]Juditsky A., Nemirovski A., On Verifiable Sufficient Conditions for Sparse Signal Recovery via l1 minimization [J]. Mathematical Programming,2011,127(1):57-88.
    [11]赵瑞珍,刘晓宇,ChingChung L I, et al.基于稀疏表示的小波去噪[J].中国科学:信息科学,2010,40(1):33-40.
    [12]Tsaig Y., Donoho D.L., Extensions of Compressed Sensing [J]. Signal Process.,2006, 86(3):549-571.
    [13]Candes E. J., Wakin M. B., An Introduction to Compressive Sampling [J]. IEEE Signal Process. Mag.,2008,25(2):21-30.
    [14]Donoho D. L., Compressed Sensing [J]. IEEE Trans. Inf. Theory,2006,52(4):1289-1306.
    [15]石光明,刘丹华,高大化等,压缩感知理论及其研究进展[J].电子学报,2009,37(05):1070-1081.
    [16]焦李成,杨淑媛,刘芳等,压缩感知回顾与展望[J].电子学报,2011,39(07):1651-1622.
    [17]Donoho D. L., For Most Large Underdetermined Systems of Linear Equations The Minimal l1-norm Solution Is Also The Sparsest Solution [J]. Communications on Pure and Applied Mathematics,2006,59(6):797-829.
    [18]Candes E.J., Romberg J., Tao T., Robust Uncertainty Principles:Exact Signal Reconstruction from Highly Incomplete Frequency Information [J]. IEEE Trans. Inf. Theory,2006,52(2):489-509.
    [19]Rauhut H., Schnass K., and Vandergheynst P., Compressed Sensing and Redundant Dictionaries [J]. IEEE Trans. Inf. Theory,2008,54(5):2210-2219.
    [20]Candes E. J., Eldar Y. C., Needell D., Compressed Sensing with Coherent and Redundant Dictionaries [J]. Applied and Computational Harmonic Analysis,2011,31(1):59-73.
    [21]Lustig M., Donoho D. L., Santos J. M., et al. Compressed Sensing MRI [J]. IEEE Signal Process. Mag.,2008,25(2):72-82.
    [22]Laska J. N., Wen Z., Yin W., et al. Trust, But Verify:Fast and Accurate Signal Recovery From 1-Bit Compressive Measurements [J]. IEEE Trans. Signal Process., 2011,59(11):5289-5301.
    [23]Lee H., Battle A., Raina R., et al. Efficient Sparse Coding Algorithms [C]. Advances in neural information processing systems,2006:801-808.
    [24]朱明,高文,郭立强.压缩感知理论在图像处理领域的应用[J].中国光学,2011,4(5):441-447.
    [25]Wright J., Ma Y., Mairal J., et al. Sparse Representation for Computer Vision and Pattern Recognition [J]. Proceedings of the IEEE,2010,98(6):1031-1044.
    [26]Zibulevsky M., Elad M., L1-L2 Optimization in Signal and Image Processing [J]. IEEE Signal Process. Mag.,2010,27(3):76-88.
    [27]Elad M. and Aharon M., Image Denoising via Sparse and Redundant Representations over Learned Dictionaries [J]. IEEE Trans. Image Process.,2006,15(12):3736-3745.
    [28]Agarwal S., Atwan A., Roth D., Learning To Detect Objects in Images via A Sparse, Part-based Representation, IEEE Trans. Pattern Anal. Mach. Intell.,2004,26(11): 1475-1490.
    [29]Wright J., Yang A. Y., Ganesh A., et al. Robust Face Recognition via Sparse Representation [J]. IEEE Trans. Pattern Anal. Mach. Intell.,2009,31(2):210-227.
    [30]He R., Zheng W., Hu B., et al. Two-Stage Nonnegative Sparse Representation for Large-Scale Face Recognition [J]. IEEE Trans. Neural Networks and Learning Systems,2013,24(1):35-46.
    [31]Yang A. Y., Sastry S. S., Ganesh A., et al. Fast l1-minimization Algorithms and An Application In Robust Face Recognition:A Review [C], Image Processing (ICIP), 17th IEEE International Conference on. IEEE,2010:1849-1852.
    [32]Zhang H., Nasrabadi N. M., Zhang Y., et al. Multi-View Automatic Target Recognition Using Joint Sparse Representation [J]. IEEE Trans. Aerospace and Electronic Systems,2012,48(3):1074-1082.
    [33]Zhang R., Wang C, Xiao B., A Strategy of Classification via Sparse Dictionary Learned by Non-negative K-SVD [C]. Proc. Int'l Conf. Computer Vision,2009.
    [34]Jiang Z., Lin Z., Davis L. S., Learning a Discriminative Dictionary for Sparse Coding via Label Consistent K-SVD [C]. Proc. Int'l Conf. Computer Vision and Pattern Recognition,2011.
    [35]Dikmen M. and Huang T.S., Robust Estimation of Foreground in Surveillance Videos by Sparse Error Estimation [C]. International Conference on Pattern Recognition, 2008.
    [36]Dikmen, M., Tsai S.F., Huang T.S., Base Selection in Estimating Sparse Foreground in Video [C]., IEEE International Conference on Image Processing,2009.
    [37]Zhao C., Wang X., and Cham W., Background Subtraction via Robust Dictionary Learning [J]. EURASIP J. Image and Video Processing,2011.
    [38]Xue G., Song L., Sun J., et al. Foreground Estimation based on Robust Linear Regression Model [C]. Proc. Int'l Conf. Image Processing,2011.
    [39]Cevher V., Sankaranarayanan A., Duarte M. F., et al. Compressive Sensing for Background Subtraction [C]. Computer Vision-ECCV 2008:155-168.
    [40]Yan J., Zhu M., Liu H., Liu Y., Visual Saliency Detection via Sparsity Pursuit [J]. IEEE Signal Process. Lett.,2010,17(8):739-742.
    [41]Li Y., Amari S., Cichocki A., et al. Underdetermined Blind Source Separation based on Sparse Representation [J]. IEEE Trans. Signal Process.,2006,54(2):423-437.
    [42]Zibulevsky M., Pearlmutter B. A., Blind Source Separation by Sparse Decomposition in A Signal Dictionary [J]. Neural Computation,2001,13 (4):863-882.
    [43]Casanovas A.L., Monaci G., Vandergheynst P., et al. Blind Audiovisual Source Separation Based on Sparse Redundant Representations [J]. IEEE Trans. Multimedia,2010,12(5):358-371.
    [44]O'Grady P. D., Pearlmutter B. A., Rickard S.T., Survey of Sparse and Non-Sparse Methods in Source Separation [J]. International Journal of Imaging Systems and Technology,2005,15(1):18-33.
    [45]Moudden Y., Bobin J., Hyperspectral BSS Using GMCA with Spatio-Spectral Sparsity Constraints [J]. IEEE Trans. Image Process.,2011,20(3):872-879.
    [46]Ma J., Le Dimet F. X., Deblurring From Highly Incomplete Measurements for Remote Sensing [J]. IEEE Trans. Geosci. Remote Sens.,2009,47(3):792-802.
    [47]Baraniuk R., Davenport M., DeVore R., et al. A Simple Proof of The Restricted Isometry Property for Random Matrices [J]. Constructive Approximation,2008,28(3):253-263.
    [48]Elad M., Optimized Projections for Compressed Sensing [J]. IEEE Trans. Signal Process.,2007,55(12):5695-5702.
    [49]Huang H. and Makur A., Optimized Measurement Matrix for Compressive Sensing [C]. SampTA 2011.
    [50]Duarte-Carvajalino J. M., Sapiro G., Learning to Sense Sparse Signals:Simultaneous Sensing Matrix and Sparsifying Dictionary Optimization [J]. IEEE Trans. Image Process.,2009,18(7):1395-1408.
    [51]Abolghasemi V., Jarchi D., and Sanei S., A Robust Approach for Optimization of The Measurement Matrix in Compressed Sensing [C]. Cognitive Information Processing (CIP),2010 2nd International Workshop on.,388-392,2010.
    [52]Xu J., Pi Y., and Cao Z., Optimized Projection Matrix for Compressive Sensing [J]. EURASIP Journal on Advances in Signal Process., doi:10.1155/2010/560349,2010.
    [53]Chen W., Rodrigues M. R. D., and Wassell I. J., On The Design Of Optimized Projections For Sensing Sparse Signals in Overcomplete Dictionaries [C]. in Proc. IEEE ICASSP Conf.,2012,3:1861-1864.
    [54]Abolghasemi V., Ferdowsi S., and Sanei S., A Gradient-based Alternating Minimization Approach for Optimization of The Measurement Matrix in Compressive Sensing [J]. Signal Process.,2012,92:999-1009.
    [55]Zelnik-Manor L., Rosenblum K., and Eldar Y. C., Sensing Matrix Optimization for Block-Sparse Decoding [J]. IEEE Trans. Signal Process.,2011,59(9):4300-4312.
    [56]Chen W., Rodrigues M. R. D., and Wassell I. J., Projection Design for Statistical Compressive Sensing:A Tight Frame Based Approach [J]. IEEE Trans. Signal Process.,2013,61(8):2016-2029.
    [57]Li G., Zhu Z., Yang D., et al. On Projection Matrix Optimization for Compressive Sensing Systems [J]. IEEE Trans. Signal Process.,2013,61(11):2887-2898.
    [58]Haupt J., Bajwa W., Raz G., et al. Toeplitz Compressed Sensing Matrices with Applications to Sparse Channel Estimation [J]. IEEE Trans. Inf. Theory,2010,56(11):5862-5875.
    [59]Wang Z., Arce G. R., Paredes J. L., Colored Random Projections for Compressed Sensing [C]. in Proc. IEEE ICASSP Conf.,2007,3:873-876.
    [60]Yin W., Morgan S., Yang J., et al. Practical Compressive Sensing with Toeplitz and Circulant Matrices [C]. in Proc. of Visual Communications and Image Processing (VCIP),2010.
    [61]Xu Y., Yin W., and Osher S., Learning Circulant Sensing Kernels [R]. Rice CAAM technical report 12-05,2012.
    [62]Candes E.J., The Restricted Isometry Property and Its Implications for Compressed Sensing [J]. Comptes Rendus Mathematique,2008,346(9):589-592.
    [63]Calderbank R., Howard S., Jafarpour S., Construction of A Large Class of Deterministic Sensing Matrices That Satisfy A Statistical Isometry Property [J]. IEEE J. Sel. Topics Signal Process.,2010,4(2):358-374.
    [64]Gleichman S., Eldar Y. C., Blind Compressed Sensing [J]. IEEE Trans. Inf. Theory, 2011,57(10):6958-6975.
    [65]Zare A., Gader P., Gurumoorthy K. S., Directly Measuring Material Proportions Using Hyperspectral Compressive Sensing [J]. IEEE Geosci. Remote Sens. Lett., 2012,9(3):323-327.
    [66]肖小潮,郑宝玉,王臣昊.基于最优观测矩阵的压缩信道感知[J].信号处理,2012,28(1):67-72.
    [67]Hurley N., Rickard S., A Visual Tour of Frames and Bases [J]. Submitted to IEEE Signal Process. Mag.,2009.
    [68]Tropp J.A., Dhillon I.S., Heath R.W., et al. Designing Structured Tight Frames via Alternating Projection [J]. IEEE Trans. Inf. Theory,2005,51(1):188-209
    [69]Sustik M. A., Tropp J. A., Dhillon I. S., et al. On The Existence of Equiangular Tight Frames [J]. Linear Algebra and Its Applications,2007,426(2-3):619-635.
    [70]Strohmer T., Heath R. W., Grassmannian Frames with Applications to Coding and Communication [J]. Applied and Computational Harmonic Analysis,2003,14(3): 257-275.
    [71]Higham N., Computing The Nearest Correlation Matrix-A Problem From Finance [J]. IMA J. Numer. Anal.,2002,22:329-343.
    [72]Bi S., Han L. and Pan S., Approximation of Rank Function and Its Application to The Nearest Low-Rank Correlation Matrix [J]. Journal of Global Optimization, Nov.2012:1-25.
    [73]Li Q. and Qi H., A Sequential Semismooth Newton Method for The Nearest Low-Rank Correlation Matrix Problem [R]. technical report, University of Southampton, Sep.2009.
    [74]Qi H. and Sun D., A Quadratically Convergent Newton Method for Computing The Nearest Correlation Matrix [J]. SIAM Journal on Matrix Analysis and Applications,2006,28:360-385.
    [75]Wen Z. and Yin W., A Feasible Method For Optimization With Orthogonality Constraints [J]. Mathematical Programming,2013:1-38.
    [76]Zhang Y., Recent Advances in Alternating Direction Methods:Practice and Theory [R]. Tutorial,2010.
    [77]Mu Y., Dong J., Yuan X., et al. Accelerated Low-Rank Visual Recovery by Random Projection [C]. Proc. Int'l Conf. Computer Vision and Pattern Recognition,2011.
    [78]Liu Y., Jiao L. and Shang F., An Efficient Matrix Factorization Based Low-Rank Representation For Subspace Clustering [J]. Pattern Recognition,2013,46(1):284-292.
    [79]Liu R., Lin Z., Wei S., et al. Solving Principal Component Pursuit in Linear Time via l1 Filtering [J]. ArXiv e-prints,2011, http://arxiv.org/pdf/1108.5359v4.pdf
    [80]Kennedy J., Eberhart R., Particle Swarm Optimization [C]. in Proc. IEEE Int. Conf. Neural Networks, Perth, Australia,1995:1942-1948.
    [81]Shi Y., Eberhart R., A Modified Particle Swarm Optimizer [C]. in Proc. IEEE Int. Conf. Evolutionary Computation, Anchorage, AK,1998:69-73.
    [82]Naka S., Genji T., Yura T., et al. A Hybrid Particle Swarm Optimization for Distribution State Estimation [J]. IEEE Trans. Power Systems,2003,18(1):60-68.
    [83]Wright J., Ma Y., Dense Error Correction via l1 Minimization [J]. IEEE Trans. Inf. Theory,2010,56(7):3540-3560.
    [84]Donoho D., Elad M., and Temlyakov V., Stable Recovery of Sparse Overcomplete Representations in The Presence of Noise [J]. IEEE Trans. Inf. Theory,2006,52(1): 6-18.
    [85]Gribonval R. and Nielsen M., Sparse Representations in Unions of Bases [J]. IEEE Trans. Inf. Theory,2003,49(12):3320-3325.
    [86]Mallat S. G., Zhang Z., Matching Pursuits with Time-Frequency Dictionaries [J]. IEEE Trans. Signal Process.,1993,41(12):3397-3415.
    [87]Tropp J. A., Gilbert A. C., Signal Recovery from Random Measurements via Orthogonal Matching Pursuit [J]. IEEE Trans. Inf. Theory,2007,53(12):4655-4666.
    [88]Dai W. and Milenkovic O., Subspace Pursuit for Compressive Sensing Signal Reconstruction [J]. IEEE Trans. Inf. Theory,2009,55(5):2230-2249.
    [89]Huang H., Makur, A., Backtracking-Based Matching Pursuit Method for Sparse Signal Reconstruction [J]. IEEE Signal Process. Lett.,2011,18(7):391-394.
    [90]Chen S.B., Donoho D. L., Saunders M. A. Atomic Decomposition by Basis Pursuit [J]. SIAM Review,2001,43(1):129-159.
    [91]Tibshirani R., Regression Shrinkage and Selection via The Lasso:A Retrospective [J]. J R Stat Soc Ser B-Stat Methodol,2011,73(2):73-82.
    [92]Figueiredo M. A. T., Nowak, R.D., Wright S.J., Gradient Projection for Sparse Reconstruction:Application to Compressed Sensing and Other Inverse Problems [J]. IEEE J. Sel. Topics Signal Process.,2007,1(4):586-597.
    [93]Hale E. T., Yin W., et al. Fixed-Point Continuation for l1-minimization:Methodology and Convergence [J]. SIAM Journal on Optimization,2008,19(3):1107-1130.
    [94]Yin W., Osher S., Goldfarb D., et al. Bregman Iterative Algorithms for l1-Minimization with Applications to Compressed Sensing [J]. SIAM Journal on Imaging Science,2008,1(1):143-168.
    [95]Beck A., Teboulle M., A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems [J]. SIAM Journal on Imaging Science,2009,2(1):183-202.
    [96]Becker S., Bobin J., Candes E. J., NESTA:A Fast and Accurate First-Order Method for Sparse Recovery [J]. SIAM Journal on Imaging Science,2011,4(1):1-39.
    [97]Yang J., Zhang Y., Alternating Direction Algorithms For l1-Problems In Compressive Sensing [J]. SIAM Journal on Scientific Computing,2011,33(1):250-278.
    [98]Gorodnitsky I.F., Rao B.D., Sparse Signal Reconstruction from Limited Data Using FOCUSS:A Re-weighted Minimum Norm Algorithm [J]. IEEE Trans. Signal Process.,1997,45(3):600-616.
    [99]He Z., Cichocki A., Zdunek R., et al. Improved FOCUSS Method with Conjugate Gradient Iterations [J]. IEEE Trans. Signal Process.,2009,57(1):399-404.
    [100]Mohimani H., Babaie-Zadeh M., and Jutten C., A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed l0 Norm [J]. IEEE Trans. Signal Process., 2009,57(1):289-301.
    [101]Hyder M. M. and Mahata K., An Improved Smoothed l0 Approximation Algorithm for Sparse Representation [J]. IEEE Trans. Signal Process.,2010,58(4):2194-2205.
    [102]Xu Z., Zhang H., Yao W., et al. L1/2 Regularization [J]. SCIENCE CHINA Information Sciences,2010,53(6):1159-1169.
    [103]Hadi Z., Massoud B.Z., and Jutten C., An Iterative Bayesian Algorithm for Sparse Component Analysis in Presence of Noise [J]. IEEE Trans. Signal Process.,2009, 57(11):4378-4390.
    [104]Baron, D., Sarvotham, S., Baraniuk, R.G., Bayesian Compressive Sensing via Belief Propagation [J]. IEEE Trans. Signal Process.,2010,58(1):269-280.
    [105]Mallat S., A Wavelet Tour of Signal Processing:The Sparse Way,3rd Ed. [M]. Academic Press, NY,2008.
    [106]Yang J., Wright J., Huang T., et al. Image Super-Resolution via Sparse Representation [J]. IEEE Trans. Image Process.,2010,19(11):2861-2873.
    [107]Divekar A., Ersoy O., Image Fusion by Compressive Sensing [C].17th International Conference on Geoinformatics. IEEE,2009:1-6.
    [108]Zhu X., Wang X., Bamler R., Compressive Sensing for Image Fusion-with Application to Pan-Sharpening [C]. Geoscience and Remote Sensing Symposium (IGARSS), IEEE,2011:2793-2796.
    [109]Li S. and Yang B., A New Pan-Sharpening Method Using A Compressed Sensing Technique [J]. IEEE Trans. Geosci. Remote Sens.,2011,49(2):738-746.
    [110]Jiang C., Zhang H., Shen H., et al. A Practical Compressed Sensing-based Pan-Sharpening Method [J]. IEEE Geosci. Remote Sens. Lett.,2012,9(4):629-633.
    [111]Li Z., Leung H., Fusion of Multispectral and Panchromatic Images Using a Restoration-Based Method [J]. IEEE Trans. Geosci. Remote Sens.,2009,47(5): 1482-1491.
    [112]Guha T., Ward R. K., Learning Sparse Representations for Human Action Recognition [J]. IEEE Trans. Pattern Anal. Mach. Intell.,2012,34(8):1576-1588.
    [113]李树涛,魏丹.压缩传感综述[J].自动化学报,2009,35(11):1369-1377.
    [114]戴琼海,付长军,季向阳.压缩感知研究[J].计算机学报,2011,34(3):425-434.
    [115]蔡泽民,赖剑煌.一种基于超完备字典学习的图像去噪方法[J].电子学报,2009,37(2):347-350.
    [116]Lesage S., Gribonval R., Bimbot F., et al. Learning Unions of Orthonormal Bases with Thresholded Singular Value Decomposition [C]. Int'l Conf. Audio, Speech and Signal Process.,2005.
    [117]Dremeau A., Herzet C, An EM-algorithm Approach for The Design of Orthonormal Bases Adapted to Sparse Representations [C]. Int'l Conf. Acoustics Speech and Signal Process.,2010.
    [118]Lewicki M.S. and Sejnowski T. J., Learning Overcomplete Representations [J]. Neural Computation,2000,12(2):337-365.
    [119]Engan, K., Aase, S.O. and Hakon Husoy J., Method of Optimal Directions for Frame Design [C]. Int'l Conf. Audio, Speech and Signal Process.,1999.
    [120]Aharon M., Elad M. and Bruckstein A., K-SVD:An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation [J]. IEEE Trans. Signal Process.,2006,54(11):4311-4322.
    [121]Yaghoobi M., Blumensath T. and Davies M.E., Dictionary Learning for Sparse Approximations with The Majorization Method [J]. IEEE Trans. Signal Process., 2009,57(6):2178-2191.
    [122]Skretting K., Engan K., Recursive Least Squares Dictionary Learning Algorithm [J]. IEEE Trans. Signal Process.,2010,58(4):2121-2130.
    [123]Dobigeon N., Tourneret J.Y., Bayesian Orthogonal Component Analysis for Sparse Representation [J]. IEEE Trans. Signal Process.,2010,58(5):2675-2685.
    [124]Gowreesunker B. V., Tewfik A. H., Learning Sparse Representation using Iterative Subspace Identification [J]. IEEE Trans. Signal Process.,2010,58(6):3055-3065.
    [125]Yaghoobi M., Daudet, L., Davies, M.E., Parametric Dictionary Design for Sparse Coding [J]. IEEE Trans. Signal Process.,2009,57(12):4800-4810.
    [126]Mairal J., Bach F., Ponce J., et al. Online Learning for Matrix Factorization and Sparse Coding [J]. The Journal of Machine Learning,2010:19-60.
    [127]Smith L.N., Elad M., Improving Dictionary Learning:Multiple Dictionary Updates and Coefficient Reuse [J]. IEEE Signal Process. Lett.,2013,20(1):79-82.
    [128]Landweber L., An Iterative Formula for Fredholm Integral Equations of The First Kind [J]. Amer. J. Math.,1951,73(2):615-624.
    [129]Anderson E., Bai Z., Bischof C., et al. LAPACK User's Guide, 3rd ed. Philadelphia, PA:SIAM,1999. [Online]. Available:http://www.netlib.org/lapack/lug/.
    [130]Baron D., Wakin M. B., Duarte M. F., et al. Distributed Compressed Sensing, 2006 [Online]. Available:http://dsp.rice.edu/publications/distributed-compressed-sensing.
    [131]Yang A. Y., Gastpar M., Bajcsy R., et al. Distributed Sensor Perception via Sparse Representation [J]. Proceedings of the IEEE,2010,98(6):1077-1088.
    [132]Yang A. Y., Maji S., Christoudias C. M., et al. Multiple-View Object Recognition in Band-Limited Distributed Camera Networks [C]. Distributed Smart Cameras. Third ACM/IEEE International Conference on.,2009:1-8.
    [133]Huang J., Huang X., Metaxas D., Learning with Dynamic Group Sparsity [C]. IEEE 12th International Conference on Computer Vision,2009:64-71.
    [134]Huang J., Zhang T., The Benefit of Group Sparsity [J]. The Annals of Statistics, 2010,38(4):1978-2004.
    [135]Zou J., Fu Y., Xie S., A Block Fixed Point Continuation Algorithm for Block-Sparse Reconstruction [J]. IEEE Signal Process. Lett.,2012,19(6):364-367.
    [136]Eldar Y.C., Kuppinger P., and Bolcskei H., Block-Sparse Signals:Uncertainty Relations and Efficient Recovery [J]. IEEE Trans. Signal Process.,2010,58(6):3042-3054.
    [137]Luo X., Zhang J., Yang J., et al. Image Fusion in Compressed Sensing [C]. Image Processing (ICIP),16th IEEE International Conference on.,2009:2205-2208.
    [138]Li H., Manjunath B. S., and Mitra S.K., Multisensor Image Fusion Using the Wavelet Transform [J]. Graph. Models Image Process.,1995,57(3):235-245.
    [139]Rockinger O., Image Sequence Fusion Using A Shift-Invariant Wavelet Transform [C]. Proc. Int'l Conf. Image Process.,1997.
    [140]Yang B., Li S., Multifocus Image Fusion and Restoration With Sparse Representation [J]. IEEE Trans. Instrumentation and Measurement,2010,59(4):884-891.
    [141]Yu N., Qiu T., Bi F., et al. Image Features Extraction and Fusion Based On Joint Sparse Representation [J]. IEEE J. Sel. Topics Signal Process.,2011,5(5):1074-1082.
    [142]Yin H. and Li S., Multimodal Image Fusion with Joint Sparsity Model [J]. Optical Engineering,2011,50(6):067007.
    [143]Zhang Q., Fu Y., Li H., et al. Dictionary Learning Method For Joint Sparse Representation-Based Image Fusion [J]. Optical Engineering,2013,52(5):057006.
    [144]Piella G. and Heijmans H., A New Quality Metric for Image Fusion [C]. Proc. Int'l Conf. Image Processing,2003.
    [145]Xydeas C. and Petrovic V., Objective Image Fusion Performance Measure [J]. Electron. Lett.,2000,36(4):308-309.
    [146]Liu Z., Blasch E., Xue Z., et al. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision:A Comparative Study [J]. IEEE Trans. Pattern Anal. Mach. Intell.,2012,34(1):94-109.
    [147]孙即祥,图像处理[M],科学出版社,2009.
    [148]杨斌,像素级多传感器图像融合新方法研究[D],湖南大学博士学位论文,2010.
    [149]Wang Z., Bovik A C. A Universal Image Quality Index [J]. IEEE Signal Process. Lett.,2002,9(3):81-84.
    [150]http://www.ux.uis.no/-karlsk/dle/USCimages_bmp.zip
    [151]The Image Fusion Server [DB]. Available:http://www.imagefusion.org/.
    [152]SparseLab 2.1-Core [CP]. Available:http://sparselab.stanford.edu/.
    [153]Elad M., Personal Page [EB]. Available:http://www.cs.technion.ac.il/~elad/software/.
    [154]Candes E.J., Romberg J., L1-MAGIC [CP]. Available:http://users.ece.gatech.edu/~justin/llmagic/.
    [155]Horn R. A. and Johnson C. R., Matrix Analysis [M]. Cambridge, U.K.:Cambridge Univ. Press,1985.
    [156]Nocedal J. and Wright S.J., Numerical Optimization [M]. Springer, New York,2006.
