Alternating Linearization Algorithms for Several Classes of Nonsmooth Optimization Problems
Abstract
In recent years, many researchers in nonsmooth optimization have focused on designing or improving algorithms by exploiting the specific structure of the objective function. The alternating linearization algorithm is an approximate proximal point method designed, in this spirit, for objectives that are the sum of two functions. Many practical problems can be cast as minimizing the sum of two functions, for example in optimal control, image processing, signal recovery, web recommendation systems, computer vision, system identification, and bioinformatics and statistics. Research on algorithms for this class of problems therefore has both theoretical significance and practical value.
     At present, the main nonsmooth optimization methods for minimizing the sum of two functions are operator splitting methods and alternating direction methods. These algorithms are well suited to the case in which the proximal operator of the nonsmooth term of the objective has a closed form. In particular, alternating direction methods have become fast and efficient algorithms for practical problems such as image processing and compressed sensing. However, when the proximal operator of the nonsmooth term has no closed form, these algorithms are difficult to implement in practice. The alternating linearization algorithms studied in this thesis also cover nonsmooth problems of this form. The main results of the thesis can be summarized as follows:
     1. Chapter 2 studies an inexact alternating linearization algorithm for minimizing the sum of two nonsmooth convex functions. Each iteration of the alternating linearization algorithm requires solving two strongly convex subproblems exactly; when exact solutions cannot be obtained, the algorithm cannot proceed. We use approximate minimizers instead and prove that, under suitable conditions, the iterates produced by the alternating approximate solves converge to an optimal solution of the original problem.
     2. Chapter 3 considers an alternating linearization algorithm for a class of bilevel convex programs. Using an exact penalty function, the bilevel program is converted into an ordinary single-level optimization problem whose objective is the sum of two convex functions, to which the alternating linearization algorithm applies directly. Convergence analysis and numerical experiments verify the effectiveness of the algorithm.
     3. Chapter 4 develops an alternating linearization algorithm for minimizing a sum of convex functions with smooth structure. Exploiting the specific structure of the problem, we improve the algorithm by constructing new subproblem models based on the basic idea of alternating linearization, and we carry out convergence analysis and numerical experiments to verify its implementability.
     4. Chapter 5 addresses minimizing the sum of two nonconvex nonsmooth functions. Based on the properties and structure of the objective, the original problem is split into two subproblems, each obtained by linearizing one of the convexified functions, and these are solved alternately. We show that the algorithm converges to a stationary point of the original problem and report preliminary numerical experiments.
In recent years, many researchers in nonsmooth optimization have focused on exploiting the structure of the objective function to design or improve numerical algorithms. The alternating linearization algorithm is based on this idea: it is designed for structured problems in which the objective is the sum of two functions, and it can be viewed as a class of approximate proximal point method. Many practical problems, arising in optimal control, image processing, signal recovery, web recommendation systems, computer vision, system identification, bioinformatics, statistics, and elsewhere, can be formulated as minimizing the sum of two functions. Research on algorithms for such problems therefore has both theoretical significance and practical value.
     Operator splitting methods and alternating direction methods are the principal approaches for minimizing the sum of two functions. They are well suited to the case in which the proximal operators of the component functions are available in closed form; in particular, alternating direction methods are fast and efficient in image processing, signal recovery, and related applications. However, these methods may be impractical when a proximal operator has no closed form. The subject of this dissertation is alternating linearization algorithms for such nonsmooth problems, including the case in which a component function admits no analytic proximal operator. The main results may be summarized as follows:
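     As a concrete illustration of what "proximal operator in closed form" means (an illustrative sketch, not taken from the thesis): for g(x) = t·‖x‖₁ the proximal operator is the componentwise soft-thresholding map, which can be verified numerically against a brute-force minimization of the defining objective.

```python
import numpy as np

def prox_l1(v, t):
    """Closed-form proximal operator of t*||.||_1 (soft-thresholding):
    prox(v) = argmin_x  t*||x||_1 + 0.5*||x - v||^2, computed componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Sanity check against a brute-force 1-D grid search of the prox objective.
v, t = 1.7, 0.6
grid = np.linspace(-3.0, 3.0, 600001)
brute = grid[np.argmin(t * np.abs(grid) + 0.5 * (grid - v) ** 2)]
assert abs(prox_l1(v, t) - brute) < 1e-4
```

When no such closed form exists, each prox evaluation becomes an optimization problem in its own right, which is the situation the alternating linearization algorithms of this thesis address.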
     1. Chapter 2 presents an inexact alternating linearization algorithm for minimizing the sum of two convex functions. Each iteration of the alternating linearization algorithm involves solving two strongly convex subproblems, and the algorithm fails when their exact solutions cannot be obtained. We use approximate solutions instead and prove that the inexact algorithm converges to a solution of the original problem.
     2. In Chapter 3 we construct an alternating linearization algorithm for nonsmooth convex optimization to solve a class of bilevel programming problems. With the help of an exact penalty function, the bilevel program is first converted into an ordinary single-level optimization problem whose objective is the sum of two convex functions. Convergence analysis and numerical experiments validate the efficiency of the algorithm.
     3. Chapter 4 considers a nonsmooth optimization problem with smooth substructure whose objective is the sum of a convex function and a continuously differentiable function. We design a new algorithm for this class of problems, provide a convergence analysis, and carry out numerical tests to verify its implementation.
     4. Chapter 5 considers minimizing the sum of two nonconvex nonsmooth functions. Based on the structure of the objective, the original problem is decomposed into the alternate solution of two models, each characterized by the linearization of one of the convexified functions. We discuss convergence of the method to a stationary point of the original problem and report numerical results from its implementation.
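     The alternating linearization scheme described above can be sketched on a model problem. The following is a minimal illustration under stated assumptions, not the thesis's algorithm: for the LASSO objective f(x) + g(x) with f(x) = ½‖Ax − b‖² and g(x) = λ‖x‖₁, each sweep first keeps g and linearizes f (a soft-thresholding step), then keeps f and linearizes g via the subgradient implied by the first step (a linear system). The function names and the fixed proximal parameter mu are choices of this sketch; in general mu must be tuned to the problem.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alt_lin_lasso(A, b, lam, mu=1.0, iters=300):
    """Alternating linearization sketch for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    n = A.shape[1]
    H = A.T @ A + np.eye(n) / mu          # system matrix of the smooth subproblem
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term f at x
        # Subproblem 1: linearize f, keep g  ->  a prox (soft-thresholding) step.
        y = soft_threshold(x - mu * grad, mu * lam)
        # Subgradient of g at y implied by the optimality condition of subproblem 1.
        gamma = (x - y) / mu - grad
        # Subproblem 2: linearize g via gamma, keep f  ->  a linear system.
        x = np.linalg.solve(H, A.T @ b - gamma + y / mu)
    return x
```

On the toy instance A = I, b = (3, 0.5, −2), λ = 1, the scheme reaches the exact minimizer soft_threshold(b, λ) = (2, 0, −1) and stays there; for general A, convergence requires mu to be compatible with the Lipschitz constant of ∇f.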
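     The exact penalty idea used for the bilevel programs of Chapter 3 can also be illustrated on a one-dimensional toy problem (a hypothetical example constructed for illustration, not the thesis's construction): minimize f(x) = (x − 2)² over the solution set of min g, where g(x) = max(|x| − 1, 0) attains its minimum 0 on the whole interval [−1, 1]. Penalizing g with a sufficiently large constant reduces the bilevel problem to a single-level sum of two convex functions with the same solution.

```python
import numpy as np

# Bilevel toy problem: minimize f over argmin g, where argmin g = [-1, 1].
f = lambda x: (x - 2.0) ** 2
g = lambda x: np.maximum(np.abs(x) - 1.0, 0.0)   # min g = 0, attained on [-1, 1]
c = 10.0                                          # any c > 2 makes the penalty exact here

# Exact penalty reformulation: minimize F(x) = f(x) + c * (g(x) - min g).
grid = np.linspace(-3.0, 3.0, 600001)
x_star = grid[np.argmin(f(grid) + c * g(grid))]

# The bilevel solution is the point of [-1, 1] closest to 2, namely x = 1.
assert abs(x_star - 1.0) < 1e-4
```

The penalized objective is the sum of the smooth convex f and the nonsmooth convex c·g, exactly the structure to which an alternating linearization algorithm applies.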
