Research on Estimation of Distribution Algorithms and Their Application to Dynamic Optimization Problems
Abstract
Estimation of distribution algorithms (EDAs) combine statistical learning theory with evolutionary algorithms to form a new evolutionary paradigm and are a research focus in evolutionary computation. Although EDAs have made some progress since they were proposed, many issues still call for deeper study, such as theoretical analysis, algorithm design, and applications. Taking EDAs as the basis, and according to the types and characteristics of these algorithms, this dissertation studies two aspects: improving algorithm performance and applying EDAs to dynamic optimization problems. The main contributions are as follows:
     1. The convergence of EDAs is studied. First, a finite-population EDA model is built by introducing an error term into the expected distribution; then the convergence of EDAs is proved under three commonly used selection strategies. The results show that, under the finite-population model and within the stated error bound, EDAs are globally convergent.
     2. The multivariate-dependency EDA, namely the Bayesian optimization algorithm (BOA), is improved in three respects. First, to reduce BOA's heavy computational cost, a BOA combined with local structure learning is proposed and its complexity is analyzed. Second, the mining and use of prior knowledge in general optimization problems is discussed: the information provided by the previous generation's population is incorporated as prior knowledge into the learning of the current generation's Bayesian network, which improves the reliability of the learned network and thus the performance of the algorithm. Finally, the diversity of BOA is discussed; a population-diversity function is designed, and a mutation operator is introduced through this function to maintain population diversity and keep the algorithm from being trapped in local optima. Simulation experiments demonstrate the effectiveness of all the above algorithms.
     3. An improved population-based incremental learning (PBIL) algorithm is proposed and used to solve a special class of dynamic optimization problems. Starting from how the dynamic environment varies over time, a class of dynamic optimization problems whose change times follow a given statistical distribution is characterized. An adaptive PBIL algorithm is then proposed for this class: according to the probability of the change-time random variable, it adaptively adjusts the probability model of the current population, increasing population diversity and adapting quickly to environmental changes. Comparative simulations validate the designed algorithm.
     4. For dynamic discrete optimization problems, a multi-population univariate marginal distribution algorithm (MUMDA) is proposed. Multiple probability models (corresponding to multiple populations) divide the search space into several parts; by exploring and exploiting different regions and migrating good solutions, the algorithm enlarges the search space, increases population diversity, and tracks the moving optimum. The convergence of the proposed algorithm is proved. Comparative analysis shows that it tracks the optimal solution quickly.
     5. For dynamic single-objective optimization problems, a self-organization strategy is proposed that uses local information of the current environment and historical information of the optimal solutions to adaptively increase population diversity. Combining this strategy with the univariate marginal distribution algorithm (UMDA) yields a new self-organizing UMDA (SOUMDA), which is tested on a dynamic sphere function.
     6. For dynamic multimodal optimization problems, a new multi-population and diffusion UMDA (MDUMDA) is proposed. The multi-population method searches for multiple optima in parallel, while the diffusion model increases population diversity in a guided way, making the neighborhood set of the previous environment's optimum move gradually away from that optimum and enlarging the search space, so that the algorithm adapts quickly to environmental changes. The algorithm is tested on the moving peaks benchmark (MPB), a standard dynamic test problem, and the simulation results demonstrate its effectiveness.
     7. For dynamic multi-objective optimization problems, a prediction-model-based regularity estimation of distribution algorithm (PREDA) is proposed. In the algorithm design, the Pareto-optimal set is first described by several cluster centers together with reference points, which gives a storage scheme for historical data in dynamic multi-objective problems. Next, a prediction point set is generated by inertia prediction and Gaussian mutation and merged into the current population, so that after an environmental change the algorithm increases population diversity in a guided way, adapts to the environment better, and tracks the optimum quickly. Simulation experiments on standard dynamic test problems and comparisons with related algorithms show that the designed algorithm adapts quickly to environmental changes and tracks the Pareto-optimal solutions.
Estimation of distribution algorithms (EDAs) are a new class of evolutionary algorithms that combine statistical learning theory with evolutionary schemes. In recent years, EDAs have attracted more and more attention. Although EDAs have made some progress since they were first proposed, many problems remain open, such as theoretical analysis, algorithm design, and applications. Based on the characteristics and types of EDAs, this dissertation focuses on improving the performance of EDAs and on applying them to dynamic optimization problems. The main contributions are as follows:
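The scheme described above, i.e. replacing crossover and mutation with a learn-and-sample loop over a probability model, can be illustrated with a minimal univariate (UMDA-style) sketch. All parameter values and names here are illustrative, not the dissertation's exact settings:

```python
import random

def eda(fitness, n_bits, pop_size=100, n_select=50, generations=50, seed=0):
    """Minimal univariate EDA sketch: sample, select, re-estimate marginals."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # start from the uniform probability model
    best = None
    for _ in range(generations):
        # Sample a population from the current probability model.
        pop = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop_size)]
        # Truncation selection: keep the fittest individuals.
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]
        if best is None or fitness(selected[0]) > fitness(best):
            best = selected[0]
        # Statistical learning step: re-estimate each marginal from the selected set.
        p = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
    return best

# OneMax (maximize the number of ones) as a toy fitness function.
best = eda(sum, 20)
```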
     1. The convergence of estimation of distribution algorithms is studied. First, a model of EDAs with finite population is built by incorporating an error term into the expected distribution of the parent population. Then the convergence of EDAs with finite population is proved under three widely used selection schemes.
     2. The Bayesian optimization algorithm (BOA) is improved in three respects. First, a BOA incorporating local structure learning is proposed to reduce the computational cost, and the complexity of the proposed algorithm is analyzed. Second, according to the characteristics of BOA, we discuss how to discover and use prior knowledge in general optimization problems. Concretely, the information discovered in the previous generation is used as prior knowledge and incorporated into the Bayesian network learning; as a result, the reliability of the learned networks and the performance of the proposed algorithm are improved. Finally, we discuss the population diversity of BOA and design a diversity function; through this function a mutation operator is incorporated into BOA, with the objective of maintaining population diversity and keeping the algorithm from being trapped in local optima. Simulations indicate that all three algorithms are effective.
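The diversity-triggered mutation in point 2 can be sketched as follows. The diversity measure used here (mean per-bit variance of a binary population) and the threshold/rate values are illustrative assumptions, not the dissertation's exact design:

```python
import random

def diversity(pop):
    """Illustrative diversity measure: mean per-bit variance p*(1-p)."""
    n = len(pop[0])
    ps = [sum(ind[i] for ind in pop) / len(pop) for i in range(n)]
    return sum(p * (1 - p) for p in ps) / n

def mutate_if_converged(pop, threshold=0.05, rate=0.1, rng=None):
    """Apply bit-flip mutation only when diversity drops below the threshold."""
    rng = rng or random.Random(0)
    if diversity(pop) >= threshold:
        return pop  # still diverse: leave the population untouched
    return [[b ^ 1 if rng.random() < rate else b for b in ind] for ind in pop]
```

A fully converged population has diversity 0 and gets mutated; a balanced one (diversity 0.25, the maximum) passes through unchanged.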
     3. A special class of dynamic optimization problems is defined, and an algorithm is proposed to solve it. First, according to the characteristics of the changing environment, a class of dynamic problems in which the change time obeys a known distribution is addressed. Second, an adaptive PBIL (population-based incremental learning) algorithm is presented for this class of problems: the probability of the change-time random variable is used to adaptively adjust the probability model of the current population, increasing population diversity and reacting promptly to environmental changes. The experimental results show that adaptive PBIL tracks the optimal solution more reliably and accurately than standard PBIL.
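The PBIL learning rule referenced in point 3 is standard; the adaptive variant below only illustrates the idea of relaxing the probability model toward the uniform vector when a change is deemed likely. The exact adaptation schedule is an assumption for illustration:

```python
def pbil_update(p, best, lr=0.1):
    """Standard PBIL rule: shift the probability vector toward the best sample."""
    return [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]

def adaptive_pbil_update(p, best, change_prob, lr=0.1):
    """Adaptive sketch: after the usual update, pull the model toward the
    uniform vector (0.5 per bit) in proportion to the estimated change
    probability, restoring diversity before the environment shifts."""
    p = pbil_update(p, best, lr)
    return [(1 - change_prob) * pi + change_prob * 0.5 for pi in p]
```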
     4. For dynamic discrete optimization problems, a multi-population univariate marginal distribution algorithm (MUMDA) is proposed. The main idea is to divide the search space into several parts using several probability models, each corresponding to a population. The algorithm explores and exploits different regions and migrates the best solutions among populations. The experimental results show that MUMDA is effective and reacts to dynamic environments promptly.
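One generation of the multi-population idea in point 4 might look like the sketch below: each probability model evolves its own subpopulation, and the overall best solution "migrates" by nudging every model toward it. The migration rate and all parameter values are assumptions for illustration:

```python
import random

def sample(p, rng):
    return [1 if rng.random() < pi else 0 for pi in p]

def mumda_step(models, fitness, pop_size=50, n_select=25, rng=None):
    """One generation of a multi-population UMDA sketch with migration."""
    rng = rng or random.Random(0)
    bests, new_models = [], []
    for p in models:
        pop = sorted((sample(p, rng) for _ in range(pop_size)),
                     key=fitness, reverse=True)
        sel = pop[:n_select]
        bests.append(sel[0])
        # Re-estimate this model's marginals from its selected individuals.
        new_models.append([sum(ind[i] for ind in sel) / n_select
                           for i in range(len(p))])
    champion = max(bests, key=fitness)
    # Migration: nudge every model toward the global best solution.
    migrated = [[0.9 * pi + 0.1 * bi for pi, bi in zip(p, champion)]
                for p in new_models]
    return migrated, champion
```

Running a few steps on OneMax with two differently initialized models shows the champion improving across generations.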
     5. For dynamic single-objective optimization problems, a self-organization scheme is proposed. It combines the current local information with the historical information of optimal solutions to increase population diversity. Incorporating this scheme, a new self-organizing univariate marginal distribution algorithm (SOUMDA) is proposed. An experimental study on the dynamic sphere function shows that SOUMDA is effective and adapts to dynamic environments rapidly.
     6. For dynamic multimodal optimization problems, a new multi-population and diffusion UMDA (MDUMDA) is presented. The multi-population approach locates multiple optima in parallel, while the diffusion model increases diversity in a guided way: it gradually moves the neighbors of the previous optimal solutions away from them, enlarging the search space. Experiments on the moving peaks benchmark, a standard dynamic multimodal test problem, evaluate the proposed algorithm.
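The diffusion idea in point 6 can be sketched in continuous space: after a change, each stored neighbor of the previous optimum is pushed away from it along the line joining them. The move-away rule and the step size here are illustrative assumptions, not the dissertation's exact diffusion model:

```python
def diffuse(neighbors, old_opt, step=0.2):
    """Push each neighbor of the previous optimum away from it,
    enlarging the region searched after an environmental change."""
    return [[x + step * (x - o) for x, o in zip(ind, old_opt)]
            for ind in neighbors]
```

Each application of `diffuse` strictly increases a neighbor's distance from the old optimum, so repeated calls widen the search region.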
     7. For dynamic multi-objective optimization problems, a regularity model-based estimation of distribution algorithm with prediction is proposed. In the algorithm design, the central points and reference solutions of the Pareto-optimal set are used to describe the Pareto-optimal solutions, which provides a way to store the history of the Pareto-optimal solutions of dynamic multi-objective optimization problems. Then the prediction set of central points and reference solutions is generated by inertia prediction and Gaussian mutation. After an environmental change, the prediction set is incorporated into the current population to increase population diversity in a guided way. Finally, experiments on dynamic multi-objective optimization benchmarks evaluate the performance of the proposed algorithm.
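The inertia-prediction-plus-Gaussian-mutation step in point 7 can be sketched as follows: each stored center is extrapolated one step ahead from its last two positions, then perturbed with Gaussian noise to seed diversity. The noise scale and sample count are assumptions for illustration:

```python
import random

def predict_centers(prev, curr, sigma=0.05, n_samples=5, rng=None):
    """Inertia prediction sketch: extrapolate each center one step ahead,
    then scatter Gaussian-perturbed copies around the predicted point."""
    rng = rng or random.Random(0)
    preds = []
    for a, b in zip(prev, curr):
        # Inertia step: x_{t+1} ~ x_t + (x_t - x_{t-1}).
        step = [bi + (bi - ai) for ai, bi in zip(a, b)]
        for _ in range(n_samples):
            preds.append([si + rng.gauss(0.0, sigma) for si in step])
    return preds
```

A center that moved from the origin to (1, 1) is predicted near (2, 2), with small Gaussian scatter around it.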
引文
[1] Bagley J D, The behavior of adaptive system which employ genetic and correlation algorithm, Doctoral dissertation University of Michigam, 1967
    [2] Holland J H, Adaptation in Nature and Artificial Systems, MIT press, 1992
    [3] De Jong K A, An analysis of the behavior of a class of genetic adaptive systems, Ph.D Dissertation, University of Michigan, No.76-9381,1975
    [4] Goldberg D E, Genetic algorithms in search, optimization and machine learning, Addison-wesley, 1989
    [5]周明,孙树栋,遗传算法原理及应用,北京:国防工业出版社,2002
    [6]李敏强,寇纪松等,遗传算法的基本理论与应用,北京:科学出版社, 2002
    [7]王正志,薄涛,进化计算,长沙:国防科技大学出版社,2000
    [8] Larra?aga P, Lozano J.A, Estimation of Distribution Algorithms. A New Tool for Evolutionary Computation. Boston: Kluwer Academic Publishers, 2002
    [9]周树德,孙增圻,分布估计算法综述,自动化学报,2007,33(2):113-121
    [10] Baluja S, Population-Based Incremental Learning: A method for Integrating Genetic Search Based Function Optimization and Competitive Learning. CMU-CS-94-163, Available via. Anonymous ftp at: reports.adm.cs.cmu.edu,1994 Technical Report, Carnegie Mellon University (1994)
    [11] Sebag M, Ducoulombier A, Extending population-based incremental learning to continuous search spaces. In Parallel Problem solving from Nature-PPSN V. Springer-Verlag. Berlin. 1999: 418-427
    [12] Müehlenbein H, Paass C, From recombination of genes to the estimation of distributions I. Binary parameters. Parallel Problem Solving from Nature-PPSN IV, Berlin,1996.178-187
    [13] Müehlenbein H. The equation for response to seletion and its use for prediction. Evolutionary Computation, 1997, 5(3):303-346
    [14] Larra?aga P.R, Etxeberria J.A, Lozano, and Pena J.M, Optimization by Learning and Simulation of Bayesian and Gaussian Networks. Technical Report KZZA-IK-4-99, Department of Computer Science and Artificial Intelligence, University of the Basque Country, 1999:2254-2265
    [15] Harik G R, Lobel F G, Goldberg D E, The compacy genetic algorithm. Proceedings of the IEEE Conference on Evolutionary Computation. IEEE, Indianapolis, USA, 1998:523-528
    [16] De Bonet. J S, Isbell C L, Voila P, MIMIC: Finding optima by estimation probability densities. Advances in Neural Information Processing Systems, Cambridge: MIT Press,1997.(9):424-430
    [17] Baluja S, Davies S, Using Optimal dependency-trees for combinatorial optimization: Learningthe structure of the search space. Proceedings of the 14th International Conference on Maching Learning. San Francisco, CA: Morgan Kaufmann, 1997:30-38
    [18] Pelikan M, Müehlenbein H, The bivariate marginal distribution algorithm, Advances in Soft Computing一Engineering Design and Manufacturing, 1999:521-535
    [19] Müehlenbein H and Mahnig T , The Factorized distribution algorithm for additively decomposed functions. Congress on Evolutionary Computation, Piscataway, IEEE, 1999
    [20] Pelikan M, Goldberg D.E, and CantuPaz E, Linkage problem, distribution estimation and Bayesian networks. Evolutionary Computation, 2000, 8(3):311-340
    [21] Pelikan M, Goldberg D E and CantuPaz E, BOA: The Bayesian optimization algorithm. Proceedings of the Genetic and Evolutionary Computation Conference, Orlando, Florida, USA. 1999:525-532.
    [22] Pelikan M, Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of Evolutionary Algorithms. New York: Springer-Verlag, 2005
    [23] Larra?aga P, Etxeberria R, Lozano J A, Pena J M, Combinatorial optimization by learning and simulation of Bayesian networks. Proceedings of the Sixteenth Conference on Uncertainty in Artifcial Intelligence. Stanford, 2000:343-352
    [24] Ji?íO? ená?ek, Parallel Estimation of Distribution Algorithms. PhD thesis, Brno University of Technology, 2002.
    [25] Bosman P.A.N and Thierens D, Expanding from discrete to continuous estimation of distribution algorithms: The IDEA, Lecture Notes in Computer Science 1917: Parallel Problem Solving from Nature-PPSN VI, 2000:767-776
    [26] Larra?aga P, Etxeberria, Optimization in continuous domains by learning and simulation of Gaussian networks. Proceedings of the Genetic and Evolutionary Computation Conference Workshop Program, 2000:201-204
    [27] Harik G. Linkage Learning via probabilistic modeling in the ECGA.University of Illinois at Urbana-Champaign, IlliGAL Rep: 99010,1999
    [28] Larra?aga P, Etxeberria R, Lozano J A, Pena J M. Optimization in continuous domains by learning and simulation of Gaussian networks. Proceedings of the Genetic and Evolutionary Computation Conference Workshop Program. Las Vegas, Nevada, 2000: 201-204
    [29] Larra?aga P, Lozano J A, Bengoetxea E. Estimation of Distribution Algorithms Based on Multivariate Normal and Gaussian Networks. Technical Report KZZA-IK-1-01, Department of Computer Science and Artificial Intelligence, University of the Basque Country, 2001
    [30] Zhang Q and Müehlenbein H, On the convergence of a class of Estimation of distribution algorithms, IEEE Transactions on evolutionary computation. 2004, 8(2):127-136
    [31] Rastegar R, Meybodi M R, A study on the global convergence time complexity of estimation of distribution algorithms. Lecture Notes in Computer Science, 2005, Vol.3641:441-450
    [32] H?hfeld M, Rudolph G, Towards a theory of population based incremental learning.Proceedings of the 4th International Conference on Evolutionary Computation. IEEE, 1997
    [33] Cristina G, Lozano J A, Larra?aga P, Analyzing the PBIL algorithm by means of discrete dynamical systems. Complex Systems, 2001, 12(4), 465-479
    [34] Müehlenbein H, Mahnig T, Convergence theory and applications of the factorized distribution algorithm. Journal of Computing and Information Technology, 1999, 7(1): 19-32
    [35] Zhang Q. On stability of fixed points of limit models of univariate marginal distribution algorithm and factorized distribution algorithm. IEEE Transactions on Evolutionary Computation, 2004, 8(1): 80-93
    [36] Gao Y, Culberson J. Space complexity of estimation of distribution algorithms. Evolutionary Computation, 2005, 13(1): 125-143
    [37] Pelikan M, Bayesian Optimization Algorithm: from Single Level to Hierarchy. USA, Ann Arbor, MI 48106-1346, University of Illinois at Urbana-Champaign, 2002
    [38] Pelikan M, Sastry K, Goldberg D E. Scalability of the Bayesian optimization algorithm. International Journal of Approximate Reasoning, 2002, 31(3): 221-258
    [39] Müehlenbein H, HAons R, The estimation of distributions and the minimum relative entropy principle. Evolutionary Computation, 2005, 13(1): 1-27
    [40] Roberto S, Estimation of distribution algorithms with Kikuchi approximations. Evolutionary Computation, 2005, 13(1): 67-97
    [41] Ji?íO, Entropy-based convergence measurement in discrete estimation of distribution algorithms, Lozanoet al. (Eds): Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms. Springs-Verlag, 2002. 125-142
    [42] Zhang Q, Zhou A, Jin Y, RM-MEDA: A Regularity Model-Based Multiobjective Estimation of Distribution Algorithm, IEEE transactions on evolutionary computation, 2008, 12(1):41-63
    [43] Nazan K, Goldberg D E, Pelikan M, Multi-Objective Bayesian Optimization Algorithm. IlliGAL Report, University of Illinois at Urbana-Champaign, Urbana, Illinois, 2002
    [44] Pelikan M, Sastry K, Goldberg D E. Multiobjective hBOA, clustering, and scalability, Proceedings of Conference on Genetic and Evolutionary Computation. New York, USA: ACM Press, 2005. 663-670
    [45] Li H, Zhang Q, Tsang E P K, Ford J. Hybrid estimation of distribution algorithm for multiobjective Knapsack problem, Proceedings of 4th European Conference on Evolutionary Computation in Combinatorial Optimization. Coimbra, Portugal, 2004: 145-154
    [46] Laumanns M, Ocenasek J, Bayesian optimization algorithms for multi-objective optimization. Proceedings of the 7th International Conference on Parallel Problem Solving from Nature. London, UK: Springer-Verlag, 2002: 298-307
    [47] IInza P, Larra?aga P, Etxeberria B S. Feature subset selection by Bayesian network-based optimization. Artificial Intelligence, 2000, 123(1-2): 157-184
    [48] Thithi I, Control system parameter identification using the population based incremental learning, International conference on control, IEE, 1996:1309-1314
    [49] Santarelli S, Yu T, Goldberg D E, Altshuler E, O' Donnell T, Southall H. Military antenna design using simple and competent genetic algorithms, Mathematical and Computer Modelling, 2006, 43(9-10): 990-1022
    [50] Simionescu P A, Beale D G, Dozier G V, Teeth-number synthesis of a multispeed planetary transmission using an estimation of distribution algorithm, Journal of Mechanical Design, 2006, 128(1): 108-115
    [51] Saeys Y, Degroeve S, Aeyels D, van de Peer Y, Rouze P, Fast feature selection using a simple estimation of distribution algorithm: a case study on splice site prediction. Bioinformatics, 2003, 19(s2): 179-188
    [52] Yang X, Birkfellner W, Niederer P. Optimized 2D/3D medical image registration using the estimation of multivariate normal algorithm (EMNA). Proceedings of Biomedical Engineering, Innsbruck, Austria, 2005: 163-168
    [53] Joaquin R, Roberto S. Improving the discovery component of classier systems by the application of estimation of distribution algorithms, Proceedings of the Students Sessions, ACAI'99. Chania, Greece, 1999. 43-44
    [54] Cantu-Paz E. Pruning neural networks with distribution estimation algorithms, Proceedings of the Genetic and Evolutionary Computation Conference GECCO 2003. Berlin: Springer-Verlag, 2003. 790-800
    [55] Aickelin U, Li J, An estimation of distribution algorithm for nurse scheduling, Annals of Operations Research, 2007, 155(1):289-309
    [56] Zhang Q, Sun J, Tsang E P K, Ford J, Combination of guided local search and estimation of distribution algorithm for solving quadratic assignment problem. Proceedings of the Brid of Feather Workshops, Genetic and Evolutionary Computation Conference. Berlin: Springer-Verlag, 2004:42-48
    [57]周雅兰,王甲海,印鉴,一种基于分布估计的离散粒子群优化算法,电子学报,2008,36(6),1242-1249
    [58] Zhong Xiaoping, Ding Ji feng, Li Weijia, Zhang Yong,Robust Airfoil Optimization with Multi-objective Estimation of Distribution Algorithm,Chinese Journal of Aeronautics, 2008 (21) 289-295
    [59]李斌钟润添肖金超庄镇泉一种基于边缘分布估计的多目标优化算法,电子与信息学报2007 29(11):2683-2688
    [60]姜群,王越,基于最大熵的分布估计算法,微电子学与计算机,2007, 24(11):73-77
    [61]杨晔宏,李伟生,李翠霞,一种基于混合因子分析的分布估计算法,信息与控制,2006,35(4):448-453
    [62]赵中煜,彭宇,彭喜元,基于分布估计算法的组合电路测试生成,电子学报,2006, 12A:2384-2387
    [63]钟伟才,刘静,刘芳,焦李成,建立在一般结构网络上的分布估计算法,电子与信息学报,2005,27(3):467-471
    [64]熊盛武,刘麟,汪洋,基于贝叶斯网络的并行分布估计算法研究,武汉理工大学学报,2005,27(2):19-23
    [65]丁才昌,方勃,鲁小平,分布估计算法及其性能研究,武汉大学学报,2005,51(S2):125-129
    [66]钟伟才,刘静,刘芳,焦李成,二阶卡尔曼滤波分布估计算法,计算机学报,2004,27(9):1272-1278
    [67]彭星光,高晓光,魏小丰,基于贝叶斯优化算法的UCAV编队对联合目标的协同攻击研究,系统仿真学报,2008,20(10):2693-2697
    [68]钟小平,李为吉,唐伟, Pareto强度值实数编码多目标贝叶斯优化算法,西北工业大学学报,2007,25(3):321-327
    [69]张庆彬,吴惕华,刘波,克隆选择单变量边缘分布算法,浙江大学学报,2007,41(10):1715-1719
    [70]胡琨元,崔建江,郑秉霖,基于信息熵的自适应PBIL算法及其应用,系统仿真学报,2003,15(8)
    [71]符小卫,高晓光,基于贝叶斯优化的无人机路径规划算法,宇航学报,2006,27(3):421-425
    [72] Farina M, Deb K, and Amato P, Dynamic Multiobjective Optimization Problems: Test Cases, Approximations, and Applications, IEEE transactions on evolutionary computation, 2004, 8(5):425-442
    [73] Vazquez M, Whitley D.L, A comparison of genetic algorithms for the dynamic job shop scheduling problem, Proceedings of GECCO, 2000:251-266
    [74] Madureira A, Ramos C, Silva S.C, A genetic approach for dynamic job-shop scheduling problems, 4th Metaheuristics international conference, MIC, 2001, 41-45
    [75] Bendtsen C, Krink T, Phone routing using the dynamic memory model, Evolutionary Computation CEC’02 Proceedings of the Congress on IEEE, 2002:992-997
    [76] Hocaoglu C, Sanderson A.C, Planning multi-paths using speciation in genetic algorithms, Proceedings of IEEE International Conference on Evolutionary Computation, 1996:378-383
    [77] Bendtsen C, Krink T, Dynamic memory model for non-stationary optimization, Evolutionary Computation CEC’02 Proceedings of the Congress on IEEE, 2002:352-366
    [78] Fogel L J, Owens A J, Walsh M J, Artificial intelligence through simulated evolution. New York: John Wiley, 1966
    [79] Goldberg D E, Smith R E. Nonstationary function optimization using genetic algorithms with dominance and diploidy, Proc of the 2nd Int Conf on Genetic Algorithms. Lawrence Erlbaum Associates, 1987:59-68
    [80] Jin Y, Branke J, Evolutionary optimization in uncertain environments-a survey, IEEE transactions on evolutionary computations, 2005, 9(3):1-15
    [81] Branke J, Evolutionary approaches to dynamic optimization problems-updated survey, Proc of GECCO Workshop on Evolutionary Algorithms for Dynamic Optimization, 2001:365-369
    [82]王洪峰,汪定伟,杨圣祥,动态环境中的进化算法,控制与决策[J],2007,22(2):127-131
    [83]刘淳安,王宇平,动态多目标优化的进化算法及其收敛性分析,电子学报,2007,35(6):1118-1121
    [84]刘淳安,王宇平,基于新模型的动态多目标优化进化算法,计算机研究与发展,2008,45(4):603-611
    [85]曹宏庆,康立山,陈毓屏.动态系统的演化建模,计算机研究与发展,1999,36(8):923-931.
    [86]单世民,邓贵仕,动态环境下一种改进的自适应微粒群算法,系统工程理论与实践,2006,3:39-44.
    [87]窦全胜,周春光,徐中宇等,动态优化环境下的群核进化粒子群优化方法,计算机研究与发展,2006,43(1):89-95.
    [88]吴漫川,李元香,郑波尽,解决非静态优化问题的MEAP算法,计算机工程与科学,2005,27(8):73-75.
    [89]罗印升,李人厚,张维玺,基于免疫机理的动态函数优化算法,西安交通大学学报,2005,39(4):384-387.
    [90]尚荣华,马文萍,焦李成,公茂果,免疫遗忘动态多目标优化,哈尔并工程大学学报,2006,27:205-209.
    [91]陈善龙,张著洪,基于免疫机制的动态约束多目标优化免疫算法,贵州大学学报,2008,25(3):262-267
    [92]钱淑渠,张著洪,动态多目标免疫优化算法及性能测试研究,智能系统学报,2007,2(5):68-77
    [93] Ghosh A, Müehlenbein H, Univariate marginal distribution algorithms for non-stationary optimization problems, International journal of knowledge-based and intelligent engineering systems, 2004,8:129-138
    [94] Yang S, Yao X, Experimental study on population-based incremental learning algorithms for dynamic optimization problems, Soft computing, 2005, 9:815-834
    [95] Yang S, Memory-Enhanced Univariate Marginal Distribution Algorithms for Dynamic Optimization Problems, IEEE, 2005: 2560-2567
    [96] Yang S, Population-Based Incremental Learning with Memory Scheme for Changing Environments, GECCO’05, June 25–29, 2005, Washington, DC, USA. 2005:711-718
    [97] Yuan B, Orlowska M, Sadiq S, Extending a class of continuous estimation of distribution algorithms to dynamic problems, Optimization Letters, 2008, 2:433-443
    [98] Milo? Kobliha, Josef Schwarz, and Ji?íO? ená?ek, Bayesian Optimization Algorithms forDynamic Problems,EvoWorkshops 2006, LNCS 3907, Springer-Verlag Berlin Heidelberg, 2006:800-804
    [99]阎平凡,张长水,人工神经网络与模拟进化计算,北京:清华大学出版社,2005
    [100] Pelikan M, Goldberg D E, Lobo F. A Survey of Optimization by Building and Using Probabilistic Models. IlliGAL Report No. 99018, University of Illinois at Urbana-Champaign, Illinois Genetic Algorithms Laboratory, Urbana, Illinois, 1999
    [101] Baluja S, An Empirical Comparison of Seven Iterative and Evolutionary Function Optimizaion Heuristics. Technical Report CMU-CS-95-193, Computer Science Department, Carnegie Mellon University, 1995
    [102] Baluja S, Caruana R, Removing the genetics from standard genetic algorithm. In: Proceedings of the International Conference on Machine Learning. San Mateo, USA: Morgan Kaufmann, 1995. 38-46
    [103] Müehlenbein H, Mahnig T, FDA a scalable evolutionary algorithm for the optimization of additively decomposed functions. Evolutionary Computation, 1999, 7(4): 353-376
    [104] Pelikan M, Müehlenbein H, Marginal distributions in evolutionary algorithms. In: Proceedings of the International Conference on Genetic Algorithms. Brno, Czech Republic: Technical University of Brno, Publisher, 1998. 90-95
    [105] Heckerman D, Geiger D, and Chickering D.M, Learning Bayesian Networks: The combination of knowledge and statistical data. Machine Learning, 1995, 20(3): 197-243.
    [106] González C, Lozano J.A, Larra?aga P, The convergence behavior of the PBIL algorithm: a preliminary approach, Proceedings of ICANNGA'01, Prague, Czech Republic, 2001:228-231
    [107] Yang S, Construction dynamic test environments for genetic algorithms based on problem difficulty, IEEE,2004:1262-1269
    [108] Yang S, Non-stationary problem optimization using the primaldual genetic algorithm. Proc. of the Congrcss on Evolutionary Computation, 2003, 4:2246-2253.
    [109] Carlisle A, Dozier G, Tracking changing extrema with adaptive particle swarm optimizer, Auburn University, Rep:CSSE01208, 2001
    [110] Angeline P. J, Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences, Proc17th Annual Conf. Evolutionary Programming1 New York: Springer-Verlag , 1998:601-610
    [111] Morrison R. W and DeJong K. A, A test problem generator for non-stationary environments. In Congress on Evolutionary Computation, IEEE, 1999, 3:2047-2053
    [112] Branke J, Memory enhanced evolutionary algorithms for changing optimization problems, Proc. of the Congress on Evolutionary Computation, IEEE, 1999, 3:1875-1882
    [113] Branke J, Salihoglu E, Uyar S, Towards an analysis of dynamic environments, GECCO’05, ACM, Washington DC, 2005:1433-1440
    [114] Reeves C, Karatza H, Dynamic sequencing of a multi-processor system: A genetic algorithm approach, Proc of lst Int Conf on Artificial Neural Net s and Genetic Algorithms. San Francisco : Morgan Kaufmann Publishers, 1993:491-495.
    [115] Bierwirth C , Kopfer H, Mattfeld D C, Genetic algorithm based scheduling in a dynamic manufacturing environment, Proc. of IEEE Conf. on Evolutionary Computation. Piscataway: IEEE, 1995:439-443
    [116] Bierwirth C, Mattfeld D C, Production scheduling and rescheduling with genetic Algorithms, Evolutionary Computation, 1999, 7 (1) : 1-18
    [117] Lin S C, Goodman E D, Punch W F, A genetic algorithm approach to dynamic job shop scheduling problems, Proc of the 7th Int Conf on Genetic Algorithm. San Francisco: Morgan Kaufmann Publishers, 1997: 481-488
    [118] Pico C.A.G, Wainwright R.L, Dynamic scheduling of computer tasks using genetic algorithms, Proc of the 1st IEEE Conf on Evolutionary Computation. Piscataway: IEEE Service Center, 1994: 829-833.
    [119] Krishnakumar K, Micro-genetic algorithms for stationary and non-stationary function optimization. Intelligent Control and Adaptive Systems Proc of the SPIE. Philadelphia, 1989: 289-296
    [120] Cobb H G, An investigation into the use of hypermutation as an adaptive operator in genetic algorithms having continuous, time-dependent nonstationary environment. Washington: Naval Research Laboratory, 1990
    [121] Vavak F, Fogarty T C, J ukes K, Adaptive combustion balancing in multiple burner boiler using a genetic algorithm with variable range of local search. Proc of the 7th Int Conf on Genetic Algorithm. San Francisco: Morgan Kaufmann Publishers, 1997: 719-726.
    [122] Gerratt S M, Walker J H, Genetic algorithms: Combining evolutionary and non-evolutionary methods in tracking dynamic global optima. Proc of the Genetic and Evolutionary Computation Conf. San Francisco: Morgan Kaufmann Publishers, 2002: 359-366.
    [123] Simoes A, Costa E, Using GAs to deal with dynamic environment s: A comparative study of several approaches based on promoting diversity. Proc of the Genetic and Evolutionary Computation Conf. San Francisco: Morgan Kaufmann Publishers, 2002: 698-710.
    [124] Tinos R, Carvalho A, A genetic algorithm with gene dependent mutation probability for non2stationary optimization problems, Proc of Congress on Evolutionary Computing. Piscataway: IEEE Service Center, 2004: 1278-1285
    [125] Grefenstette J J, Genetic algorithms for changing environments, Parallel Problem Solving from Nature. Brussels, 1992: 137-144.
    [126] Eriksson R, Olsson B, On the performance of evolutionary algorithms with lifetime adaptation in dynamic fitness landscapes. Proc of Congress on Evolutionary Computing. Piscataway: IEEE Service Center, 2004: 1293-1300.
    [127] Mori N, Kita H, Nishikawa Y, Adaptation to a changing environment by means of thethermodynamical genetic algorithm. Parallel Problem Solving from Nature. Berlin: Springer Publishers, 1996: 513-522.
    [128] Andersen H C, An investigation into genetic algorithms, and the relationship between speciation and the t racking of optima in dynamic functions. Brisbane: Queensland University of Technology, 1991
    [129] Hocaoglu C, Sanderson A C, Planning multi-paths using speciation in genetic algorithms. Proc of the 3rd IEEE Int Conf on Evolutionary Computation. Piscataway: IEEE Service Center, 1996: 378-383.
    [130] Collard P, Escazut C, Gaspar A, An evolutionary approach for time dependant optimization, Int J on Artificial Intelligence Tols, 1997, 6 (4) : 665-695
    [131] Yang S, The primal-dual genetic algorithm. Proc of the 3rd Int Conf on Hybrid Intelligent System. IOS Press, 2003
    [132] Ng K.P, Wong K.C, A new diploid scheme and dominance change mechanism for non2stationary function optimization, Proc of 6th Int Conf on Genetic Algorithms. San Francisco: Morgan Kaufmann Publishers, 1995: 159-166
    [133] Uyar A, Harmanci A. Preserving diversity in changing environment s through diploidy with adaptive dominance, Proc of the Genetic and Evolutionary Computation Conf. San Francisco: Morgan Kaufmann Publishers, 2002
    [134] Hadad B S, Eick C F, Supporting polyploidy in genetic algorithms using dominance vectors, Proc of the 6th Int Conf on Evolutionary Programming. San Francisco: Morgan Kaufmann Publishers, 1997: 223-234
    [135] Ryan C, Diploidy without dominance, Proc of 3rd Nordic Workshop on Genetic Algorithms. 1997: 63-70
    [136] Ryan C, Collins J J, Polygenic inheritance—A haploid scheme that can outperform diploidy, Proc of the 5th Int Conf on Paraller Problem Solving from Berlin: Springer Publisher, 1997: 178-187
    [137] Louis S J, Xu Z, Genetic algorithms for open shop scheduling and re-scheduling, ISCA 11th Int Conf on Computers and Their Applications. Piscataway: IEEE Service Center, 1996: 99-102
    [138] Trojanowski K, Michalewicz Z, Xiao J, Adding memory to the evolutionary planner/ navigator, Congress on Evolutionary Computaton. Piscataway: IEEE Service Cente, 1997: 483-487
    [139] Bendt sen C N, Krink T, Dynamic memory model for non-stationary optimization, Congress on Evolutionary Computatoin. Piscataway: IEEE Service Center, 2002: 1452150
    [140] Bendt sen C N, Krink T, Phone routing using the dynamic memory model, Congress on Evolutionary Computation. Piscataway: IEEE Service Center, 2002: 992-997
    [141] Branke J, Kaubler T, Schmidt C, et al, A multi-population approach to dynamic optimization problems, Adaptive Computing in Design and Manufacturing, Berlin: Springer-Verlag, 2000:299-308.
    [142] Ursem R K, Multinational GA optimization technique in dynamic environments, Proceeding of Genetic and Evolutionary Computation, San Francisco: Morgan Kaufmann Publisher, 2000: 19-26
    [143] Blackwell T, Branke J, Multiswarms, exclusion, and anti-convergence in dynamic environments, IEEE Transactions on Evolutionary Computation, 2006,10(4):459-472
    [144] Oh S K, Lee C Y, Lee J J, A new distributed evolutionary algorithm for optimization in nonstationary environments, Congress on Evolutionary Computation. Piscataway: IEEE Service Center, 2002: 1875-1882
    [145] Bingul Z, Adaptive genetic algorithms applied to dynamic multiobjective problems, Applied Soft Computing, 2007, 7:791–799
    [146] Deb K, Udaya B.R.N, Karthik S, Dynamic Multi-objective Optimization and Decision-Making Using Modified NSGA-II: A Case Study on Hydro-thermal PowerScheduling, Lecture Notes in Computer Science, Evolutionary Multi-Criterion Optimization, springer, 4th International Conference, Matsushima, Japan, 2007: 803-817
    [147] Hatzakis I, Wallace D, Dynamic Multi-Objective Optimization with Evolutionary Algorithms: A Forward-Looking Approach, Genetic And Evolutionary Computation Conference, Seattle, Washington, USA , 2006:1201-1208,
    [148] Amato P, Farina M, An ALife-Inspired Evolutionary Algorithm for Dynamic Multiobjective Optimization Problems, Soft Computing, 2005, 1:113–125
    [149] Zhang Z, Multiobjective optimization immune algorithm in dynamic environments and its application to greenhouse control, Applied Soft Computing, 2008, 8:959–971
    [150] Wang YuPing, Dang Ch, An evolutionary algorithm for dynamic multiobjective optimization, Applied mathematics and computation, ,2008, 205(1):6-18
    [151] Harik, G.R, Lobo F.G, Goldberg D.E, The Compact Genetic Algorithm. IEEE Transactions on Evolutionary Computation, 1999, 3(4):287-297
    [152] Baluja S, Davis S, Fast Probabilistic Modeling for Combinatorial Optimization. The 15th National Conference on Artificial Intelligence, Madison, Wisconsin, AAAI Press, 1998: 469-476
    [153] Pelikan M, Goldberg D.E, and Cantu-Paz E, BOA: The Bayesian optimization algorithm, Proc. Genetic Evolutionary Computation Conf, 1999:525–532
    [154] Zhang B.T, A Bayesian framework for evolutionary computation, Congr. Evolutionary Computation, Washington, DC, 1999, 1: 722–228
    [155] H?hfeld M, Rudolph G, Toward a theory of population-based incremental learning, 4th IEEE Conf. Evolutionary Computation, Indianapolis, IN, 1997:1–5
    [156] Etxeberria R, Larra?aga P. Global optimization using Bayesian networks, The 2nd Symposium on Artificial Intelligence, Habana, Cuba, 1999:332-339
    [157] Pelikan M, Goldberg D.E, Tsutsui S, Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of Evolutionary Algorithms, SICE Annual Conference in Fukui, Fukui University, Japan, 2003:2738-2743
    [158] Pelikan M, Sastry K, Fitness inheritance in the Bayesian Optimization Algorithm, Urbana Illinois: IlliGAL Report No. 2004009, 2004
    [159] Munetomo M, Murao N, Akama K, Empirical Studies on Parallel Network Construction of Bayesian Optimization Algorithms, IEEE, 2005: 1524-1531
    [160] Liang Ruixin, Zhang Changshui, Guo Guoying, Chai Aihong, A chaotic Bayesian optimization algorithm, Computer Engineering and Applications, 2004, 36: 95-97
    [161] Qiang Lei, Xiao Tianyuan, Distributed cooperative Bayesian optimization method for solving scheduling problems, Journal of Tsinghua University (Science and Technology), 2005, 45(10): 1328-1331
    [162] Chen Haixia, Yuan Senmiao, Jiang Kai, A Bayesian network optimization algorithm based on a preservation strategy, Computer Engineering and Applications, 2005: 61-63
    [163] Yao Jintao, Lin Yaping, Kong Yuyan, Chen Zhiping, Tong Tiaosheng, A decision-graph Bayesian-based multi-objective QoS multicast routing algorithm, Journal of System Simulation, 2005, 17(2): 457-460
    [164] Ahn C.W, Ramakrishna R.S, Goldberg D.E, Real-Coded Bayesian Optimization Algorithm: Bringing the Strength of BOA into the Continuous World, GECCO 2004, LNCS 3102, Springer-Verlag Berlin Heidelberg, 2004: 840–851
    [165] Ocenásek J, Kern S, Hansen N, Koumoutsakos P, A Mixed Bayesian Optimization Algorithm with Variance Adaptation, PPSN VIII, LNCS 3242, Springer-Verlag Berlin Heidelberg, 2004: 352–361
    [166] Li J, Aickelin U, The Application of Bayesian Optimization and Classifier Systems in Nurse Scheduling, PPSN VIII, LNCS 3242, Springer-Verlag Berlin Heidelberg, 2004: 581–590
    [167] Schwarz J, and Ocenásek J, The knowledge-based evolutionary algorithm KBOA for hypergraph bisectioning, Proceedings of the Fourth Joint Conference on Knowledge based Software Engineering. Brno, Czech Republic: IOS Press, 2000:51-58
    [168] Mühlenbein H, Mahnig T, Rodriguez A.O, Schemata, distributions, and graphical models in evolutionary optimization, Journal of Heuristics, 1999, 5(2): 215–247
    [169] Holland J H. Adaptation in natural and artificial systems, Ann Arbor, MI: University of Michigan Press, 1975
    [170] Pelikan M, Bayesian Optimization Algorithm: From Single Level to Hierarchy, PhD dissertation, University of Illinois at Urbana-Champaign, Ann Arbor, MI 48106-1346, USA, 2002
    [171] Lin Yaping, Yang Xiaolin, A fast probability analysis evolutionary algorithm and its performance study, Acta Electronica Sinica, 2001, 29(3): 178-181
    [172] Mühlenbein H, Mahnig T, Rodriguez A.O, Schemata, distributions, and graphical models in evolutionary optimization, Journal of Heuristics, 1999, 5(2): 215–247
    [173] Fyfe C, Structured population-based incremental learning, Soft Computing, 1999, 2: 191-198
    [174] Morrison R.W, De Jong K.A, Triggered hypermutation revisited, Congress on Evolutionary Computation, Piscataway: IEEE Service Center, 2000: 1025-1032
    [175] Grefenstette J J, Genetic algorithms for changing environments, Parallel Problem Solving from Nature, Brussels, 1992: 137-144.
    [176] Oppacher F, Wineberg M. The shifting balance genetic algorithm: Improving the GA in a dynamic environment, Proc of Genetic and Evolutionary Computation Conf. San Francisco: Morgan Kaufmann Publisher, 1999: 504-510.
    [177] Whitley L.D, Fundamental principles of deception in genetic search, in: Rawlins G.J.E (ed), Foundations of Genetic Algorithms, 1991, 1(3): 221–241
    [178] Zinchenko L, Müehlenbein H, Kureichik V, Mahnig T, Application of the univariate marginal distribution algorithm to analog circuit design, Proceedings of NASA/DoD Conference on Evolvable Hardware, 2002: 93-101
    [179] Cartwright H.M, Tuson A.L, Genetic algorithms and flowshop scheduling: Towards the development of a real-time process control system, Proceedings of the AISB Workshop on Evolutionary Computing, San Francisco: Morgan Kaufmann Publishers, 1994: 277-290
    [180] González C, Lozano J.A, Larrañaga P, Mathematical Modelling of UMDAc Algorithm with Tournament Selection. Behaviour on Linear and Quadratic Functions, International Journal of Approximate Reasoning, 2002, 31(3): 313–340
    [181] Mühlenbein H, Zinchenko L, Kureichik V, Mahnig T, Effective Mutation Rate for Probabilistic Models in Evolutionary Analog Circuit Design, Proceedings of the IEEE International Conference on Artificial Intelligence Systems, 2002: 401-406
    [182] Tang M, Raymond Y.K.L, A Hybrid Estimation of Distribution Algorithm for the Minimal Switching Graph Problem, Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, 2005: 708–713
    [183] Yuan B, Gallagher M, On the importance of diversity maintenance in Estimation of Distribution Algorithms, Proceedings of the Genetic and Evolutionary Computation Conference, Washington DC, USA, 2005: 719-726
    [184] Oppacher F, Wineberg M. The shifting balance genetic algorithm: Improving the GA in a dynamic environment, Proceeding of Genetic and Evolutionary Computation, San Francisco: Morgan Kaufmann Publisher, 1999: 504-510
    [185] Pan G, Dou Q, Liu X, Performance of Two Improved Particle Swarm Optimizations in Dynamic Optimization Environments, Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications, 2006: 2075-2085
    [186] Blackwell T, Branke J, Multiswarms, exclusion, and anti-convergence in dynamic environments, IEEE Transactions on Evolutionary Computation, 2006,10(4):459-472
    [187] Branke J, The moving peaks benchmark website (online), Available: http://www.aifb.uni-karlsruhe.de/~jbr/movpeaks
    [188] Schütze O, Mostaghim S, Dellnitz M, and Teich J, Covering Pareto sets by multilevel evolutionary subdivision techniques, 2nd International Conference on Evolutionary Multi-Criterion Optimization, Faro, Portugal, LNCS:2632, 2003:118–132
    [189] Kambhatla N, Leen T.K, Dimension reduction by local principal component analysis, Neural Comput, 1997, 9(7):1493–1516
    [190] Leung Y, Wang Y, U-Measure: A Quality Measure for Multiobjective Programming, IEEE Transactions on Systems, Man, and Cybernetics–Part A: Systems and Humans, 2003, 33(3): 337-343
    [191] Zitzler E, Deb K, Thiele L, Multiobjective evolutionary algorithms: a comparative case study and the Strength Pareto approach, IEEE Transactions on Evolutionary Computation, 1999, 3: 257-271
