Engineering Research Center of Internet of Things Technology Applications, Ministry of Education
National Natural Science Foundation of China (61573167, 61572237)
To address the grey wolf optimization (GWO) algorithm's low convergence accuracy and its tendency to fall into local optima, this paper proposes a hybrid grey wolf optimization (HGWO) algorithm based on teaching-learning-based optimization. First, good-point set theory is used to generate the initial population, improving its ergodicity. Second, a nonlinear control-parameter strategy is proposed that strengthens global exploration in the early iterations, preventing the algorithm from falling into local optima, and strengthens local exploitation in the later iterations, improving convergence accuracy. Finally, by combining teaching-learning-based optimization (TLBO) with particle swarm optimization (PSO), the original position-update formula is modified to improve the algorithm's search behavior and thereby its convergence performance. To verify the effectiveness of HGWO, nine standard benchmark functions are used to compare it with the classical GWO algorithm, other swarm intelligence optimization algorithms, and other improved GWO variants. The experimental results show that the proposed HGWO outperforms the classical GWO algorithm and the other swarm intelligence optimization algorithms, and holds some advantage among the improved variants.
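The three components named in the abstract can be sketched in code. The exact formulas used in the paper (the good-point-set construction, the nonlinear decay law for the control parameter a, and the hybrid TLBO/PSO position-update term) are not given in the abstract, so the specific choices below — a quadratic decay of a from 2 to 0, a prime-based good-point construction, and a TLBO teacher-phase step toward the alpha wolf — are illustrative assumptions layered onto the standard GWO update, not the authors' method:

```python
import numpy as np

def good_point_set(n, dim):
    """Good-point-set initialization in [0, 1)^dim.
    Assumed construction: r_j = 2*cos(2*pi*j/p) for the smallest prime
    p >= 2*dim + 3; point i is the fractional part of i * r."""
    p = 2 * dim + 3
    while any(p % k == 0 for k in range(2, int(p ** 0.5) + 1)):
        p += 1  # advance to the next prime
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)
    i = np.arange(1, n + 1).reshape(-1, 1)
    return (i * r) % 1.0  # fractional parts lie in [0, 1)

def nonlinear_a(t, T, power=2.0):
    """Hypothetical nonlinear control parameter: decays from 2 to 0.
    A power > 1 keeps a large early (global search) and shrinks it
    fast late (local exploitation)."""
    return 2.0 * (1.0 - (t / T) ** power)

def sphere(x):
    return float(np.sum(x ** 2))

def hgwo_sketch(obj, dim=5, n=20, T=100, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = lb + good_point_set(n, dim) * (ub - lb)  # good-point-set init
    for t in range(T):
        fit = np.array([obj(x) for x in X])
        order = np.argsort(fit)
        # the three best wolves lead the pack (standard GWO)
        alpha, beta, delta = (X[order[k]].copy() for k in range(3))
        a = nonlinear_a(t, T)
        mean = X.mean(axis=0)  # TLBO "teacher phase" uses the class mean
        for i in range(n):
            # standard GWO guidance by the three leaders
            cand = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                cand += leader - A * np.abs(C * leader - X[i])
            cand /= 3.0
            # hypothetical hybrid term: TLBO teacher step with alpha
            # acting as the teacher (teaching factor TF in {1, 2})
            TF = rng.integers(1, 3)
            cand += rng.random(dim) * (alpha - TF * mean)
            X[i] = np.clip(cand, lb, ub)
    fit = np.array([obj(x) for x in X])
    return float(fit.min())

best = hgwo_sketch(sphere)
```

On the sphere function this sketch converges toward the origin; swapping in the paper's actual decay law and update formula only requires changing `nonlinear_a` and the hybrid term inside the loop.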