In recent years, as deep learning models, and deep reinforcement learning models in particular, have grown ever larger, their training cost and hyperparameter search space have grown as well. However, most traditional hyperparameter search algorithms train candidate configurations sequentially, and may take weeks or even months to find a good hyperparameter configuration. To reduce the long hyperparameter search time in deep reinforcement learning and the difficulty of finding good configurations, this paper proposes a new hyperparameter search algorithm: asynchronous parallel hyperparameter search based on population evolution (PEHS). Drawing on ideas from evolutionary algorithms, PEHS searches a population of models and their hyperparameters asynchronously and in parallel under a fixed resource budget, thereby improving search performance. We design and implement the search algorithm on the Ray parallel distributed framework; experiments show that on this framework the population-evolution-based asynchronous parallel hyperparameter search outperforms traditional hyperparameter search algorithms and performs stably.
In recent years, the training cost of deep learning models, and deep reinforcement learning models in particular, has kept increasing, and so has the hyperparameter search space. However, most traditional hyperparameter search algorithms train configurations sequentially and may need weeks or even months to find a good hyperparameter configuration. To shorten the hyperparameter search time for deep reinforcement learning and to find better configurations, this paper proposes a new hyperparameter search algorithm: Asynchronous Parallel Hyperparameter Search with Population Evolution (PEHS). The algorithm combines ideas from evolutionary algorithms and uses a fixed resource budget to search a population of models and their hyperparameters asynchronously and in parallel, improving search performance. Because deep learning experiments involve complex networks and long runtimes, the experiments in this paper are run on the Ray parallel distributed framework. The experiments show that on this framework the population-evolution-based asynchronous parallel hyperparameter search outperforms traditional hyperparameter search algorithms and performs stably.
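To make the exploit/explore cycle of population-based hyperparameter evolution concrete, the following is a minimal synchronous sketch, not the paper's PEHS implementation: `evaluate` is a toy objective standing in for an expensive RL training run, all names are hypothetical, and the paper's version runs asynchronously on Ray rather than in a single-process loop.

```python
import random

def evaluate(lr, steps):
    # Toy objective standing in for an RL training run: score peaks at
    # lr = 0.1. (Hypothetical; a real worker would train an agent.)
    return -abs(lr - 0.1) * steps

def pbt_search(pop_size=8, generations=20, seed=0):
    """Minimal sketch of population-based hyperparameter evolution:
    train all members, rank them, then let the bottom half copy
    (exploit) and perturb (explore) the top half's hyperparameters."""
    rng = random.Random(seed)
    population = [{"lr": rng.uniform(1e-4, 1.0)} for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda m: evaluate(m["lr"], 100),
                        reverse=True)
        top = scored[: pop_size // 2]
        bottom = scored[pop_size // 2:]
        for loser, winner in zip(bottom, top):
            # Exploit: copy a winner; explore: perturb its value.
            loser["lr"] = winner["lr"] * rng.choice([0.8, 1.2])
        population = top + bottom
    return max(population, key=lambda m: evaluate(m["lr"], 100))

best = pbt_search()
```

In PEHS this generation barrier is removed: workers evaluate and perturb members asynchronously under the fixed resource budget, so fast trials are not blocked by slow ones.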