
Computer Engineering, 2023, Vol. 49, Issue (2): 279-287, 295. doi: 10.19678/j.issn.1000-3428.0063943

• Development Research and Engineering Application •

Stacker Scheduling and Repository Location Recommendation Based on Multi-Task Reinforcement Learning

RAO Dongning, LUO Nanyue

  1. School of Computers, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2022-02-15  Revised: 2022-04-02  Published: 2022-07-18
  • About the authors: RAO Dongning (born 1977), male, associate professor, Ph.D.; his main research interests are intelligent planning and deep learning. LUO Nanyue (corresponding author) is a master's student.
  • Funding:
    General Program of the Natural Science Foundation of Guangdong Province (2021A1515012556).

Abstract: Stacker scheduling is an essential task in warehouse logistics automation, where inbound-outbound efficiency and the placement of goods affect the overall benefit of the warehousing system. When handling larger-scale scheduling problems, traditional scheduling methods suffer from limited performance and reduced returns because they must process large state spaces. Meanwhile, repository location optimization is closely tied to scheduling operation, but most existing work ignores it when addressing scheduling problems. To solve the stacker scheduling problem in warehousing, this study proposes a scheduling method based on the deep reinforcement learning algorithm Proximal Policy Optimization (PPO). The method treats scheduling as a sequential decision-making problem and learns through continuous interaction between the agent and the environment, thereby optimizing scheduling in a changing environment. For the repository location optimization problem that accompanies scheduling, a joint scheduling and location recommendation algorithm based on multi-task learning is proposed: an Actor network for location recommendation is built on the scheduling network and trained through interactive feedback with the Critic network, promoting joint training and improving overall benefit. Experimental results show that, compared with the original algorithm model, the proposed scheduling method improves the cumulative reward metric by 33.6% on average, and that the proposed multi-task joint algorithm can effectively handle stacker scheduling and repository location optimization scenarios, providing a feasible solution for this type of multi-task problem.

Key words: stacker scheduling, repository location optimization, multi-task learning, deep reinforcement learning, Proximal Policy Optimization (PPO)
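The architecture described in the abstract, namely a scheduling policy network extended with an additional Actor head for repository location recommendation that trains against a shared Critic, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration only: the class name MultiTaskActorCritic, all layer sizes, and the state and action dimensions are hypothetical and are not taken from the paper.

# Illustrative sketch only: a shared backbone with two actor heads (stacker
# scheduling and repository location recommendation) and one critic, mirroring
# the multi-task structure described in the abstract. All dimensions, layer
# sizes, and names are hypothetical assumptions, not values from the paper.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class MultiTaskActorCritic(nn.Module):
    def __init__(self, state_dim=32, n_schedule_actions=8, n_locations=50):
        super().__init__()
        # Shared feature extractor over the warehouse state.
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
        )
        # Actor head for stacker scheduling (which request to serve next).
        self.schedule_actor = nn.Linear(128, n_schedule_actions)
        # Actor head for repository location recommendation (which slot to use).
        self.location_actor = nn.Linear(128, n_locations)
        # Critic head shared by both tasks, providing feedback to the actors.
        self.critic = nn.Linear(128, 1)

    def forward(self, state):
        h = self.backbone(state)
        sched_dist = Categorical(logits=self.schedule_actor(h))
        loc_dist = Categorical(logits=self.location_actor(h))
        value = self.critic(h)
        return sched_dist, loc_dist, value

# Usage example with a random state; a full PPO loop would additionally clip
# the ratio of new to old action probabilities and sum the two tasks' losses.
if __name__ == "__main__":
    net = MultiTaskActorCritic()
    state = torch.randn(1, 32)
    sched_dist, loc_dist, value = net(state)
    print(sched_dist.sample().item(), loc_dist.sample().item(), value.item())

The shared backbone lets the two tasks reuse a common state representation, while the single critic supplies the interactive feedback between the recommendation Actor and the Critic network that the abstract mentions; whether the paper shares all backbone layers or only part of them is not specified here.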
