A mixed cooperative-competitive multi-agent system consists of controlled target agents and uncontrolled external agents. The target agents cooperate with one another and compete against the external agents, adapting to dynamic changes in both the environment and the external agents in order to complete assigned tasks. Existing work on training target agents to learn an optimal task-completion policy falls into two categories. (1) Approaches that focus only on cooperation among the target agents treat the external agents as part of the environment and train the target agents with multi-agent reinforcement learning; these approaches struggle when the external agents' policies are unknown or change dynamically. (2) Approaches that focus only on competition between the target agents and the external agents model the competition as a two-player game and train the target agents via self-play; these approaches mainly address the setting with a single target agent and a single external agent and are difficult to extend to systems with multiple target agents and multiple external agents. Combining these two lines of work, this study proposes a counterfactual regret advantage-based self-play approach. Specifically, the study first designs a counterfactual regret advantage policy gradient method, built on counterfactual regret minimization and the counterfactual multi-agent policy gradient, so that the target agents can update their policies more accurately. Second, to cope with dynamic changes in the external agents' policies during self-play, the study introduces imitation learning, which takes the external agents' historical decision trajectories as demonstration data and imitates their policies, thereby explicitly modeling the external agents' behaviors. Finally, based on the counterfactual regret advantage policy gradient and the external-agent behavior model, the study designs a self-play training method that can learn an optimal joint policy for multiple target agents even when the external agents' policies are unknown or change dynamically. Taking cooperative electromagnetic countermeasures as a case study, three typical tasks with mixed cooperative-competitive characteristics are designed. Experimental results show that, compared with other approaches, the proposed approach improves self-play performance by at least 78%.
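The counterfactual regret advantage combines two ideas named in the abstract: a COMA-style counterfactual baseline (marginalizing one agent's action while holding the other agents' actions fixed) and the positive-part clipping used in regret matching from counterfactual regret minimization. The following is a minimal tabular sketch of that combination, not the paper's implementation: `counterfactual_regret_advantage` and `policy_gradient_step` are hypothetical names, and the softmax-policy setup is an assumption (the paper would use learned actor and critic networks).

```python
import numpy as np

def counterfactual_regret_advantage(q_values, policy):
    """Illustrative combination of a counterfactual baseline and regret clipping.
    q_values: Q(s, (a, a_-i)) for each action a of one target agent, with the
              other agents' joint action a_-i held fixed (shape [A]).
    policy:   the agent's current policy pi(a|s) over the same actions (shape [A]).
    """
    baseline = np.dot(policy, q_values)   # counterfactual baseline E_{a~pi}[Q]
    advantage = q_values - baseline       # COMA-style counterfactual advantage
    return np.maximum(advantage, 0.0)     # keep only positive regret, as in regret matching

def policy_gradient_step(logits, q_values, lr=0.1):
    """One REINFORCE-style update weighted by the regret advantage."""
    policy = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    adv = counterfactual_regret_advantage(q_values, policy)
    grad = np.zeros_like(logits)
    for a in range(len(logits)):
        grad_logpi = -policy.copy()       # d/d logits of log pi(a|s) for softmax
        grad_logpi[a] += 1.0
        grad += policy[a] * adv[a] * grad_logpi
    return logits + lr * grad
```

Because negative advantages are clipped away, only actions whose counterfactual value exceeds the policy's expected value push the gradient, which is the intuition behind updating the policy "more accurately" in the regret-matching sense.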
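The imitation-learning step, which models the external agents from their historical decision trajectories, can be illustrated with a toy count-based behavior-cloning model. This is a sketch under the assumption of discrete observations and actions; `OpponentModel` is a hypothetical name, and the paper's approach would fit a learned (e.g., neural-network) imitation policy rather than frequency counts.

```python
from collections import Counter, defaultdict

class OpponentModel:
    """Count-based behavior cloning: a stand-in for the imitation-learned
    model of the external agents' policy."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, demonstrations):
        # demonstrations: iterable of (observation, action) pairs drawn from
        # the external agents' historical decision trajectories
        for obs, act in demonstrations:
            self.counts[obs][act] += 1

    def predict(self, obs):
        # most frequently observed external-agent action for this observation;
        # None if the observation was never seen in the demonstrations
        if not self.counts[obs]:
            return None
        return self.counts[obs].most_common(1)[0][0]
```

In a self-play loop, such a model would be refit after each iteration on the latest trajectories, so the target agents always train against an explicit, up-to-date estimate of the external agents' (possibly drifting) behavior.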