Abstract: Recent research on multi-turn dialogue generation has focused on RNN- or Transformer-based encoder-decoder architectures. However, most of these models ignore the influence of dialogue structure on response generation. To address this problem, this paper proposes to model dialogue structure with a graph neural network, thereby capturing the complex logic within a dialogue. We propose a text-similarity-based relation structure, a turn-switching-based relation structure, and a speaker-based relation structure for dialogue generation, and employ a graph neural network to propagate and iteratively update information across the dialogue context. Extensive experiments on the DailyDialog dataset show that the proposed model consistently outperforms baseline models on multiple metrics, indicating that the graph neural network can effectively capture various correlation structures in a dialogue and thus contributes to high-quality dialogue response generation.
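To make the graph construction concrete, the sketch below illustrates one plausible way to build the three relation types described above (text similarity, turn switching, and speaker identity) over utterance-level encodings and to run a single relation-aware message-passing step. This is not the authors' implementation: the utterance embeddings, the similarity threshold, the row normalization, and the layer form H' = ReLU(H W_0 + Σ_r A_r H W_r) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def build_relation_adjacency(utt_emb, speakers, sim_threshold=0.5):
    """Return one adjacency matrix per relation type for T utterances."""
    T = utt_emb.size(0)
    # Relation 1: text-similarity edges between utterances whose cosine
    # similarity exceeds a (hypothetical) threshold.
    sim = F.cosine_similarity(utt_emb.unsqueeze(1), utt_emb.unsqueeze(0), dim=-1)
    a_sim = (sim > sim_threshold).float()
    # Relation 2: turn-switching edges between consecutive utterances
    # spoken by different speakers.
    a_turn = torch.zeros(T, T)
    for t in range(T - 1):
        if speakers[t] != speakers[t + 1]:
            a_turn[t, t + 1] = a_turn[t + 1, t] = 1.0
    # Relation 3: speaker edges connecting all utterances of the same speaker.
    a_spk = (speakers.unsqueeze(0) == speakers.unsqueeze(1)).float()
    # Row-normalize each adjacency; self-loops keep isolated nodes stable.
    return [F.normalize(a + torch.eye(T), p=1, dim=1)
            for a in (a_sim, a_turn, a_spk)]

class RelationalGraphLayer(torch.nn.Module):
    """One relation-aware GNN layer: H' = ReLU(H W_0 + sum_r A_r H W_r)."""
    def __init__(self, dim, num_relations=3):
        super().__init__()
        self.self_loop = torch.nn.Linear(dim, dim)
        self.rel_proj = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim) for _ in range(num_relations)])

    def forward(self, h, adjs):
        out = self.self_loop(h)
        for a, proj in zip(adjs, self.rel_proj):
            out = out + a @ proj(h)   # aggregate neighbors per relation
        return torch.relu(out)

# Usage: 4 utterances with 16-dim encodings from two speakers (0 and 1).
utt_emb = torch.randn(4, 16)
speakers = torch.tensor([0, 1, 0, 1])
adjs = build_relation_adjacency(utt_emb, speakers)
layer = RelationalGraphLayer(dim=16)
context_states = layer(utt_emb, adjs)  # graph-enriched utterance states
```

Under these assumptions, the graph-enriched utterance states would then condition the decoder when generating the next response; the specific encoder, decoder, and number of graph layers are design choices left open by the abstract.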