Idiom Cloze Algorithm Integrating with Pre-trained Language Model
Author:
Affiliation:

Author biography:

Pattern Recognition and Artificial Intelligence

Corresponding author:

Sun Jieping, sunjieping@scu.edu.cn

CLC number:

TP18

Funding:

National Natural Science Foundation of China (61972270); Major Project of New Generation Artificial Intelligence of Sichuan Province (2018GZDZX0039); Key R&D Project of Sichuan Province (2019YFG0521)


Idiom Cloze Algorithm Integrating with Pre-trained Language Model
Author:
Affiliation:

Fund Project:

Abstract:

    Selecting the appropriate idiom according to its context is one of the important tasks in natural language processing. Existing research treats the idiom cloze task as a text matching problem. Although pre-trained language models achieve high accuracy on text matching, they have obvious drawbacks: on the one hand, when used as a feature extractor, a pre-trained language model loses the mutual information between sentences; on the other hand, when used as a text matcher, it incurs a large computational cost and long training and inference times. Moreover, the matching between the context and the candidate idioms is asymmetric, which limits the effectiveness of a pre-trained language model as a matcher. To address these two problems, TALBERT-blank is proposed based on the idea of parameter sharing. TALBERT-blank transforms idiom selection from an asymmetric matching process based on the context into a symmetric matching process between the blank and the candidate answers, using the pre-trained language model as both the feature extractor and the text matcher, and performing latent semantic matching on sentence vectors. This reduces the number of parameters and memory consumption, and improves training and inference speed while maintaining accuracy, yielding a lightweight and efficient model. Experimental results on the CHID dataset show that, as a matcher, TALBERT-blank substantially compresses the model structure compared with ALBERT while maintaining accuracy, further reducing computation time by 54.35%.

    Abstract:

    Selecting a suitable idiom according to its context is one of the crucial tasks in natural language processing (NLP). Existing research treats the Chinese idiom cloze task as a text matching problem. Although pre-trained language models achieve high accuracy on text matching, they have apparent defects: when a pre-trained language model is used as a feature extractor, it loses the mutual information between sentences; when used as a text matcher, it incurs a high computational cost and long training and inference times. In addition, the matching between the context and the candidate idioms is asymmetric, which weakens the effect of a pre-trained language model as a text matcher. To solve these two problems, this study draws on the idea of parameter sharing and proposes TALBERT-blank. TALBERT-blank transforms idiom selection from a context-based asymmetric matching process into a symmetric matching process between the blank and the candidate answers, employing the pre-trained language model as both feature extractor and text matcher, with latent semantic matching performed on the sentence vectors. This greatly reduces the number of parameters and memory consumption and improves training and inference speed while maintaining accuracy, yielding a lightweight and efficient model. Experimental results on the CHID dataset show that, compared with an ALBERT text matcher, TALBERT-blank compresses the model to a greater extent while maintaining accuracy and further shortens computation time by 54.35%.
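The symmetric matching idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: TALBERT-blank uses a parameter-shared ALBERT encoder for both sides of the match, whereas the toy `encode` below (a hypothetical stand-in) hashes character bigrams so the sketch runs without model weights. The symmetric structure, one shared encoder embedding both the blank's context and each candidate idiom, with cosine similarity ranking the candidates, is the part the sketch demonstrates.

```python
import zlib

import numpy as np


def encode(text: str, dim: int = 4096) -> np.ndarray:
    """Toy stand-in for the shared pre-trained encoder (hypothetical;
    TALBERT-blank would run one parameter-shared ALBERT on both sides).
    Hashes character bigrams into an L2-normalized bag-of-features vector."""
    vec = np.zeros(dim)
    for i in range(len(text) - 1):
        bigram = text[i:i + 2]
        vec[zlib.crc32(bigram.encode("utf-8")) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0.0 else vec


def pick_idiom(blank_context: str, candidates: list[str]) -> str:
    """Symmetric latent-semantic matching: the same encoder embeds the
    blank's context and every candidate idiom; cosine similarity (a dot
    product of unit vectors) ranks the candidates, and the best match wins."""
    ctx = encode(blank_context)
    scores = [float(ctx @ encode(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]


if __name__ == "__main__":
    context = "他做事向来丝毫不苟且,大家都很信任他"
    candidates = ["一丝不苟", "万马奔腾", "画蛇添足"]
    print(pick_idiom(context, candidates))
```

Because both sides pass through the same encoder, candidate representations can be precomputed once and reused, which is where the speed advantage over a cross-attention matcher (scoring every context-candidate pair jointly) comes from.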

Cite this article:

Ju Shenggen, Huang Fangyi, Sun Jieping. Idiom Cloze Algorithm Integrating with Pre-trained Language Model. Journal of Software, 2022, 33(10): 3793-3805.

History
  • Received: 2020-09-26
  • Revised: 2020-12-08
  • Accepted:
  • Published online: 2022-05-24
  • Published in print: 2022-10-06
Copyright: Institute of Software, Chinese Academy of Sciences
Address: 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190
Tel: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn