Journal of Software, 2021, 32(2): 327-348

Survey on Multimodal Visual Language Representation Learning
DU Peng-Fei, LI Xiao-Yong, GAO Ya-Li
(Key Laboratory of Trustworthy Distributed Computing and Service (Beijing University of Posts and Telecommunications), Ministry of Education, Beijing 100876, China; School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China)
Received: May 11, 2020    Revised: June 26, 2020
> Chinese abstract (translated): We live in a multimedia world built from a large volume of content in many different modalities, and the information carried by different modalities is highly correlated and complementary. The main goal of multimodal representation learning is to mine the commonalities and characteristics across modalities and to produce latent vectors that can represent multimodal information. This article mainly surveys the research on the currently widely used visual-language representations, including traditional methods based on similarity models and the currently mainstream pre-training methods based on language models. A promising current approach is to semanticize visual features and then fuse them with textual features through a powerful feature extractor to produce the representation; the Transformer, as the dominant feature extractor, has been applied across the various tasks of representation learning. The survey is organized around the research background, a taxonomy of the different research methods, evaluation methods, and future development trends.
Abstract: The multimedia world in which human beings live is built from a large amount of content in different modalities, and the information carried by different modalities is highly correlated and complementary. The main purpose of multimodal representation learning is to mine the commonalities and characteristics across modalities and to produce latent vectors that can represent multimodal information. This article mainly surveys the research on the currently widely used visual-language representations, including traditional methods based on similarity models and the current mainstream pre-training methods based on language models. A promising line of work is to semanticize visual features and then generate representations jointly with textual features through a powerful feature extractor; the Transformer is currently used as the mainstream architecture across the various tasks of representation learning. The article elaborates from several angles: research background, a taxonomy of the different research methods, evaluation methods, and future development trends.
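The fusion strategy the abstract describes — projecting visual features into the same embedding space as text tokens and running the concatenated sequence through a Transformer-style self-attention layer — can be sketched as follows. This is a minimal illustration with random features and hypothetical dimensions (36 detector regions, 20 text tokens, a 64-dimensional shared space), not the implementation of any specific model from the survey:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention (the core Transformer op)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise token affinities
    return softmax(scores) @ v                    # attention-weighted mixture

rng = np.random.default_rng(0)
d = 64                                  # shared embedding dimension (hypothetical)
visual = rng.normal(size=(36, 2048))    # e.g. 36 region features from a detector
text = rng.normal(size=(20, d))         # 20 text token embeddings

# "Semanticize" visual features: project them into the text embedding space,
# then treat regions and words as one joint token sequence.
w_proj = rng.normal(size=(2048, d)) * 0.01
tokens = np.concatenate([visual @ w_proj, text], axis=0)   # (56, d)

wq, wk, wv = (rng.normal(size=(d, d)) * 0.01 for _ in range(3))
fused = self_attention(tokens, wq, wk, wv)
print(fused.shape)  # (56, 64): every position attends across both modalities
```

Each output row mixes information from both modalities, which is what lets a single such stack serve as the "powerful feature extractor" for joint visual-language representations.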
Foundation items:National Natural Science Foundation of China (U1836215)
Reference text:
DU Peng-Fei, LI Xiao-Yong, GAO Ya-Li. Survey on Multimodal Visual Language Representation Learning. Journal of Software, 2021, 32(2): 327-348.