BIVM: Compilation Framework for Brain-inspired Computing and Prototype Research

CLC number: TP18

Fund projects: National Natural Science Foundation of China General Program (62072266); National Natural Science Foundation of China Original Exploration Program (62250006); National Natural Science Foundation of China Young Scientists Fund (62202254); Youth Innovation Fund of Beijing National Research Center for Information Science and Technology (BNR2022RC01003)



    摘要 (Abstract):

    Brain-inspired computing chips with various novel architectures keep emerging, and training/learning algorithms for spiking neural networks as well as the efficient simulation of biological neural networks are likewise research hotspots. However, how to run brain-inspired applications with different computation/memory-access characteristics efficiently on chips with widely divergent architectures is a key difficulty, and also central to building a healthy ecosystem for brain-inspired computing. The thriving ecosystem of general-purpose computing has shown that a flexible, scalable, and reusable compilation framework is an effective way to solve this problem. This study therefore proposes BIVM, a compilation framework for brain-inspired computing, together with a validation prototype. Based on the multi-level intermediate representation (MLIR) framework for domain-specific architectures (DSAs), BIVM designs multi-layer IRs customized for spiking neural networks, including an SNN dialect (high-level IR), middle-layer IRs composed mainly of MLIR's built-in dialects, and low-level IRs for the various chips. To handle the wide architectural span across brain-inspired chips and the differing granularity of the hardware functions they expose, BIVM fully exploits MLIR's progressivity: the designed IRs can mix different abstraction levels and concepts (for example, mixing fine-grained instructions with the coarse-grained, crossbar-centric operations of certain back-ends), which enables software-module reuse and simplifies development. On this basis, compilation optimizations at different levels are flexibly combined during the progressive lowering across the multi-layer IRs, including widely adopted SNN-specific optimizations (such as exploiting computing sparsity and mining spatiotemporal parallelism) and low-level optimizations adapted to the target hardware, so as to achieve high performance on different back-ends. At present, the back-ends supported by the BIVM prototype include general-purpose processors (control-flow architecture), an SNN accelerator chip with a hybrid control-flow/data-flow architecture (FPGA), and a data-flow brain-inspired chip based on ReRAM (resistive random-access memory; evaluated via software simulation); the prototype can optimize and compile intelligent applications and biological neural network simulation applications into executables adapted to chips of different architectures. Finally, an analysis of the adaptability of the compilation techniques and a performance comparison show that such a framework has good potential for high compilation productivity, portability, and performance.

    Abstract:

    Brain-inspired computing chips of various architectures are emerging, and the inference/training/learning algorithms of spiking neural networks (SNNs) and the efficient simulation of biological neural networks have become research hotspots. Meanwhile, efficiently executing applications with different computation/memory-access characteristics on various chips remains a significant challenge, which is crucial for establishing a robust brain-inspired computing ecosystem. The success of the general-purpose computing ecosystem indicates that a flexible, scalable, and reusable compiler infrastructure is an effective solution to this problem. This study proposes BIVM, a compilation framework for brain-inspired computing, along with its proof-of-concept implementation. Built on the multi-level intermediate representation (MLIR) framework for domain-specific architectures (DSAs), BIVM designs multi-layer IRs customized for SNNs, including an SNN dialect, middle-layer IRs composed mainly of MLIR's built-in dialects, and the underlying IRs for various target chips. To address challenges such as the large architectural differences among brain-inspired chips and the varying granularity of their hardware primitives, BIVM leverages MLIR's progressivity feature. This allows different abstraction levels and concepts to be mixed (e.g., combining fine-grained instructions with coarse-grained computation based on the crossbar structure specific to certain back-ends), enabling software-module reuse and reducing compiler development costs, ultimately leading to high productivity. In addition, the framework can flexibly combine compilation optimizations at various levels, including widely used SNN-specific optimizations (e.g., exploiting computing sparsity and improving parallelism) and low-level optimizations tailored to different back-ends, ensuring performance portability.
The current BIVM prototype supports back-ends such as general-purpose processors (control-flow architecture), SNN accelerator chips (FPGAs) with a hybrid control-/data-flow architecture, and data-flow chip designs based on ReRAM (resistive random-access memory, a widely-used neuromorphic device). It can optimize and compile deep SNN and biological neural network simulation applications into executables tailored for these chips. Comprehensive testing and performance comparisons demonstrate the potential of this compilation framework in achieving high productivity, portability, and performance.
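The progressive lowering described above, where an IR can mix fine-grained instructions with coarse-grained ops that map directly onto a back-end primitive such as a ReRAM crossbar, can be illustrated with a toy rewrite pass. This is a minimal sketch only: the op names (`snn.lif`, `crossbar.mvm`, `arith.*`) are hypothetical stand-ins in the MLIR spirit, not BIVM's actual dialects.

```python
from dataclasses import dataclass

# Toy progressive lowering: a high-level SNN op is expanded into
# fine-grained arithmetic ops, while a crossbar matrix-vector op is
# kept coarse-grained because the (hypothetical) ReRAM back-end
# executes it as a single hardware primitive.

@dataclass
class Op:
    name: str
    args: tuple = ()

def lower(op):
    """One lowering step: expand ops the back-end has no primitive for."""
    if op.name == "snn.lif":  # fine-grained expansion of a LIF neuron update
        return [Op("arith.mulf", ("v", "decay")),        # leak
                Op("arith.addf", ("v", "i_syn")),        # integrate
                Op("arith.cmpf", ("v", "v_th")),         # threshold -> spike
                Op("arith.select", ("spike", "0.0", "v"))]  # reset on spike
    return [op]  # e.g. crossbar.mvm stays coarse-grained

def lower_module(ops):
    return [low for op in ops for low in lower(op)]

# A module mixing abstraction levels after one lowering pass:
module = [Op("crossbar.mvm", ("w", "spikes")), Op("snn.lif", ("v",))]
lowered = lower_module(module)
```

After the pass, `lowered` holds the untouched coarse-grained `crossbar.mvm` alongside the fine-grained arithmetic ops, mirroring how progressivity lets different back-ends pick their own lowering depth.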
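The SNN-specific optimization of exploiting computing sparsity can likewise be sketched: because spikes are binary events, a synaptic update only needs to visit the columns of inputs that actually fired. The function names and parameters below are hypothetical illustrations, not BIVM's API.

```python
# Illustrative sketch of event-driven (sparsity-exploiting) synaptic
# accumulation for a leaky integrate-and-fire (LIF) layer, compared with
# the dense formulation it replaces. Hypothetical names/parameters.

def lif_step_dense(v, w, spikes_in, v_th=1.0, decay=0.9):
    """Dense formulation: every synapse is visited each timestep."""
    v_next, out = [], []
    for i, row in enumerate(w):
        u = decay * v[i] + sum(wij * s for wij, s in zip(row, spikes_in))
        fired = u >= v_th
        out.append(1.0 if fired else 0.0)
        v_next.append(0.0 if fired else u)   # reset membrane on spike
    return v_next, out

def lif_step_sparse(v, w, spikes_in, v_th=1.0, decay=0.9):
    """Event-driven formulation: only columns of active inputs are read."""
    active = [j for j, s in enumerate(spikes_in) if s != 0.0]
    v_next, out = [], []
    for i, row in enumerate(w):
        u = decay * v[i] + sum(row[j] for j in active)  # binary spikes
        fired = u >= v_th
        out.append(1.0 if fired else 0.0)
        v_next.append(0.0 if fired else u)
    return v_next, out
```

For binary spike trains the two formulations produce identical results, but the event-driven version's work scales with the number of spikes rather than the number of synapses, which is where the sparsity gain comes from.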

Cite this article:

杨乐, 刘晓义, 李广力, 渠鹏, 崔慧敏, 张悠慧. BIVM: Compilation framework for brain-inspired computing and prototype research. Journal of Software (软件学报), 2025, 36(10): 4768-4791

History
  • Received: 2023-05-22
  • Revised: 2023-08-01
  • Online: 2025-07-23
  • Published: 2025-10-06
Copyright: Institute of Software, Chinese Academy of Sciences