Logical Defect Detection for Large Language Model Synthesized Code in Software Supply Chain Security

Author:
Affiliation:

Author biography:

Corresponding author:

CLC number: TP311

Fund project: National Natural Science Foundation of China (62172210, 62172211)



    Abstract:

    With the rapid advancement of large language models (LLMs) in code synthesis, LLM-generated code is increasingly applied in intelligent foundational software supply chains, which integrate a large number of third-party modules and components developed from such code. However, since LLMs are primarily trained on open-source code, defects and security vulnerabilities in the training data may introduce potential errors into the generated code and security risks into the software supply chain. To address this, targeted testing techniques such as EvalPlus have been proposed, but their reliance on probability-based test-case generation makes it difficult to fully cover critical paths in the supply chain, so deep logical defects often go undetected. To overcome these limitations, we propose a defect detection method for LLM-generated code in software supply chains that integrates symbolic execution. The method employs a symbolic execution mounting mechanism to automatically identify the input parameters of LLM-generated code and perform adaptation and symbolic binding. It then guides the symbolic execution engine to conduct precise constraint analysis on the program's critical execution paths and to generate effective boundary test cases, which expose deep logical defects that existing methods often fail to detect. We evaluated our approach on the 11 top-ranked LLMs in the LMSYS Chatbot Arena over mainstream benchmark datasets. The results show that our method detects logical defects in LLM-generated code more effectively, reducing the average test pass rate by 3.99% to 18.98% and improving average test coverage by 3.31% to 8.19% compared with existing approaches. These findings demonstrate the method's effectiveness in improving the correctness of LLM-generated code and the security of intelligent foundational software supply chains.
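As a rough illustration of the mounting idea described in the abstract (not the paper's actual implementation, which drives a real symbolic execution engine), the following Python sketch identifies a generated function's input parameters via its signature, binds toy "symbolic" values that record the comparison thresholds seen on the executed path, and derives boundary test inputs around each threshold. The names `Sym`, `boundary_inputs`, and the sample function `is_passing` are all hypothetical, invented for this sketch.

```python
import inspect

class Sym:
    """Toy symbolic integer: records every comparison threshold seen on
    the executed path into a shared trace set."""
    def __init__(self, value, trace):
        self.value = value
        self.trace = trace            # shared set of observed thresholds
    def __lt__(self, other):
        self.trace.add(other)
        return self.value < other
    def __le__(self, other):
        self.trace.add(other + 1)     # approximate boundary of "<="
        return self.value <= other
    def __gt__(self, other):
        self.trace.add(other + 1)     # approximate boundary of ">"
        return self.value > other
    def __ge__(self, other):
        self.trace.add(other)
        return self.value >= other

def boundary_inputs(func, probe=0):
    """Identify the function's parameters, mount symbolic values, run it
    once, and return candidate inputs straddling each branch threshold."""
    params = list(inspect.signature(func).parameters)
    trace = set()
    func(*[Sym(probe, trace) for _ in params])
    cands = set()
    for t in trace:
        cands.update({t - 1, t, t + 1})   # values around each boundary
    return sorted(cands)

# Hypothetical LLM-generated function with an off-by-one logic defect:
# suppose the spec says a score of exactly 60 passes, but the generated
# code uses a strict comparison.
def is_passing(score):
    return score > 60

for x in boundary_inputs(is_passing):
    print(x, is_passing(x))           # 60 is misclassified as failing
```

A real engine would solve path constraints with an SMT solver (e.g., Z3) instead of this single-path threshold heuristic, but the sketch shows why solver-derived boundary inputs such as 60 and 61 expose logic defects that random or probability-based test generation is likely to miss.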

Cite this article:

赵祖威, 汤恩义, 李薛成, 戴新宇, 陈鑫, 李宣东. Logical Defect Detection for Large Language Model Synthesized Code in Software Supply Chain Security. Journal of Software, 2026, 37(7).
History
  • Received: 2025-09-08
  • Last revised: 2025-10-20
  • Accepted:
  • Published online: 2025-12-26
  • Publication date:
Copyright © Institute of Software, Chinese Academy of Sciences
Address: 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190
Tel: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn