Interactive Gait Training Method with Audiovisual Synergy

Fund projects: National Key R&D Program of China (2024YFB2808801); National Natural Science Foundation of China (62332003, 82371254); Major Project of the Institute of Software, Chinese Academy of Sciences (ISCAS-ZD-202401)



Abstract:

Parkinson's disease (PD) affects nearly 10 million people worldwide and currently has no cure, but evidence-based medicine indicates that training guided by sensory cues can slow disease progression. However, most existing studies rely on a single cue modality and lack sensing of and feedback to the user. This paper therefore proposes an audiovisual multimodal gait training method that generates and dynamically adjusts multimodal cues from the user's gait data, and investigates its feasibility for assisting rehabilitation in early-stage PD. The method first constructs a multimodal cue generation framework that computes gait-cycle and step-height parameters from the user's gait data and generates coordinated visual and auditory cues. An interactive intervention training system is then built that dynamically adjusts the audiovisual cues according to changes in the user's gait, realizing an interactive iteration between user perception and multimodal cue generation. Finally, 40 patients with early-stage PD (H&Y ≤ 2) were recruited in the clinic for experiments. Compared with the control group, the audiovisual synergy group showed the best improvement; compared with the baseline, its gait symmetry improved by an average of 20.776% (p=0.0001) during training and 21.157% (p=0.0001) after training, and the walking speed of the affected side improved by an average of 33.924% (p=0.0001) during training and 36.433% (p<0.0001) after training. The results also show that coordinated audiovisual cues help patients improve gait performance more quickly and more durably. The proposed training method, which generates multimodal cues from gait data, offers a new approach to establishing a quantitatively driven precision rehabilitation paradigm and promotes the application and development of multimodal interaction technology in the medical field.
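The abstract does not disclose implementation details of the cue generation framework. The sketch below (plain Python; the sensor inputs, function names, gains, and target values are hypothetical assumptions, not the authors' algorithm) only illustrates the general idea described above: estimating gait-cycle and step-height parameters from gait data, then nudging the auditory beat interval and the visual stepping targets toward target values as the user's gait changes.

    # Illustrative sketch only: assumes consecutive heel-strike timestamps of the
    # same foot (seconds) and a vertical foot-clearance trace (metres) are
    # available from gait sensing; all thresholds and gains are made up.
    from statistics import mean


    def gait_parameters(heel_strikes, foot_heights):
        """Estimate mean gait-cycle duration and step height from raw gait data."""
        cycles = [t2 - t1 for t1, t2 in zip(heel_strikes, heel_strikes[1:])]
        cycle_duration = mean(cycles)                         # seconds per gait cycle
        step_height = max(foot_heights) - min(foot_heights)   # foot clearance in metres
        return cycle_duration, step_height


    def adapt_audio_cue(current_cycle, target_cycle, prev_interval, gain=0.3):
        """Move the metronome interval of the auditory cue a step toward the target cadence."""
        return prev_interval + gain * (target_cycle - current_cycle)


    def adapt_visual_cue(step_height, target_height, prev_spacing, gain=0.3):
        """Adjust the spacing of projected visual stepping targets toward the target clearance."""
        return prev_spacing + gain * (target_height - step_height)


    # One interactive iteration: sense gait, then adjust both cues.
    heel_strikes = [0.00, 1.20, 2.45, 3.65]   # hypothetical timestamps
    foot_heights = [0.02, 0.09, 0.03, 0.11]   # hypothetical clearance trace
    cycle, height = gait_parameters(heel_strikes, foot_heights)
    beat = adapt_audio_cue(cycle, target_cycle=1.0, prev_interval=cycle)
    marker = adapt_visual_cue(height, target_height=0.12, prev_spacing=0.10)
    print(f"cycle={cycle:.2f}s step_height={height:.3f}m beat={beat:.2f}s marker={marker:.3f}m")

In the paper's actual system, such an adjustment loop would run continuously, closing the interactive iteration between user perception and cue generation that the abstract describes.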

Cite this article:

李信金, 王文婕, 王凯, 脱厚珍, 王诗雅, 孙伟, 谭小慧, 田丰. 视听协同的交互式步态干预训练方法 [Interactive gait training method with audiovisual synergy]. 软件学报 (Journal of Software), 2026, 37(5).

History
  • Received: 2025-05-24
  • Revised: 2025-07-11
  • Published online: 2025-09-23