Computer Engineering (计算机工程)


Multi-Domain Collaborative Decoding of SSVEP with Adaptive Visual Distraction Compensation

  • Published: 2025-04-07


Abstract: Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) face a classification performance bottleneck caused by individual variability and interference from non-target stimuli, and existing methods have not yet quantified the relationship between visual distraction and inter-individual variability. To address this, we propose a multi-domain collaborative decoding algorithm for SSVEP with adaptive visual distraction compensation, comprising two core components: an adaptive label smoothing technique for visual distraction mitigation and a multi-domain joint decoding model. First, drawing on visual crowding theory, an adaptive quantification model links signal amplitude to label noise: by dynamically adjusting the label smoothing intensity, it quantifies each individual's level of visual distraction while mitigating overfitting, suppressing non-target stimulus interference, and reducing the adverse effects of inter-subject variability. Second, the multi-domain joint decoding model establishes deep temporal-frequency-spatial synergy through a hierarchical feature-extraction framework and employs a bidirectional long short-term memory network (Bi-LSTM) to model global temporal dependencies, yielding composite features that combine local receptive fields with long-range contextual awareness. Experiments on three public SSVEP datasets under two time windows (0.5 s and 1.0 s) show that the algorithm achieves higher average classification accuracy and information transfer rate (ITR) than competing methods in all settings. Ablation studies confirm the efficacy of the adaptive visual distraction compensation mechanism across all settings; under the short (0.5 s) time window in particular, accuracy improves by up to 18 percentage points. This work provides a new methodological framework for individualized adaptation and spatio-spectral-temporal feature fusion in neural decoding.
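The abstract describes label smoothing whose strength is modulated by the SSVEP signal amplitude, but does not give the exact formula. A minimal sketch of the idea, assuming (as an illustration, not the paper's actual model) that the smoothing coefficient decays exponentially with normalized response amplitude, so weaker responses (stronger visual distraction) receive softer labels:

```python
import numpy as np

def adaptive_label_smoothing(one_hot, amplitude, amp_ref=1.0, eps_max=0.2):
    """Hypothetical sketch of amplitude-adaptive label smoothing.

    one_hot   : (n_classes,) hard label vector for one trial.
    amplitude : scalar SSVEP response amplitude for that trial.
    amp_ref, eps_max : illustrative constants, not from the paper.
    """
    n = one_hot.size
    # Smoothing grows as the response amplitude drops, modelling
    # heavier label noise under visual crowding/distraction.
    eps = eps_max * np.exp(-amplitude / amp_ref)
    return one_hot * (1.0 - eps) + eps / n

# Example: a weak-amplitude trial of a 4-class paradigm gets softer targets.
hard = np.zeros(4)
hard[1] = 1.0
soft = adaptive_label_smoothing(hard, amplitude=0.5)
```

The resulting `soft` vector still sums to 1 and keeps the target class dominant, but assigns nonzero mass to non-target classes in proportion to the estimated distraction level.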
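The information transfer rate reported in the experiments is, by convention in SSVEP-BCI work, the Wolpaw ITR, which depends on the number of classes, the classification accuracy, and the selection-time window. A self-contained sketch (the exact per-selection time used in the paper, e.g. whether gaze-shift time is added to the window, is an assumption here):

```python
import math

def itr_bits_per_min(n_classes, accuracy, window_s):
    """Wolpaw ITR in bits/min for an n-class selection task.

    window_s is the time per selection; papers often add gaze-shift
    time to the stimulation window, which we omit here for simplicity.
    """
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)           # perfect accuracy: full log2(N) bits
    elif p <= 1.0 / n:
        bits = 0.0                    # at or below chance: no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / window_s

# Example: a 40-target SSVEP speller decoded at 90% accuracy in 1.0 s.
itr_1s = itr_bits_per_min(40, 0.9, 1.0)
```

This shows why short time windows dominate ITR comparisons: halving `window_s` at equal accuracy doubles the ITR, which is consistent with the paper's emphasis on the 0.5 s setting.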