Poster #86 - Hua (Tiffany) Lin
- vitod24
- Oct 20
- 1 min read
Enhanced EEG-Enabled Auditory Attention Detection Based on Multi-Modal Contrastive Learning with Graph Structure
Hua Lin - Student Researcher, Roslyn High School, Roslyn, NY, USA
Xianzhang Zeng, Ph.D. - Research Mentor, South China University of Technology
As life expectancy increases, hearing impairment has become a growing health concern affecting millions of people. Conventional hearing aids struggle to isolate target sounds in noisy environments, significantly degrading daily communication for people with hearing disabilities.

This study introduces a novel graph-based multimodal contrastive learning network that uses electroencephalogram (EEG) signals to detect the focus of auditory attention. I preprocessed the data to remove noise from the EEG signals and converted the speech audio into mel-spectrograms. I then integrated two artificial intelligence architectures: Graph Convolutional Networks (GCNs) to model the spatial relationships between EEG electrodes, and Residual Networks (ResNets) to extract features from the mel-spectrograms. Under a self-supervised objective, the model uses contrastive learning to maximize the similarity between EEG signals and their corresponding mel-spectrograms while minimizing the similarity between non-corresponding pairs. This strengthens the model's ability to discriminate features across modalities while maintaining computational efficiency and robust performance.

Evaluations across multiple public datasets showed that the model achieved an F1-score of 94.2% and an accuracy of 94.3%, substantially outperforming existing methods. This study validates the effectiveness of graph-based multimodal contrastive learning for enhancing auditory attention detection, offering promising directions for assistive hearing technologies and potentially improving communication for individuals with hearing impairments across diverse environments.
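To make the graph-convolution idea concrete, here is a minimal sketch of one GCN layer applied to EEG electrode features. This is a toy illustration, not the author's implementation: the function name `gcn_layer`, the 4-electrode adjacency matrix, and all dimensions are hypothetical, and the propagation rule shown is the standard symmetric-normalized form.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    H: (nodes, in_features) electrode feature matrix
    A: (nodes, nodes) adjacency matrix (e.g., from electrode proximity)
    W: (in_features, out_features) learnable weights
    """
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)                           # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))          # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy example: 4 electrodes with 8 features each, adjacency linking neighbors
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))
out = gcn_layer(H, A, W)    # shape (4, 16): each electrode now mixes neighbors
```

Each layer lets an electrode's representation absorb information from spatially adjacent electrodes, which is why a graph structure suits EEG montages better than treating channels independently.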
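The contrastive objective described above (pulling matching EEG/mel-spectrogram pairs together, pushing mismatched pairs apart) can be sketched with a symmetric InfoNCE-style loss. This is an illustrative NumPy sketch under stated assumptions, not the author's code: it assumes both encoders already produced L2-normalized embeddings, and the names `info_nce_loss`, `eeg_emb`, `mel_emb`, and `temperature` are hypothetical.

```python
import numpy as np

def info_nce_loss(eeg_emb, mel_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    Matching (EEG, mel) pairs sit on the diagonal of the similarity
    matrix and act as positives; every off-diagonal pair is a negative.
    """
    logits = eeg_emb @ mel_emb.T / temperature        # (B, B) cosine sims
    labels = np.arange(len(eeg_emb))                  # positives on diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)       # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # Average over both directions: EEG -> mel and mel -> EEG
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Simulate paired embeddings: two noisy views of the same latent content
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 32))
eeg = normalize(base + 0.1 * rng.normal(size=base.shape))
mel = normalize(base + 0.1 * rng.normal(size=base.shape))

aligned = info_nce_loss(eeg, mel)                     # correct pairing
shuffled = info_nce_loss(eeg, mel[rng.permutation(8)])  # broken pairing
```

Correctly paired batches yield a much lower loss than shuffled ones, which is exactly the training signal that lets the network learn which audio stream the EEG is attending to, without manual labels.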

