Novel Approaches for EEG-based Auditory Attention Decoding
Dnr:

NAISS 2024/22-283

Type:

NAISS Small Compute

Principal Investigator:

Bo Bernhardsson

Affiliation:

Lunds universitet

Start Date:

2024-03-01

End Date:

2025-03-01

Primary Classification:

10206: Computer Engineering

Abstract

This application aims to advance research on the use of EEG signals for brain-computer interfaces, with a focus on applications such as hearing aids. This area forms the core of my research, carried out in collaboration with my PhD students (2024). In the next phase of the project, the computational resources will support PhD research activities and facilitate collaboration with Martin Skoglund and Emina Alickovic at Linköping University, utilizing data from Eriksholm/Oticon, a leading research center in auditory technology. Additionally, Master of Science thesis students will contribute by developing and training advanced machine learning models, including transformers, deep network architectures, and diffusion models for EEG data augmentation, specifically in relation to auditory data. The goal of these theses is to enhance neuro-steered hearing devices and to explore optimal training and optimization techniques for these machine learning models in this context. This effort is a partnership with Eriksholm/Oticon, leveraging their leading-edge research capabilities. The project aims to extend the boundaries of auditory attention decoding by integrating it with innovative methods from image processing and large language models, building on our previous achievements in this interdisciplinary field.

References

[1] J. A. O'Sullivan, A. J. Power, N. Mesgarani, S. Rajaram, J. J. Foxe, B. G. Shinn-Cunningham, M. Slaney, S. A. Shamma, and E. C. Lalor, "Attentional selection in a cocktail party environment can be decoded from single-trial EEG," Cerebral Cortex, vol. 25, no. 7, pp. 1697–1706, 2015.

[2] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020.

[3] Y. Duan, J. Zhou, Z. Wang, Y.-C. Chang, Y.-K. Wang, and C.-T. Lin, "Domain-specific denoising diffusion probabilistic models for brain dynamics," arXiv preprint arXiv:2305.04200, 2023.

[4] E. Alickovic et al., "A tutorial on auditory attention identification methods," Frontiers in Neuroscience, vol. 13, Mar. 2019.

[5] M. Thornton, D. Mandic, and T. Reichenbach, "Robust decoding of the speech envelope from EEG recordings through deep neural networks," Journal of Neural Engineering, vol. 19, no. 4, p. 046007, July 2022. doi: 10.1088/1741-2552/ac7976.

[6] S. Geirnaert et al., "Electroencephalography-based auditory attention decoding: Toward neuro-steered hearing devices," IEEE Signal Processing Magazine, vol. 38, no. 4, pp. 89–102, 2021. doi: 10.1109/MSP.2021.3075932.
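
As a minimal illustration of the stimulus-reconstruction approach to auditory attention decoding surveyed in [1], [4], and [6], the Python sketch below trains a ridge-regularized linear backward model that maps time-lagged EEG to a speech envelope, then classifies attention by correlating the reconstructed envelope with the envelopes of the two competing speakers. All data, dimensions, and parameter values here are synthetic placeholders chosen for illustration; it is not the project's actual pipeline, and real studies train and evaluate decoders on separate trials.

import numpy as np

def lagged(eeg, n_lags):
    # Build a design matrix of time-lagged EEG samples: (samples, channels * n_lags).
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=32, reg=1e3):
    # Ridge-regularized backward model: lagged EEG -> attended speech envelope.
    X = lagged(eeg, n_lags)
    XtX = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def decode_attention(eeg, env_a, env_b, weights, n_lags=32):
    # Reconstruct the envelope from EEG and pick the speaker whose envelope correlates more.
    rec = lagged(eeg, n_lags) @ weights
    corr_a = np.corrcoef(rec, env_a)[0, 1]
    corr_b = np.corrcoef(rec, env_b)[0, 1]
    return ("A" if corr_a > corr_b else "B"), corr_a, corr_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, dur, n_ch = 64, 60, 16                       # toy values: 64 Hz EEG, 60 s trial, 16 channels
    env_a = rng.standard_normal(fs * dur)            # attended speech envelope (synthetic)
    env_b = rng.standard_normal(fs * dur)            # ignored speech envelope (synthetic)
    # Synthetic EEG that carries the attended envelope plus noise.
    eeg = np.outer(env_a, rng.standard_normal(n_ch)) + 0.5 * rng.standard_normal((fs * dur, n_ch))
    w = train_decoder(eeg, env_a)
    print(decode_attention(eeg, env_a, env_b, w))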