
EbookNice.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, .fb2, etc. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link: https://ebooknice.com/page/post?id=faq


We offer FREE conversion to the popular formats you request; however, this may take some time. Therefore, right after payment, please email us, and we will try to provide the service as quickly as possible.
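If you prefer to convert a file yourself, Calibre ships a command-line tool, ebook-convert, that handles the formats listed above. The short Python sketch below simply calls that tool; the file names are placeholders and Calibre must already be installed with its command-line tools on PATH.

import subprocess
from pathlib import Path

def convert_ebook(src: str, dst: str) -> None:
    """Convert an ebook with Calibre's ebook-convert tool (must be on PATH).
    The source and target formats are inferred from the file extensions."""
    if not Path(src).exists():
        raise FileNotFoundError(src)
    subprocess.run(["ebook-convert", src, dst], check=True)

# Example (placeholder file names): turn an EPUB into a PDF.
convert_ebook("book.epub", "book.pdf")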


For some exceptional file formats or broken links (if any), please refrain from opening any disputes. Instead, email us first, and we will try to assist within a maximum of 6 hours.

EbookNice Team

Seeing helps hearing: A multi-modal dataset and a mamba-based dual branch parallel network for auditory attention decoding by Cunhang Fan instant download

  • SKU: EBN-239238344
Price: $32 (list price $40, 20% off)

Status: Available

Rating: 0.0 (0 reviews)
Instant download of the eBook "Seeing helps hearing: A multi-modal dataset and a mamba-based dual branch parallel network for auditory attention decoding" after payment.
Authors: Cunhang Fan
Pages: updating ...
Year: 2025
Publisher: x
Language: English
File Size: 2.53 MB
Format: PDF
Categories: Ebooks

Product description

Seeing helps hearing: A multi-modal dataset and a mamba-based dual branch parallel network for auditory attention decoding by Cunhang Fan instant download

Information Fusion, 118 (2025) 102946. doi:10.1016/j.inffus.2025.102946

EEG-based auditory attention decoding (AAD) aims to identify the attended speaker from the listener's EEG signals. Existing datasets mainly focus on auditory stimuli, ignoring real-world multi-modal inputs. To address this, a new multi-modal AAD dataset (MM-AAD) is constructed, representing the first dataset to include audio–visual stimuli. Additionally, prior studies mostly extract single-domain features, neglecting complementary temporal and frequency domain information, which can perform well in the within-trial setting but poorly in the cross-trial setting. Therefore, a framework called Mamba-based dual branch parallel network (M-DBPNet) is proposed, effectively fusing temporal and frequency domain features. By adding Mamba, temporal features in time sequence signals are better extracted. Experimental results show that Mamba enhances decoding performance in the within-trial setting with fewer parameters and demonstrates strong generalization in the cross-trial setting. Visualization analysis indicates that visual stimuli strengthen evoked responses and activation in temporal and occipital lobes, enhancing auditory perception and decoding performance.
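To make the dual-branch idea above concrete, the sketch below pairs a temporal branch over the raw EEG sequence with a frequency branch over its magnitude spectrum and fuses the two for a binary attended-speaker decision. It is an illustration only, not the authors' M-DBPNet: the GRU stands in for the Mamba block, and the channel count, window length, and layer sizes are assumptions.

import torch
import torch.nn as nn

class DualBranchAADSketch(nn.Module):
    """Illustrative dual-branch decoder: one branch on the raw EEG time series,
    one on its magnitude spectrum, fused for a binary attended-speaker decision.
    A GRU stands in for the paper's Mamba block; all sizes are placeholders."""

    def __init__(self, n_channels: int = 32, n_classes: int = 2, hidden: int = 64):
        super().__init__()
        # Temporal branch: sequence model over EEG samples (Mamba in the paper).
        self.temporal = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        # Frequency branch: simple MLP over per-channel spectral magnitudes.
        self.freq = nn.Sequential(nn.LazyLinear(hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, channels, time)
        _, h = self.temporal(eeg.transpose(1, 2))   # run over the time axis
        t_feat = h[-1]                              # (batch, hidden) temporal features
        spec = torch.fft.rfft(eeg, dim=-1).abs()    # (batch, channels, freq bins)
        f_feat = self.freq(spec.flatten(1))         # (batch, hidden) frequency features
        return self.classifier(torch.cat([t_feat, f_feat], dim=1))

# Quick shape check on dummy data: 4 trials, 32 EEG channels, 128 samples.
model = DualBranchAADSketch()
dummy = torch.randn(4, 32, 128)
print(model(dummy).shape)  # torch.Size([4, 2])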

*Free conversion into popular formats such as PDF, DOCX, DOC, AZW, EPUB, and MOBI after payment.
