AUTHOR=Kuruvila Ivine, Muncke Jan, Fischer Eghart, Hoppe Ulrich
TITLE=Extracting the Auditory Attention in a Dual-Speaker Scenario From EEG Using a Joint CNN-LSTM Model
JOURNAL=Frontiers in Physiology
VOLUME=12
YEAR=2021
URL=https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2021.700655
DOI=10.3389/fphys.2021.700655
ISSN=1664-042X
ABSTRACT=The human brain performs remarkably well at segregating a particular speaker from interfering speakers in a multi-speaker scenario. This segregation capability can be quantitatively evaluated by modelling the relationship between the speech signals present in an auditory scene and the listener's cortical signals measured using electroencephalography (EEG). This has opened up avenues to integrate neuro-feedback into hearing aids, where the device can infer the user's attention and enhance the attended speaker. Commonly used algorithms for inferring auditory attention are based on linear systems theory, in which cues such as speech envelopes are mapped onto the EEG signals. Here, we present a joint convolutional neural network (CNN)-long short-term memory (LSTM) model to infer auditory attention. The joint CNN-LSTM model takes the EEG signals and the spectrograms of multiple speakers as inputs and classifies the attention to one of the speakers. We evaluated the reliability of our network using three datasets comprising 61 subjects in total, where each subject undertook a dual-speaker experiment. The three datasets corresponded to speech stimuli presented in three different languages, namely German, Danish, and Dutch. Using the proposed joint CNN-LSTM model, we obtained a median decoding accuracy of 77.2% at a trial duration of three seconds. Furthermore, we evaluated the amount of sparsity that the model can tolerate by means of magnitude pruning and found a tolerance of up to 50% sparsity without substantial loss of decoding accuracy.
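
The magnitude pruning mentioned in the abstract zeroes out the weights with the smallest absolute values up to a target sparsity level. The paper does not publish its pruning code here, so the following is a minimal NumPy sketch of the generic technique, with the function name and threshold rule chosen for illustration rather than taken from the authors' implementation:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the fraction `sparsity` of
    entries having the smallest absolute values set to zero
    (global magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold;
    # ties at the threshold are also pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: pruning 50% of a small weight matrix removes the two
# entries with the smallest magnitudes.
w = np.array([[0.1, -0.5],
              [2.0, -0.05]])
print(magnitude_prune(w, 0.5))
```

In the paper's experiment, applying this kind of pruning at up to 50% sparsity left the decoding accuracy largely intact.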