Background and overall aims:

Recent research in auditory neuroscience has produced quantitative models that effectively predict peripheral and cortical responses to natural sounds. Building on these promising steps towards speech recognition via optimised hierarchical neural networks, we aim to improve such models by training them on neuroimaging data collected from normal-hearing individuals. This project, which combines human data with computational modelling, will create a modular neuro-engineering application that can easily be modified in the future to mimic different types of hearing impairment.

As a starting point, an algorithm performing the essential operations of the auditory periphery will be created (an ear and auditory-nerve simulator). The second step will be a machine learning algorithm able to encode central responses to standardised object-based audio. A partially available neuroimaging database of passive EEG and fNIRS responses to audio objects will serve as training material.

At completion of the project, the application will be flexible enough to be retrained according to the hearing pathology we want to simulate. For example, in patients with auditory neuropathy, backtracking of temporal encoding along the auditory pathway could be used to selectively distort the model's inputs and reproduce the pathological output. Treatments via cochlear implants or hearing aids can then be integrated easily, significantly accelerating the translation from concept to clinical application.
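As a rough illustration of the first step, the ear and auditory-nerve front end could be sketched as a filterbank followed by rectification and compression. Everything below is an illustrative assumption, not the project's actual design: the function names, the 16-channel default, and the Gaussian frequency-domain filters (a crude stand-in for established gammatone or transmission-line cochlear models).

```python
import numpy as np

def erb_space(low, high, n):
    # ERB-rate spacing of centre frequencies (Glasberg & Moore constants)
    ear_q, min_bw = 9.26449, 24.7
    lo = np.log(low + ear_q * min_bw)
    hi = np.log(high + ear_q * min_bw)
    return np.exp(np.linspace(lo, hi, n)) - ear_q * min_bw

def periphery_frontend(x, fs, n_channels=16, f_lo=100.0, f_hi=4000.0):
    """Crude cochlea + auditory-nerve sketch: FFT-domain bandpass
    filterbank -> half-wave rectification (inner hair cell) ->
    power-law compression. Purely illustrative."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spectrum = np.fft.rfft(x)
    cfs = erb_space(f_lo, f_hi, n_channels)
    out = np.empty((n_channels, len(x)))
    for i, cf in enumerate(cfs):
        bw = 24.7 + cf / 9.26449                        # ERB bandwidth at cf
        gain = np.exp(-0.5 * ((freqs - cf) / bw) ** 2)  # Gaussian bandpass
        band = np.fft.irfft(spectrum * gain, n=len(x))
        band = np.maximum(band, 0.0)                    # half-wave rectification
        out[i] = band ** 0.3                            # compressive nonlinearity
    return cfs, out

# Minimal usage: a 1 kHz tone should excite the channel tuned nearest 1 kHz
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)
cfs, rates = periphery_frontend(tone, fs)
print(rates.shape)  # (16, 16000)
```

The output is a channels-by-time matrix of non-negative "firing rate" proxies, which is the kind of representation a downstream encoding model could be trained against.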

General methods to be used in the project:

  • Acoustics
  • High-density EEG and/or fNIRS
  • GAN-based models for simulation or data generation
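The GAN bullet above could be prototyped, in deliberately reduced form, as follows. All of it is an illustrative assumption rather than the project's architecture: the 1-D "response amplitude" data, the linear generator and discriminator (chosen so the hand-derived gradients stay exact and readable), and the parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sample_real(n):
    # Hypothetical 1-D "neural response amplitudes", N(2.0, 0.5)
    return rng.normal(2.0, 0.5, n)

# Generator G(z) = a*z + c; discriminator D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0          # generator parameters
w, b = 0.1, 0.0          # discriminator parameters
lr = 0.05

for _ in range(3000):
    x_real = sample_real(64)
    z = rng.standard_normal(64)
    x_fake = a * z + c

    # Discriminator step: maximise log D(real) + log(1 - D(fake))
    s_r = sigmoid(w * x_real + b)
    s_f = sigmoid(w * x_fake + b)
    grad_w = np.mean((s_r - 1.0) * x_real) + np.mean(s_f * x_fake)
    grad_b = np.mean(s_r - 1.0) + np.mean(s_f)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: minimise -log D(fake) (non-saturating loss)
    s_f = sigmoid(w * (a * z + c) + b)
    grad_a = np.mean((s_f - 1.0) * w * z)
    grad_c = np.mean((s_f - 1.0) * w)
    a -= lr * grad_a
    c -= lr * grad_c

fake = a * rng.standard_normal(10000) + c
print(float(fake.mean()))  # mean of generated samples; training pushes it toward the real mean
```

In the project itself the generator would of course be a deep network producing simulated EEG/fNIRS responses rather than scalars, but the adversarial training loop has exactly this structure.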

Suitable background of students: engineering graduate, Matlab skills, signal processing, and a strong interest in machine learning.

Supervisor: Professor Gérard Loquet

For all student queries please email: [email protected]