The 8th Christian Benoît Award was presented to Mathilde Fort from the Center for Brain and Cognition at Universitat Pompeu Fabra, Barcelona. Mathilde's project, entitled "InfEyMo" (Infant learning from the Eyes and the Mouth of a talking face), aims at characterizing the mechanisms by which early bilingualism (that is, language-specific experience) affects the way infants explore talking faces, in relation to audiovisual speech perception. More specifically, the planned studies will measure how much infants rely on redundant audiovisual speech cues and how this affects the way they learn from talking heads. To address these questions, the project involves eye-tracking studies with infants growing up in monolingual and bilingual environments. The project is carried out in the frame of Mathilde's post-doctoral fellowship at Universitat Pompeu Fabra and benefits from the babylab of the Center for Brain and Cognition in Catalonia, Spain, where Spanish-Catalan bilingualism is extremely frequent. Mathilde Fort officially received her Award during the closing ceremony of Interspeech 2015 in Dresden in September 2015.
The 7th Christian Benoît Award was presented to Samer Al Moubayed from the Centre for Speech Technology, KTH Royal Institute of Technology, Stockholm. His project aims at developing a system architecture and a demonstrator based on the recently developed Furhat robot head, which integrates facial animation and visual speech synthesis. The goal is to provide a platform for research on situated, multiparty audio-visual interaction. Samer Al Moubayed officially received his Award during the closing ceremony of Interspeech 2013 in Lyon in August 2013.
The sixth Christian Benoît Award was presented during the closing ceremony of the Interspeech 2011 conference in Firenze to Thomas Hueber from the Speech and Cognition Department of GIPSA-Lab in Grenoble. The goal of his Ultraspeech II project is to build a real-time prototype of a silent speech interface (SSI), i.e. a device allowing speech communication without the need to vocalize. An SSI could be used in situations where silence is required (such as a silent cell phone) or for communication in very noisy environments. Further applications are possible in the medical field: for example, an SSI could offer laryngectomized patients an alternative to the electrolarynx, which produces a very robotic voice; to oesophageal speech, which is difficult to master; or to tracheo-oesophageal speech, which requires additional surgery. The system is based on the analysis of vocal tract configuration during silent articulation using ultrasound and video imaging. Articulatory movements are captured by a non-invasive multimodal imaging system composed of an ultrasound transducer placed beneath the chin and a video camera in front of the lips. Articulatory features extracted from the visual data are converted into audible speech using statistical mapping techniques (ANN/GMM/HMM).
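To make the statistical-mapping idea concrete, here is a toy sketch of the ANN variant of such a mapping: a small feedforward network trained to regress acoustic feature vectors from articulatory feature vectors. All dimensions, data, and hyperparameters below are invented stand-ins for illustration; this is not the Ultraspeech II implementation, only a minimal NumPy example of articulatory-to-acoustic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: per-frame articulatory features (e.g. extracted
# from ultrasound/lip images) mapped to acoustic features (e.g. spectral
# coefficients). Purely illustrative values.
N_ART, N_AC, N_FRAMES = 12, 8, 500

# Synthetic stand-in data: a hidden nonlinear articulatory-to-acoustic
# relation, corrupted with a little noise.
X = rng.normal(size=(N_FRAMES, N_ART))
W_true = rng.normal(size=(N_ART, N_AC))
Y = np.tanh(X @ W_true) + 0.05 * rng.normal(size=(N_FRAMES, N_AC))

# One-hidden-layer network trained with plain gradient descent on MSE.
H, lr = 32, 0.05
W1 = rng.normal(scale=0.1, size=(N_ART, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, N_AC)); b2 = np.zeros(N_AC)

losses = []
for step in range(300):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    Y_hat = h @ W2 + b2               # predicted acoustic features
    err = Y_hat - Y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / N_FRAMES
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh derivative
    gW1 = X.T @ dh / N_FRAMES
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"training MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In a real SSI pipeline the predicted acoustic features would then drive a vocoder to produce the audible speech signal; GMM- or HMM-based mappings replace the network with probabilistic models of the joint articulatory-acoustic feature space.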
The fifth Christian Benoît Award was presented during the closing ceremony of the Interspeech 2009 conference in Brighton to Sascha Fagel from the Department for Language and Communication of the Berlin Institute of Technology. His project, called Thea (Talking Heads for Elderly and Alzheimer Patients in Ambient Assisted Living), aims at developing multimedia software for the easy generation of personalized talking faces that could facilitate the interaction of cognitively impaired patients with portable systems in daycare environments. The project builds on his earlier work on the design of talking heads.
The fourth Christian Benoît Award was presented during the closing ceremony of the Interspeech 2007 conference in Antwerp to Susanne Fuchs, a phonetician at ZAS (Centre for General Linguistics) in Berlin, for a project centred on the study of acoustic and articulatory intra- and inter-speaker variability within a given phonological system, the mapping between speaker-specific acoustic properties and articulatory strategies, and the origins and causes of acoustic and articulatory behaviour and its variability. The aim of the project is to explore how vocal tract geometries shape speaker-specific articulatory strategies for a given auditory target. The multimedia tool consists of three successive parts, starting from simple tube models and progressing to more sophisticated biomechanical modelling. The tool is intended to disseminate basic research and can be used in teaching undergraduate and graduate students.
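The "simple tube models" that such a teaching tool starts from can be illustrated with the classic quarter-wavelength resonator: a uniform tube closed at the glottis and open at the lips has resonances F_n = (2n - 1)·c / (4L). The snippet below is a minimal sketch of that textbook formula, not part of the awarded project's tool.

```python
# Resonances of a uniform tube, closed at one end (glottis) and open at the
# other (lips): F_n = (2n - 1) * c / (4 * L), the quarter-wavelength model.
C = 35000.0  # approximate speed of sound in warm, moist air, in cm/s

def tube_formants(length_cm, n=3):
    """Return the first n resonance frequencies (Hz) of a closed-open tube."""
    return [(2 * k - 1) * C / (4 * length_cm) for k in range(1, n + 1)]

# A ~17.5 cm vocal tract (typical adult male) yields the familiar neutral-vowel
# formants near 500, 1500 and 2500 Hz.
print(tube_formants(17.5))
```

Shortening the tube (e.g. for a female or child vocal tract) scales all resonances upward, which is exactly the kind of geometry-to-acoustics relation the project's tool is meant to convey before moving to biomechanical models.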
The third Christian Benoît Award was presented during the closing ceremony of the ICSLP 2004 conference on Jeju Island to Olov Engwall, from the Centre for Speech Technology, KTH, Stockholm. Olov Engwall's primary research interest is multi-modal speech production modelling, focused on intraoral aspects of articulation, which in 2002 resulted in a PhD thesis entitled "Tongue Talking - Studies in intraoral speech synthesis". The basis for this research is statistical analysis of measurements from Magnetic Resonance Imaging, Electromagnetic Articulography and Electropalatography, and as such the work has been strongly oriented towards academic basic research. The goal is to build on this basic research in a multi-modal articulation tutor (ARTUR) able to give automatic feedback to real users, primarily hearing-impaired children. Financial support from the Christian Benoît Award was used to support the initial stages of the project: to create a Wizard of Oz version of the system, to conduct a preliminary study of the multimodal feedback display with test subjects from the target group, and to produce an information video on the articulation tutor based on the Wizard of Oz system.
The second Christian Benoît Award was presented during the closing ceremony of the ICSLP 2002 conference in Denver to Johanna Barry, who worked at the Bionic Ear Institute in Melbourne. In the project she submitted for the Christian Benoît Prize, called "ToneDoctor", Johanna Barry proposed developing a series of interactive Web pages and an interactive CD-ROM presenting her research, with a view to obtaining financial support for the development of her clinical tool and attracting clinical partners from Cantonese-speaking countries with a potential interest in collaborating on its development.
The first prize was awarded in June 2000 to Tony Ezzat from the Massachusetts Institute of Technology, for his work in the field of audiovisual speech synthesis. His talking head project was called «Mary, A Videorealistic Text-to-audiovisual speech synthesizer».