 
Team

COGNITIVE ROBOTICS, INTERACTIVE SYSTEMS, & SPEECH PROCESSING
Team managers: Gérard BAILLY, Thomas HUEBER

 

The CRISSP team conducts theoretical, experimental, and technological research in the field of speech communication. More precisely, we aim at:

    • Modeling verbal and co-verbal speech signals in face-to-face interaction involving humans, virtual avatars (talking heads), and humanoid robots.
    • Understanding the human speech production process by modeling relationships between speech articulation and speech acoustics.
    • Studying communication of people with hearing impairment.
    • Designing speech technologies for people with disabilities, language learning, and multimedia.

The three research axes of the CRISSP team are:

    • Cognitive robotics: improving the socio-communicative skills of humanoid robots.
    • Interactive systems: designing real-time/reactive communicative systems that exploit the different modalities of speech (audio, visual, gesture, etc.).
    • Speech processing: articulatory synthesis, acoustic-articulatory inversion, speech synthesis, voice conversion.

Domains of expertise of the CRISSP team

    • Audio signal processing (analysis, coding, denoising, source separation)
    • Speech processing (analysis, transformation, conversion/morphing, text-to-speech synthesis, articulatory synthesis/inversion)
    • Statistical machine learning
    • Acquisition of multimodal articulatory data (using electromagnetic articulography, ultrasound imaging, MRI, EMG, etc.)
    • Acquisition of social signals (eye gaze, body posture, head movements, etc.) during face-to-face interaction

 

Team members

(updated 18/12/2015)

 

Contact: Gérard Bailly and Thomas Hueber (mail: firstname.lastname@gipsa-lab.fr)




Latest publications of the team

A New Re-synchronization Method based Multi-modal Fusion for Automatic Continuous Cued Speech Recognition

Li Liu, Gang Feng, Denis Beautemps, Xiao-Ping Zhang. A New Re-synchronization Method based Multi-modal Fusion for Automatic Continuous Cued Speech Recognition. 2020. ⟨hal-02433830⟩

A pilot study on Mandarin Cued Speech

Li Liu, Gang Feng. A pilot study on Mandarin Cued Speech. 2020. ⟨hal-01845295v2⟩

Rehabilitation of speech disorders following glossectomy, based on ultrasound visual illustration and feedback

Marion Girod-Roux, Thomas Hueber, Diandra Fabre, Silvain Gerber, Mélanie Canault, et al.. Rehabilitation of speech disorders following glossectomy, based on ultrasound visual illustration and feedback. Clinical Linguistics & Phonetics, Taylor & Francis, 2020, pp.1-18. ⟨10.1080/02699206.2019.1700310⟩. ⟨hal-01977670⟩


See all publications of the team in HAL
GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D'HERES CEDEX - +33 (0)4 76 82 71 31