COGNITIVE ROBOTICS, INTERACTIVE SYSTEMS, & SPEECH PROCESSING
Team managers: Gérard BAILLY, Thomas HUEBER

The CRISSP team conducts theoretical, experimental, and technological research in the field of speech communication. More precisely, we aim at:

    • Modeling verbal and co-verbal speech signals in face-to-face interaction involving humans, virtual avatars (talking heads), and humanoid robots.
    • Understanding the human speech production process by modeling the relationships between speech articulation and speech acoustics.
    • Studying the communication of people with hearing impairment.
    • Designing speech technologies for people with disabilities, language learning, and multimedia.

The three research axes of the CRISSP team are:

    • Cognitive robotics: improving the socio-communicative skills of humanoid robots.
    • Interactive systems: designing real-time, reactive communicative systems that exploit the different modalities of speech (audio, visual, gesture, etc.).
    • Speech processing: articulatory synthesis, acoustic-articulatory inversion, speech synthesis, voice conversion.

Domains of expertise of the CRISSP team

    • Audio signal processing (analysis, coding, denoising, source separation)
    • Speech processing (analysis, transformation, conversion/morphing, text-to-speech synthesis, articulatory synthesis/inversion)
    • Statistical machine learning
    • Acquisition of multimodal articulatory data (using electromagnetic articulography, ultrasound imaging, MRI, EMG, etc.)
    • Acquisition of social signals (eye gaze, body posture, head movements, etc.) during face-to-face interaction

 

Team members

(updated 18/12/2015)

 

Contact: Gérard Bailly and Thomas Hueber (email: firstname.lastname@gipsa-lab.fr)




Latest publications of the team

A variance modeling framework based on variational autoencoders for speech enhancement

Simon Leglaive, Laurent Girin, Radu Horaud. A variance modeling framework based on variational autoencoders for speech enhancement. IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2018), Sep 2018, Aalborg, Denmark. 〈hal-01832826〉

A pilot study on Mandarin Cued Speech

Li Liu, Gang Feng. A pilot study on Mandarin Cued Speech. 2018. 〈hal-01845295〉

Comparing cascaded LSTM architectures for generating head motion from speech in task-oriented dialogs

Duc-Canh Nguyen, Gérard Bailly, Frédéric Elisei. Comparing cascaded LSTM architectures for generating head motion from speech in task-oriented dialogs. HCI International, Jul 2018, Las Vegas, United States. Springer, pp. 164-175, 〈http://2018.hci.international/〉. 〈hal-01848063〉


See all of the team's publications in HAL