Team

COGNITIVE ROBOTICS, INTERACTIVE SYSTEMS, & SPEECH PROCESSING
Team managers: Gérard BAILLY, Thomas HUEBER

 

The CRISSP team conducts theoretical, experimental, and technological research in the field of speech communication. More precisely, we aim at:

    • Modeling verbal and co-verbal speech signals in face-to-face interaction involving humans, virtual avatars (talking heads), and humanoid robots.
    • Understanding the human speech production process by modeling the relationships between speech articulation and speech acoustics.
    • Studying the communication of people with hearing impairment.
    • Designing speech technologies for people with disabilities, language learning, and multimedia.

The three research axes of the CRISSP team are:

    • Cognitive robotics: improving the socio-communicative skills of humanoid robots.
    • Interactive systems: designing real-time, reactive communicative systems that exploit the different modalities of speech (audio, visual, gesture, etc.).
    • Speech processing: articulatory synthesis, acoustic-articulatory inversion, speech synthesis, voice conversion.

Domains of expertise of the CRISSP team

    • Audio signal processing (analysis, coding, denoising, source separation)
    • Speech processing (analysis, transformation, conversion/morphing, text-to-speech synthesis, articulatory synthesis/inversion)
    • Statistical machine learning
    • Acquisition of multimodal articulatory data (using electromagnetic articulography, ultrasound imaging, MRI, EMG, etc.)
    • Acquisition of social signals (eye gaze, body posture, head movements, etc.) during face-to-face interaction

 

Team members

(updated 18/12/2015)

 

Contact: Gérard Bailly and Thomas Hueber (email: firstname.lastname@gipsa-lab.fr)

Latest publications of the team

Online Localization of Multiple Moving Speakers in Reverberant Environments

Xiaofei Li, Bastien Mourgue, Laurent Girin, Sharon Gannot, Radu Horaud. Online Localization of Multiple Moving Speakers in Reverberant Environments. SAM 2018 - The Tenth IEEE Workshop on Sensor Array and Multichannel Signal Processing, Jul 2018, Sheffield, United Kingdom. pp.1-5. 〈hal-01795462〉

Immersive Teleoperation of the Eye Gaze of Social Robots: Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

Remi Cambuzat, Frédéric Elisei, Gérard Bailly, Olivier Simonin, Anne Spalanzani. Immersive Teleoperation of the Eye Gaze of Social Robots: Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes. ISR 2018 - 50th International Symposium on Robotics, Jun 2018, Munich, Germany. pp.232-239, 2018, 〈https://www.vde.com/en/events/isr〉. 〈hal-01779633〉

Use of deep features for the automatic classification of fish sounds

Marielle Malfante, Omar Mohammed, Cedric Gervaise, Mauro Dalla Mura, Jerome Mars. Use of deep features for the automatic classification of fish sounds. OCEANS’18 MTS/IEEE, May 2018, Kobe, Japan. 〈hal-01802551〉


See all publications of the team in HAL
GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D'HERES CEDEX - +33 (0)4 76 82 71 31