Team

COGNITIVE ROBOTICS, INTERACTIVE SYSTEMS, & SPEECH PROCESSING
Team managers: Gérard BAILLY, Thomas HUEBER

 

The CRISSP team conducts theoretical, experimental, and technological research in the field of speech communication. More precisely, we aim at:

    • Modeling verbal and co-verbal speech signals in face-to-face interaction involving humans, virtual avatars (talking heads), and humanoid robots.
    • Understanding the human speech production process by modeling relationships between speech articulation and speech acoustics.
    • Studying communication by people with hearing impairments.
    • Designing speech technologies for people with disabilities, language learning, and multimedia.

The three research axes of the CRISSP team are:

    • Cognitive robotics: improving the socio-communicative skills of humanoid robots.
    • Interactive systems: designing real-time, reactive communicative systems that exploit the different modalities of speech (audio, visual, gesture, etc.).
    • Speech processing: articulatory synthesis, acoustic-articulatory inversion, speech synthesis, voice conversion.

Domains of expertise of the CRISSP team

    • Audio signal processing (analysis, coding, denoising, source separation)
    • Speech processing (analysis, transformation, conversion/morphing, text-to-speech synthesis, articulatory synthesis/inversion)
    • Statistical machine learning
    • Acquisition of multimodal articulatory data (using electromagnetic articulography, ultrasound imaging, MRI, EMG, etc.)
    • Acquisition of social signals (eye gaze, body posture, head movements, etc.) during face-to-face interaction

 

Team members

(updated 18/12/2015)

 

Contact: Gérard Bailly and Thomas Hueber (email: firstname.lastname@gipsa-lab.fr)



News
Publication: Biosignal-Based Spoken Communication

Special issue edited by Tanja Schultz, Thomas Hueber, Dean J. Krusienski, and Jonathan S. Brumberg
Publisher: IEEE/ACM Transactions on Audio, Speech, and Language Processing, volume 25, no. 12, December 2017




Latest publications of the team

Audio-visual synchronization in reading while listening to texts: Effects on visual behavior and verbal learning

Emilie Gerbier, Gérard Bailly, Marie-Line Bosse. Audio-visual synchronization in reading while listening to texts: Effects on visual behavior and verbal learning. Computer Speech and Language, Elsevier, 2018, 47 (January), pp. 79-92. doi:10.1016/j.csl.2017.07.003. hal-01575227.

Learning Off-line vs. On-line Models of Interactive Multimodal Behaviors with Recurrent Neural Networks

Duc Canh Nguyen, Gérard Bailly, Frédéric Elisei. Learning Off-line vs. On-line Models of Interactive Multimodal Behaviors with Recurrent Neural Networks. Pattern Recognition Letters, Elsevier, 2017, 100, pp. 29-36. doi:10.1016/j.patrec.2017.09.033. hal-01609535.

Biosignal-Based Spoken Communication: A Survey

Tanja Schultz, Thomas Hueber, Michael Wand, Dean J. Krusienski, Christian Herff, et al. Biosignal-Based Spoken Communication: A Survey. IEEE/ACM Transactions on Audio, Speech, and Language Processing, IEEE, 2017, 25 (12), pp. 2257-2271. doi:10.1109/TASLP.2017.2752365. hal-01652757.


All publications of the team
GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D'HERES CEDEX - +33 (0)4 76 82 71 31