Project publications


  1. Arora, A., H. Fiorino, D. Pellier, M. Métivier & S. Pesty (to appear) A Review of Learning Planning Action Models. The Knowledge Engineering Review, volume 33.
  2. Bailly, G., A. Mihoub, C. Wolf & F. Elisei (2018) Gaze and face-to-face interaction: from multimodal data to behavioral models. Advances in Interaction Studies. Eye-tracking in interaction. Studies on the role of eye gaze in dialogue. G. Brône and B. Oben. Amsterdam, John Benjamins: pp. 139-168.
  3. Bailly, G. & F. Elisei (2018) Demonstrating and learning multimodal socio-communicative behaviors for HRI: building interactive models from immersive teleoperation data, AI-MHRI: AI for Multimodal Human Robot Interaction Workshop at the Federated AI Meeting (FAIM), Stockholm - Sweden: pp. 39-43.
  4. Nguyen, D.-C., G. Bailly & F. Elisei (2018) Comparing cascaded LSTM architectures for generating gaze-aware head motion from speech in HAI task-oriented dialogs, HCI International, Las Vegas, USA: pp. 164-175.
  5. Cambuzat, R., Elisei, F., Bailly, G., Simonin, O., & Spalanzani, A. (2018) Immersive teleoperation of the eye gaze of social robots, International Symposium on Robotics (ISR), Munich, Germany: pp. 232-239.
  6. Nguyen, V. Q., L. Girin, G. Bailly, F. Elisei & D.-C. Nguyen (2018) Autonomous sensorimotor learning for sound source localization by a humanoid robot, Workshop on Crossmodal Learning for Intelligent Robotics in conjunction with IEEE/RSJ IROS 2018, Madrid, Spain.
  7. Polceanu, M., F. Harrouet & C. Buche (2018) Fast Multi-Scale fHOG Feature Extraction Using Histogram Downsampling. RoboCup'2018 Symposium, Montréal, Canada.
  8. Arora, A., H. Fiorino, D. Pellier & S. Pesty (2018) Learning Robot Speech Models to Predict Speech Acts in HRI. Paladyn, Journal of Behavioral Robotics, 9(1): pp. 285-306.
2017

  9. Arora, A., H. Fiorino, D. Pellier & S. Pesty (2017) Action Model Acquisition using Sequential Pattern Mining. In the proceedings of the German Conference on Artificial Intelligence, pp. 286-292.
  10. Nguyen, D.-C., G. Bailly & F. Elisei (2017) "Learning Off-line vs. On-line Models of Interactive Multimodal Behaviors with Recurrent Neural Networks", Pattern Recognition Letters, 100C: 29-36.
  11. Nguyen, D.-C., G. Bailly & F. Elisei (2017) An evaluation framework to assess and correct the multimodal behavior of a humanoid robot in human-robot interaction, Gesture in Interaction (GESPIN), Poznan, Poland: pp. 56-62.
  12. Cambuzat, R., G. Bailly & F. Elisei (2017) Gaze contingent control of vergence, yaw and pitch of robotic eyes for immersive telepresence, European Conf. on Eye Movements (ECEM), Wuppertal, Germany.

2016

  13. [BEST PAPER AWARD] Nguyen, D.-C., G. Bailly & F. Elisei (2016) "Conducting neuropsychological tests with a humanoid robot: design and evaluation", IEEE Int. Conf. on Cognitive Infocommunications (CogInfoCom), Wroclaw, Poland: pp. 337-342.
  14. Bailly, G., F. Elisei, A. Juphard & O. Moreau (2016) "Quantitative analysis of backchannels uttered by an interviewer during neuropsychological tests", Interspeech, San Francisco, CA: pp. 2905-2909.
  15. Mihoub, A., G. Bailly, C. Wolf & F. Elisei (2016) "Graphical models for social behavior modeling in face-to-face interaction", Pattern Recognition Letters (PRL), 74: 82-89. DOI:10.1016/j.patrec.2016.02.005.
  16. Bailly, G. (2016) "Critical review of the book 'Gaze in Human-Robot Communication'", Journal on Multimodal User Interfaces, 1-2. DOI:10.1007/s12193-016-0219-6.
  17. Nguyen, D.-C., F. Elisei & G. Bailly (2016) "Demonstrating to a humanoid robot how to conduct neuropsychological tests", Journées Nationales de Robotique Humanoïde (JNRH), Toulouse, France: pp. 10-12.
  18. Adam, C., W. Johal, D. Pellier, H. Fiorino & S. Pesty (2016) "Social Human-Robot Interaction: A New Cognitive and Affective Interaction-Oriented Architecture", International Conference on Social Robotics (ICSR), Kansas City, MO: pp. 253-263.
  19. Arora, A., H. Fiorino, D. Pellier & S. Pesty (2016) "A Review on Learning Planning Action Models for Socio-Communicative HRI", Workshop on Affect, Compagnon Artificiel and Interaction (WACAI), Brest, France.

2015

  20. Parmiggiani, A., M. Randazzo, M. Maggiali, G. Metta, F. Elisei & G. Bailly (2015) "An articulated talking face for the iCub", International Journal of Humanoid Robotics (IJHR), 1550026: 1-20. DOI:10.1142/S0219843615500267.
  21. Guillermo, G., C. Plasson, F. Elisei, F. Noël & G. Bailly (2015) "Qualitative assessment of a beaming environment for collaborative professional activities", European conference for Virtual Reality and Augmented Reality (EuroVR), Milano, Italy.
  22. Foerster, F., G. Bailly & F. Elisei (2015) "Impact of iris size and eyelids coupling on the estimation of the gaze direction of a robotic talking head by human viewers", Humanoids, Seoul, Korea: pp. 148-153.
  23. Bailly, G., F. Elisei & M. Sauze (2015) "Beaming the gaze of a humanoid robot", Human-Robot Interaction (HRI), Portland, OR: pp. 47-48.
  24. [BEST PAPER AWARD] Johal, W., D. Pellier, C. Adam, H. Fiorino & S. Pesty (2015) "A Cognitive and Affective Architecture for Social Human-Robot Interaction", Human-Robot Interaction (HRI), Portland, OR: pp. 71-72.
  25. Bailly, G., A. Mihoub, C. Wolf & F. Elisei (2015) "Learning joint multimodal behaviors for face-to-face interaction: performance & properties of statistical models", Human-Robot Interaction (HRI), Workshop on behavior coordination between animals, humans and robots, Portland, OR.
  26. Mihoub, A., G. Bailly, C. Wolf & F. Elisei (2015) "Learning multimodal behavioral models for face-to-face social interaction", Journal on Multimodal User Interfaces (JMUI): 1-16. DOI:10.1007/s12193-015-0190-7.