Dr Wenxuan Mou (牟雯萱)

Senior Research Scientist

Huawei Noah’s Ark Lab, London Research Center, UK

Welcome

I am currently a senior research scientist in the mobile vision perception group of Huawei Noah’s Ark Lab.
Before joining Huawei, I was a postdoctoral researcher on the THRIVE++ project, funded by the Air Force Office of Scientific Research (AFOSR-EOARD), which investigates the embodied and socio-cognitive mechanisms underlying the development of trust between humans and robots engaged in interactions and joint tasks. I also worked on the Trust node of the UKRI Trustworthy Autonomous Systems programme, which investigates how to build, maintain and manage trust in robotic and autonomous systems. My research was partially supported by the H2020 EU project MoveCare, which focused on designing machine learning methods for human-robot interaction (HRI) and elderly care.
Prior to this, I obtained my PhD from Queen Mary University of London, where my work focused on emotion recognition in group videos. During my PhD, I also worked as a research intern at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, USA, and as a research assistant at the Computer Laboratory, University of Cambridge.

Research Interests

  • Computer Vision
  • Machine Learning
  • Affective Computing
  • Human-Robot Interaction

Media Coverage

  • Robots for Resilient Infrastructure Challenge 2017 (YouTube)

Publications [Google Scholar Profile]

    Journal Papers:

  • Wenxuan Mou, Hatice Gunes and Ioannis Patras, “Alone vs in-a-group: A Multi-modal Framework for Automatic Affect Recognition”, ACM Transactions on Multimedia Computing, Communications, and Applications, 2019.

  • Wenxuan Mou, Christos Tzelepis, Hatice Gunes, Vasileios Mezaris and Ioannis Patras, “Deep Generic to Specific Recognition Model for Group Membership Analysis using Non-verbal Cues”, Image and Vision Computing, 2018.

  • Shan Luo, Wenxuan Mou, Kaspar Althoefer and Hongbin Liu, “iCLAP: Object Recognition by Combining Proprioception and Tactile Sensing”, Autonomous Robots, 2018.

  • Shan Luo, Wenxuan Mou, Kaspar Althoefer and Hongbin Liu, “Novel Tactile-SIFT Descriptor for Object Shape Recognition”, IEEE Sensors Journal, 2015.

    Patents:

  • Wenxuan Mou, Tim Marks, Abhinav Kumar, Chen Feng, Xiaoming Liu, “Image Processing System and Method for Landmark Location Estimation with Uncertainty”, US Patent, 2021.

    International Conference & Workshop Proceedings:

  • Martina Ruocco, Wenxuan Mou, Debora Zanatto and Angelo Cangelosi, “Theory of Mind Improves Human’s Trust in an Iterative Human-Robot Game”, ACM International Conference on Human-Agent Interaction (HAI), 2021.

  • Wenxuan Mou, Martina Ruocco, Debora Zanatto and Angelo Cangelosi, “When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interaction”, IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020.

  • Wenxuan Mou*, Abhinav Kumar*, Tim Marks*, Ye Wang, Michael Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng, “LUVLi Face Alignment: Estimating Landmarks’ Location, Uncertainty, and Visibility Likelihood”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [* equal first authors]

  • Wenxuan Mou*, Abhinav Kumar*, Tim Marks*, Chen Feng, Xiaoming Liu, “UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss”, International Conference on Computer Vision Workshops (ICCVW), 2019. [Best Oral Paper Award, * equal first authors]

  • Wenxuan Mou, Hatice Gunes and Ioannis Patras, “Your Fellows Matter: Affect Analysis across Subjects in Group Videos”, IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2019.

  • Wenxuan Mou, Christos Tzelepis, Hatice Gunes, Vasileios Mezaris and Ioannis Patras, “Generic to Specific Recognition Models for Membership Analysis in Group Videos”, IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2017. [Oral]

  • Wenxuan Mou, Hatice Gunes and Ioannis Patras, “Alone versus In-a-group: A Comparative Analysis of Facial Affect Recognition”, ACM Multimedia Conference, Amsterdam, 2016.

  • Shan Luo, Wenxuan Mou, Kaspar Althoefer and Hongbin Liu, “Iterative Closest Labeled Point for Tactile Object Shape Recognition”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.

  • Wenxuan Mou, Hatice Gunes and Ioannis Patras, “Automatic Recognition of Emotions and Membership in Group Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Context-Based Affect Recognition, 2016.

  • Heng Yang, Wenxuan Mou, Yichi Zhang, Ioannis Patras, Hatice Gunes and Peter Robinson, “Face Alignment Assisted by Head Pose Estimation”, British Machine Vision Conference (BMVC), 2015.

  • Wenxuan Mou, Oya Celiktutan and Hatice Gunes, “Group-level Arousal and Valence Recognition in Static Images: Face, Body and Context”, International Workshop on Emotion Representation, Analysis and Synthesis, IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2015.

  • Shan Luo, Wenxuan Mou, Kaspar Althoefer and Hongbin Liu, “Localizing the object contact through matching tactile features with visual map”, IEEE International Conference on Robotics and Automation (ICRA), 2015.

  • Shan Luo, Wenxuan Mou, Min Li, Kaspar Althoefer and Hongbin Liu, “Rotation and translation invariant object recognition with a tactile sensor”, IEEE Sensors Conference, 2015.