Xiao Liu (Leo)

Arizona State University. Interactive Robotics Lab.


I am a junior Computer Science Ph.D. student working with Prof. Heni Ben Amor at the Interactive Robotics Lab in the School of Computing and Augmented Intelligence at Arizona State University. My research focuses on differentiable Bayesian filters, representation learning, generalization and optimization of depth perception, and their applications in human-robot interaction (HRI). I also work as a Research Engineer for RadiusAI, a Phoenix- and Seattle-based start-up. Research opportunities in robotics, automation, and computer vision are all welcome.


Work Experiences

Research Engineer, RadiusAI

RadiusAI | 2020 - Present

Worked on multiple research projects spanning computer vision, differentiable filters, and optimization.

  • Developed monocular depth prediction models with advanced architectures, including Vision Transformers (ViT) and multi-scale local planar guidance (LPG) blocks
  • Developed a multi-objective optimization technique based on the Frank-Wolfe algorithm for training across multiple datasets
  • The proposed depth model achieves 0.117 abs REL, 0.416 RMS, and 0.868 δ1 on the NYU Depth test set
  • Implemented differentiable Bayesian trackers for store intelligence
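For the two-task case, the Frank-Wolfe step used in min-norm multi-objective optimization (in the style of Sener & Koltun's MGDA) has a closed form. Below is a minimal NumPy sketch of that two-task combination; the function name and variables are illustrative, not from the actual training code.

```python
import numpy as np

def min_norm_2task(g1, g2):
    # Find alpha in [0, 1] minimizing ||alpha*g1 + (1-alpha)*g2||^2,
    # the closed-form two-task solution of the Frank-Wolfe min-norm step.
    diff = g1 - g2
    denom = diff @ diff
    if denom < 1e-12:
        alpha = 0.5                    # gradients (nearly) identical
    else:
        alpha = float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
    return alpha, alpha * g1 + (1 - alpha) * g2

# Example: orthogonal unit gradients are weighted equally.
alpha, g = min_norm_2task(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# alpha == 0.5, g == [0.5, 0.5]
```

The combined gradient is then used for the shared-parameter update, so no task dominates training when their gradients conflict.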

Research Associate, Interactive Robotics Lab

ASU IRL | 2020 - Present

Research topics: differentiable filters for robot imitation learning and human-robot interaction; monocular depth estimation networks for human-robot and human-environment interaction.

2022
  • Developed a differentiable Ensemble Kalman Filter framework incorporating algorithmic priors for robot imitation learning, i.e., learning system dynamics from experience, handling missing observations, and learning representations in high-dimensional spaces
  • The proposed framework shows 56.4% and 79.7% improvements on RMSE and MAE in high-dimensional tasks, and 56.3% and 83.3% in low-dimensional tasks, compared to the state of the art
  • Implemented differentiable filters for HRI tasks, i.e., human-robot hugging, symbiotic walking
2021
  • Developed compact monocular depth prediction models and fine-tuned them with temporal consistency regularization
  • Created a human gait (lower body) dataset associating depth data with biomechanical data
  • Integrated the depth prediction network with Interaction Primitives for motor predictive modeling on an embedded system (NVIDIA Jetson NX) for real-time applications
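The core of an Ensemble Kalman Filter is a measurement update built entirely from differentiable matrix operations, which is what makes it amenable to end-to-end learning. Below is a minimal NumPy sketch of the stochastic EnKF update under a linear observation model; all names and the linear observation assumption are illustrative, not taken from the published framework.

```python
import numpy as np

def enkf_update(X, H, y, R, rng):
    # X: (N, d) ensemble of state samples; H: (m, d) observation matrix;
    # y: (m,) observation; R: (m, m) observation noise covariance.
    N = X.shape[0]
    A = X - X.mean(axis=0)            # ensemble anomalies
    HX = X @ H.T                      # predicted observations, (N, m)
    HA = HX - HX.mean(axis=0)
    S = (HA.T @ HA) / (N - 1) + R     # innovation covariance
    PHt = (A.T @ HA) / (N - 1)        # state-observation cross covariance
    K = PHt @ np.linalg.inv(S)        # Kalman gain
    # Perturb the observation per member (stochastic EnKF)
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return X + (y_pert - HX) @ K.T
```

In a differentiable filter, the hand-specified dynamics and observation models are replaced by learned networks, and gradients flow through this update during training.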

Research Assistant, CWRU

DIRL (now ART) | 2017 - 2019

Led the "Woody and Philos" social robot project, collaborated as a research assistant on the "e-Cube" project for human cognitive skill assessment, and developed computer vision algorithms for a broad range of applications.

  • Real-time human facial emotion recognition for human-robot interaction using deep learning and machine learning techniques (featured on Case Western Daily)
  • Human-centered biomedical device "e-Cube" for cognitive skills assessment
  • Developed social robots "Philos" and "Woody", from kinematics to high-level control (featured on ideastream)

Publications, Presentations, …

Papers

Liu, X., Clark, G., & Ben Amor, H. "Differentiable Ensemble Kalman Filters for Robot State Estimation." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (submitted).

Liu, X., Cheng, X., & Lee, K. "GA and SVM based Facial Emotion Recognition using Geometric Features." IEEE Sensors Journal 21, no. 10 (2020): 11532-11542.

Hayosh, D., Liu, X., & Lee, K. "Woody: Low-Cost Open-source Humanoid Torso Robot." IEEE 17th International Conference on Ubiquitous Robots (UR), pp. 247-252, Kyoto, Japan.

Liu, X., & Lee, K. "Optimized Facial Emotion Recognition Technique for Assessing User Experience." 2018 IEEE Games, Entertainment, Media Conference (GEM), pp. 1-9, Galway, Ireland.

Workshop Presentations

Liu, X., Cheng, X., & Lee, K. "e-Cube: Vision-based Interactive Block Games for Assessing Cognitive Skills: Design and Preliminary Evaluation." CWRU ShowCASE.

Liu, X., Hayosh, D., & Lee, K. "Woody: A New Prototype of Social Robotic Platform Integrated with Facial Emotion Recognition for Real-time Application." CWRU ShowCASE.