Xiao Liu (Leo)
- Tempe, Arizona
- [email protected]
- My Scholar Profile
- My LinkedIn Profile
I am a junior Computer Science Ph.D. student working with Prof. Heni Ben Amor at the Interactive Robotics Lab in the School of Computing and Augmented Intelligence at Arizona State University. My research focuses on Differentiable Bayesian Filters, Representation Learning, and their applications in Robot Learning and Human-Robot Interaction (HRI). I also work as a Research Engineer for RadiusAI, a start-up company based in Phoenix and Seattle. Research opportunities in robotics, automation, and computer vision are all welcome.
Research Engineer, RadiusAI
Worked on multiple projects spanning computer vision, differentiable filters, and optimization.
- Developed monocular depth prediction models with advanced architectures, including Vision Transformer (ViT) and multi-scale local planar guidance (LPG) blocks
- Developed a multi-objective optimization technique based on the Frank-Wolfe algorithm for training across multiple datasets
- Proposed depth model achieves 0.117 Abs Rel and 0.416 RMS error, and 0.868 on the d1 accuracy metric, on the NYU depth test set
- Working with the tracking team, developed and refined multi-object tracking (MOT) algorithms for store intelligence
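For context on the multi-dataset training item above: a Frank-Wolfe-style min-norm solver (as in multiple-gradient descent approaches) combines per-dataset gradients with the convex weights that minimize the norm of the joint gradient. The sketch below shows the two-dataset case, where the solver has a closed form; all names are illustrative, not actual RadiusAI code.

```python
import numpy as np

def min_norm_weight(g1, g2):
    """Closed-form Frank-Wolfe solution for two objectives:
    the gamma in [0, 1] minimizing ||gamma*g1 + (1-gamma)*g2||."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:           # identical gradients: any weight works
        return 0.5
    gamma = ((g2 - g1) @ g2) / denom
    return float(np.clip(gamma, 0.0, 1.0))

# Toy per-dataset loss gradients
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
gamma = min_norm_weight(g1, g2)
g = gamma * g1 + (1.0 - gamma) * g2   # common descent direction
```

The resulting `g` is a descent direction for both losses whenever one exists, which is what makes the technique suitable for joint training across datasets.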
Research Associate, Interactive Robotics Lab
Research topics: Differentiable Filters in Robot Learning and Human-Robot Interaction; Monocular Depth Estimation Networks for Human-Robot and Human-Environment Interaction.
- Developed a differentiable Ensemble Kalman Filter framework incorporating algorithmic priors for robot imitation learning, i.e., learning system dynamics from experience, handling missing observations, and learning representations in high-dimensional spaces
- Proposed framework shows 56.4% and 79.7% improvements on the RMSE and MAE error metrics in high-dimensional tasks, and 56.3% and 83.3% in low-dimensional tasks, compared to the state of the art
- Implemented differentiable filters for HRI tasks, e.g., human-robot hugging and symbiotic walking
- Developed compact monocular depth prediction models, fine-tuned with a temporal-consistency regularizer
- Created a human gait (lower body) dataset associating depth data with biomechanical data
- Integrated the depth prediction network with interaction primitives for motor predictive modeling on an embedded system (Nvidia Jetson NX) for real-time applications
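The building block behind the differentiable EnKF work above is the Ensemble Kalman Filter correction step, which propagates a set of state samples rather than an explicit covariance. The sketch below is a standard perturbed-observation EnKF update with a known linear observation model; the learned, differentiable components of the actual framework (learned dynamics, sensor encoders) are omitted, and all names are illustrative.

```python
import numpy as np

def enkf_update(ensemble, H, R, y, rng):
    """One Ensemble Kalman Filter correction step with
    perturbed observations (linear observation model y = Hx)."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)        # state anomalies
    Y = X @ H.T                                 # predicted-observation anomalies
    S = Y.T @ Y / (N - 1) + R                   # innovation covariance
    K = (X.T @ Y / (N - 1)) @ np.linalg.inv(S)  # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(100, 1))     # ensemble of scalar states
H = np.array([[1.0]])                           # observe the state directly
R = np.array([[0.01]])                          # observation noise covariance
posterior = enkf_update(prior, H, R, np.array([1.0]), rng)
```

Because every operation here is a differentiable tensor computation, the same update can be backpropagated through when the dynamics and observation models are neural networks, which is the premise of differentiable filtering.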
Research Assistant, CWRU
Team leader of the social robot project “Woody and Philos”; collaborating research assistant on the “e-Cube” project for human cognitive skill assessment; developed advanced computer vision algorithms for a broad range of applications.
- Real-time Human Facial Emotion Expression Recognition for Human-Robot Interaction using deep learning and machine learning techniques (featured on Case Western Daily)
- Human-centered biomedical device “e-Cube” for cognitive skills assessment
- Developed social robots “Philos” & “Woody”, from kinematics to high-level control (featured on ideastream)
Publications, Presentations, …
Liu, X., Clark, G., Campbell, J., Zhou, Y. & Ben Amor, H. “Enhancing State Estimation in Robots: A Data-Driven Approach with Differentiable Ensemble Kalman Filters,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (submitted).
Liu, X., Ikemoto, S., Yoshimitsu, Y. & Ben Amor, H. “Learning Soft Robot Dynamics using Differentiable Kalman Filters and Spatio-Temporal Embeddings,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (submitted).
Liu, X., Cheng, X. & Lee, K. “GA and SVM based Facial Emotion Recognition using Geometric Features,” IEEE Sensors Journal 21, no. 10 (2020): 11532-11542.
Hayosh, D., Liu, X. & Lee, K. “Woody: Low-Cost Open-source Humanoid Torso Robot,” IEEE 17th International Conference on Ubiquitous Robots (UR), pp. 247-252, Kyoto, Japan.
Liu, X. & Lee, K. “Optimized Facial Emotion Recognition Technique for Assessing User Experience,” 2018 IEEE Games, Entertainment, Media Conference (GEM), pp. 1-9, Galway, Ireland.
Liu, X., Cheng, X. & Lee, K. “e-Cube: Vision-based Interactive Block Games for Assessing Cognitive Skills: Design and Preliminary Evaluation,” CWRU ShowCASE.
Liu, X., Hayosh, D. & Lee, K. “Woody: A New Prototype of Social Robotic Platform Integrated with Facial Emotion Recognition for Real-time Application,” CWRU ShowCASE.