I am a senior Computer Science Ph.D. student working with Prof. Heni Ben Amor at the Interactive Robotics Lab in the School of Computing and Augmented Intelligence at Arizona State University. My research focuses on Differentiable Bayesian Filters, Representation Learning, and their applications in Robot Learning and Human-robot Interaction (HRI). My work has been published at ICRA, IROS, and CoRL. I also work as a Research Data Scientist at RadiusAI, a Phoenix- and Seattle-based start-up. Feel free to check my CV for more details.
Research Engineer, RadiusAI
Worked on multiple projects spanning computer vision, differentiable filters, and optimization.
- Developed and refined multi-object tracking (MOT) algorithms using Bayes filters for video analytics on indoor and outdoor cameras
- Developed monocular depth prediction models with advanced architectures, including Vision Transformer (ViT) backbones and multi-scale local planar guidance (LPG) blocks
- Developed a multi-objective optimization technique based on the Frank-Wolfe algorithm for training across multiple datasets
- The proposed depth model achieves 0.117 Abs Rel and 0.416 RMS error, and 0.868 on the δ1 accuracy metric, on the NYU Depth test set
Research Associate, Interactive Robotics Lab
Research topics: differentiable filters for robot learning and human-robot interaction; monocular depth estimation networks for human-robot and human-environment interaction.
- Created a multimodal learning framework (α-MDF) that combines attention mechanisms with differentiable filtering to perform state estimation in a latent space from multiple modalities. Experimented on real-world tasks and validated the system on both rigid-body robots and soft robots
- Developed a differentiable Ensemble Kalman Filter framework incorporating algorithmic priors for robot imitation learning, i.e., learning system dynamics from experience, handling missing observations, and learning representations in high-dimensional spaces
- The proposed framework shows 56.4% and 79.7% improvements on RMSE and MAE error metrics in high-dimensional tasks, and 56.3% and 83.3% in low-dimensional tasks, compared to the state of the art
- Implemented differentiable filters for HRI tasks, e.g., human-robot hugging and symbiotic walking
- Developed compact monocular depth prediction models and fine-tuned them with temporal consistency regularization
- Created a human gait (lower body) dataset associating depth data with biomechanical data
- Integrated the depth prediction network with interaction primitives for predictive motor modeling on an embedded system (NVIDIA Jetson NX) for real-time applications
Research Assistant, CWRU
Team leader of the social robot project “Woody and Philos”; collaborating research assistant on the “e-Cube” project for human cognitive skill assessment; developed computer vision algorithms for a broad range of applications.
- Real-time facial emotion recognition for human-robot interaction using deep learning and machine learning techniques (featured on Case Western Daily)
- Developed “e-Cube”, a human-centered biomedical device for cognitive skill assessment
- Developed the social robots “Philos” and “Woody”, from kinematics to high-level control (featured on ideastream)
Publications, Presentations, …
Weigend, F., Liu, X., Sonawani, S. & Ben Amor, H. “iRoCo: Intuitive Robot Control from Anywhere using a Smartwatch.” IEEE International Conference on Robotics and Automation (ICRA), in review.
Liu, X., Zhou, Y., Ikemoto, S. & Ben Amor, H. “α-MDF: An Attention-based Multimodal Differentiable Filter for Robot State Estimation.” 7th Conference on Robot Learning (CoRL 2023).
Weigend, F., Liu, X. & Ben Amor, H. “Probabilistic Differentiable Filters Enable Ubiquitous Robot Control with Smartwatches.” IROS 2023 Workshop DiffPropRob.
Liu, X., Clark, G., Campbell, J., Zhou, Y. & Ben Amor, H. “Enhancing State Estimation in Robots: A Data-Driven Approach with Differentiable Ensemble Kalman Filters.” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Liu, X., Ikemoto, S., Yoshimitsu, Y. & Ben Amor, H. “Learning Soft Robot Dynamics using Differentiable Kalman Filters and Spatio-Temporal Embeddings.” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Liu, X., Cheng, X. & Lee, K. “GA and SVM based Facial Emotion Recognition using Geometric Features.” IEEE Sensors Journal 21, no. 10 (2020): 11532-11542.
Hayosh, D., Liu, X. & Lee, K. “Woody: Low-Cost, Open-Source Humanoid Torso Robot.” IEEE 17th International Conference on Ubiquitous Robots (UR), pp. 247-252, Kyoto, Japan.
Liu, X. & Lee, K. “Optimized Facial Emotion Recognition Technique for Assessing User Experience.” 2018 IEEE Games, Entertainment, Media Conference (GEM), pp. 1-9, Galway, Ireland.
Liu, X., Cheng, X. & Lee, K. “e-Cube: Vision-based Interactive Block Games for Assessing Cognitive Skills: Design and Preliminary Evaluation.” CWRU ShowCASE.
Liu, X., Hayosh, D. & Lee, K. “Woody: A New Prototype of Social Robotic Platform Integrated with Facial Emotion Recognition for Real-time Application.” CWRU ShowCASE.