Xiao Liu (Leo)

Arizona State University. Interactive Robotics Lab.


I am a senior Computer Science Ph.D. student working with Prof. Heni Ben Amor at the Interactive Robotics Lab in the School of Computing and Augmented Intelligence at Arizona State University. My research focuses on Robot Learning, Representation Learning, and their applications in Embodied AI and Human-Robot Interaction (HRI). My work has been published at CoRL, ICRA, and IROS. I believe leveraging foundation models is essential for scaling up, but using them effectively requires a deep understanding of the underlying system. I am currently working on diffusion-based policies from a Bayesian perspective. Feel free to check my curriculum vitae for more details.


Work Experiences

Research Associate, Interactive Robotics Lab

ASU IRL | 2020 - Present

Research topics: Robot Learning via Deep State-Space Modeling

  • Embodied AI: Proposed Diff-Control, an action diffusion policy that adapts ControlNet from the domain of image generation to robot action generation.
  • Created a multimodal learning framework (α-MDF) using attention mechanisms and differentiable filtering, which performs state estimation in latent space across multiple modalities. Validated the system on real-world tasks with both rigid-body robots and soft robots.
  • Developed a differentiable Ensemble Kalman Filter framework that incorporates algorithmic priors for robot imitation learning, i.e., learning system dynamics from experience, handling missing observations, and learning representations in high-dimensional spaces.
  • Deployed the differentiable filtering framework with smartwatches for ubiquitous robot control tasks, e.g., remote teleoperation and drone piloting.

Research Engineer (part-time), RadiusAI

RadiusAI | 2020 - 2024

Worked on multiple research projects spanning computer vision, differentiable filters, and optimization.

  • Developed and refined multi-object tracking (MOT) algorithms using Bayes filters for video analytics on indoor and outdoor cameras
  • Developed monocular depth prediction models with advanced architectures, including Vision Transformers (ViT) and multi-scale local planar guidance (LPG) blocks
  • Developed multi-objective optimization techniques based on the Frank-Wolfe algorithm for training across multiple datasets
  • The proposed depth model achieves 0.117 abs REL error, 0.416 RMS error, and 0.868 on the d1 metric on the NYU Depth test set

Research Assistant, CWRU

DIRL (now ART) | 2017 - 2019

Led the social robot project “Woody and Philos,” collaborated as a research assistant on the “e-Cube” project for human cognitive skill assessment, and developed computer vision algorithms for a broad range of applications.

  • Real-time human facial emotion recognition for human-robot interaction using deep learning and machine learning techniques; featured on Case Western Daily
  • Human-centered biomedical device “e-Cube” for cognitive skills assessment
  • Developed the social robots “Philos” & “Woody,” from kinematics to high-level control; featured on ideastream

Publications, Presentations, …

Papers
Woody: Low-Cost Open-source Humanoid Torso Robot
Daniel Hayosh, Xiao Liu, Kiju Lee
IEEE 17th International Conference on Ubiquitous Robots (UR), pp. 247–252, 2020
Project Page / Paper
Presentations

Xiao Liu, Xiangyi Cheng, Kiju Lee. “e-Cube: Vision-based Interactive Block Games for Assessing Cognitive Skills: Design and Preliminary Evaluation,” CWRU ShowCASE.

Xiao Liu, Daniel Hayosh, Kiju Lee. “Woody: A New Prototype of Social Robotic Platform Integrated with Facial Emotion Recognition for Real-time Application,” CWRU ShowCASE.