Sep 2023 – Sep 2025

[ICRA26] Soft Robotics Perception Research

First Author

Submitted for Publication:

  • Qinsong Guo*, Yang Ke*, Hanwen Zhao, Haohan Fang, Haoxuan Wang, Chen Feng, "Visual-Auditory Proprioception of Soft Finger Shape and Contact," ICRA 2026.

Previously, we found that while internal vision helps reconstruct the soft finger's overall shape, it fails when the finger bends sharply or when external contacts occlude the camera's field of view. To address large bending and occlusion, we introduced audio as a complementary sensing modality: sound can travel around corners and act as a "spatial sensor" in regions the camera cannot see. We demonstrated that fusing auditory and visual sensing makes both pose and contact estimation more robust across complex deformation scenarios.
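The fusion idea above can be sketched as a simple late-fusion regressor: encode each modality separately, concatenate the embeddings, and regress a contact location. This is a minimal illustrative sketch, not the submitted architecture; all names, dimensions, and the random weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for a modality encoder."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Hypothetical feature dimensions (assumptions, not from the paper).
D_VIS, D_AUD, D_EMB = 128, 64, 32

# Randomly initialized weights stand in for trained parameters.
w_v1, b_v1 = rng.normal(size=(D_VIS, 64)) * 0.1, np.zeros(64)
w_v2, b_v2 = rng.normal(size=(64, D_EMB)) * 0.1, np.zeros(D_EMB)
w_a1, b_a1 = rng.normal(size=(D_AUD, 64)) * 0.1, np.zeros(64)
w_a2, b_a2 = rng.normal(size=(64, D_EMB)) * 0.1, np.zeros(D_EMB)
# Fusion head maps the concatenated embeddings to a 3-D contact location.
w_f, b_f = rng.normal(size=(2 * D_EMB, 3)) * 0.1, np.zeros(3)

def predict_contact(visual_feat, audio_feat):
    """Late fusion: encode each modality, concatenate, regress contact xyz."""
    z_v = mlp(visual_feat, w_v1, b_v1, w_v2, b_v2)
    z_a = mlp(audio_feat, w_a1, b_a1, w_a2, b_a2)
    return np.concatenate([z_v, z_a]) @ w_f + b_f

xyz = predict_contact(rng.normal(size=D_VIS), rng.normal(size=D_AUD))
print(xyz.shape)  # (3,)
```

The key property this sketch captures is that the audio embedding contributes independently of the visual one, so the prediction degrades gracefully when the camera view is occluded.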

My contributions:

  • Proposed and implemented a deep learning network that fuses auditory and visual data for robust contact localization.

  • Extended a FoldingNet-based encoder-decoder framework that first reconstructs the global finger shape, then local contacts, using the auditory modality to overcome severe visual occlusion.

  • Designed and fabricated an exoskeleton system to actuate the soft robotic gripper, utilizing CAD and advanced manufacturing to balance dexterity and payload.

This work has been submitted to ICRA 2026.

Finger Demo:


[Sci. Rep.] High Performance Wearable Joint Sensor