Polaris

Open-ended Interactive Robotic Manipulation via Syn2Real Visual Grounding and Large Language Models

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024

Fudan University

Polaris: a tabletop-level robotic manipulation framework centered on Syn2Real visual grounding and driven by open-ended interaction with GPT-4. Users engage in continuous, open-ended interaction with the LLM, which maintains an ongoing comprehension of the scene. 3D synthetic data is integrated into the training of the grounded vision modules to facilitate the execution of real-world robotic tasks.

Abstract

This paper investigates the task of open-ended interactive robotic manipulation in tabletop scenarios.

While recent Large Language Models (LLMs) enhance robots' comprehension of user instructions, their lack of visual grounding constrains their ability to physically interact with the environment. This is because the robot needs to locate the target object for manipulation within the physical workspace. To this end, we introduce an interactive robotic manipulation framework called Polaris, which integrates perception and interaction by utilizing GPT-4 alongside grounded vision models.

For precise manipulation, it is essential that such grounded vision models produce a detailed pose for the target object, rather than merely identifying the pixels that belong to it in the image. Consequently, we propose a novel Synthetic-to-Real (Syn2Real) pose estimation pipeline. This pipeline is trained on rendered synthetic data and then transferred to real-world manipulation tasks. Its real-world performance demonstrates the efficacy of the proposed pipeline and underscores its potential for extension to more general object categories. Moreover, real-robot experiments showcase the strong performance of our framework in grasping and executing multiple manipulation tasks, indicating its potential to generalize to scenarios beyond the tabletop.



Method Overview

Overview of Polaris. (a) 3D synthetic data rendering. During rendering, we automatically generate diverse synthetic data by loading 3D model assets into a simulation engine and deploying a dynamic virtual camera. We use Fibonacci sphere sampling to select rendering viewpoints and generate the corresponding RGB images, depth maps, poses, and observable point clouds. (b) The vision-centric robotic task pipeline. Given an image of the scene, GPT-4, prompted as a scene perception and interaction LLM, interprets user instructions and describes the relevant objects and tasks. Our parser then interprets these descriptions. We freeze the pre-trained detector and segmentation model within the grounded vision models and use the synthetic dataset to train the category-level pose estimation model. After retrieving object attributes, the model predicts poses from the scene, allowing a 6D-pose-based robot manipulation planner to execute real-world tasks.
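As a concrete illustration of the viewpoint selection step, the snippet below is a minimal sketch of Fibonacci sphere sampling for placing virtual cameras roughly uniformly around an object. The function names, parameters (e.g., num_views, radius), and camera-axis convention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fibonacci_sphere_viewpoints(num_views: int = 64, radius: float = 1.0) -> np.ndarray:
    """Sample roughly uniform camera positions on a sphere via the Fibonacci lattice."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~2.4 rad azimuth step between points
    indices = np.arange(num_views)
    # z runs from near +1 down to near -1 so the samples cover the whole sphere
    z = 1.0 - 2.0 * (indices + 0.5) / num_views
    r_xy = np.sqrt(1.0 - z * z)                    # radius of the horizontal circle at height z
    theta = golden_angle * indices                 # azimuth advances by the golden angle
    x, y = r_xy * np.cos(theta), r_xy * np.sin(theta)
    return radius * np.stack([x, y, z], axis=1)    # (num_views, 3) camera centers

def look_at_rotation(cam_pos: np.ndarray, target: np.ndarray = np.zeros(3)) -> np.ndarray:
    """Rotation whose third column (camera z axis) points at the target.
    Axis conventions differ between rendering engines, so adapt as needed."""
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    up = np.array([0.0, 0.0, 1.0])
    if abs(forward @ up) > 0.99:                   # avoid a degenerate up vector near the poles
        up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    return np.stack([right, true_up, forward], axis=1)

if __name__ == "__main__":
    # Place 8 virtual cameras on a 0.8 m sphere, each looking at the origin.
    for p in fibonacci_sphere_viewpoints(num_views=8, radius=0.8):
        R = look_at_rotation(p)
        print(p.round(3), R.round(2).tolist())
```

Each sampled position, paired with its look-at rotation, defines one rendering viewpoint from which RGB, depth, pose, and visible point-cloud data can be exported.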

Example Qualitative Results of our Syn2Real Pose Estimation

Test results on single-object scenes. We present a subset of the visualization results for pose and size estimation using the trained MVPoseNet6D model. The outcomes are visualized with a tight oriented 3D bounding box and colored XYZ axes.
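For reference, the following is a minimal sketch of how a predicted rotation, translation, and size could be turned into the eight corners of such an oriented 3D bounding box for visualization; MVPoseNet6D's actual output format is not given here, so the variable names and layout are assumptions.

```python
import numpy as np

def oriented_box_corners(R: np.ndarray, t: np.ndarray, size: np.ndarray) -> np.ndarray:
    """Return the 8 corners (8, 3) of a box with extents `size`, rotated by R and centered at t."""
    sx, sy, sz = size / 2.0
    # Corners in the object's canonical frame, ordered by (x, y, z) sign combinations.
    local = np.array([[x, y, z] for x in (-sx, sx) for y in (-sy, sy) for z in (-sz, sz)])
    return local @ R.T + t  # rotate into the camera/world frame, then translate

# Example with an identity rotation: an axis-aligned 10 x 6 x 20 cm box at the origin.
corners = oriented_box_corners(np.eye(3), np.zeros(3), np.array([0.10, 0.06, 0.20]))
```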

The same object under multiple views. We show the estimated pose of a bottle from different viewpoints.

Multiple objects under the same view. We show pose estimates for different objects in several cluttered scenes.

Examples of Open-ended Interactive Real-robot Experiments

Manipulation tasks in three different base scenes are presented, including excerpts from the interaction between the user and the LLM, the pose estimation results for the manipulated objects in each scene, and keyframes of the robot manipulation. Scene A: stack bottles on the table. Scene B: tidy the items on the workbench. Scene C: a compositional task that considers the affordances of objects after a sudden collision.

Acknowledgements

We thank the CFFF platform of Fudan University and SAPIEN for providing a lightweight rendering engine. This work was supported in part by the Shanghai Platform for Neuromorphic and AI Chip under Grant 17DZ2260900 (NeuHelium).