Inspired by humans’ competence in dealing with unfamiliar items, a group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created Feature Fields for Robotic Manipulation (F3RM). This innovative system integrates 2D images with features from vision foundation models to build 3D scenes, helping robots identify and grasp objects in their vicinity. F3RM is distinguished by its proficiency in understanding unstructured language commands from humans, making it especially beneficial in practical scenarios with abundant objects, such as warehouses and households.
F3RM endows robots with the capability to interpret and act on open-ended textual instructions expressed in natural language, thereby enhancing their object manipulation skills. Consequently, these machines can comprehend less specific human requests and accomplish the intended tasks. For instance, when a user instructs the robot to “pick up a tall mug,” the robot can efficiently identify and handle an item that best matches this description.
Ge Yang, a postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL, emphasises the challenge of creating robots capable of generalising in real-world scenarios. The objective is to equip robots with flexibility akin to humans, enabling them to grasp and position objects, even when encountering them for the first time.
This method could enhance robots’ ability to pick items in busy fulfilment centres characterised by clutter and unpredictability. Robots in such warehouses are often tasked with matching textual descriptions to objects, regardless of packaging variations, to ensure accurate order shipping.
For instance, in vast online retail fulfilment centres housing millions of items, many of which may be unfamiliar to robots, F3RM’s advanced spatial and semantic perception could help robots efficiently locate, place, and package items. This efficiency benefits factory workers and enhances order shipping.
Moreover, F3RM’s versatility extends to urban and household settings, where personalised robots can identify and pick specific items. The system aids robots in understanding their physical and perceptual surroundings.
Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator, highlights the combination of advanced visual recognition and radiance fields as highly beneficial for robotic tasks, particularly those involving 3D object manipulation in various environments.
F3RM begins building its spatial understanding by capturing images with a camera mounted on a selfie stick. The camera takes 50 images from various angles, which are used to construct a neural radiance field (NeRF), a deep learning technique that reconstructs a 3D scene from 2D images. Together, these RGB images form a comprehensive “digital twin” of the surroundings, offering a 360-degree view.
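As a rough illustration of the NeRF idea described above, the sketch below shows the two core ingredients: a small neural field that maps a 3D point to a density and a colour, and volume rendering that composites those values along a camera ray into a pixel. The weights here are random toy values (a real NeRF fits them to the ~50 captured photos), and all dimensions and names are illustrative, not from the F3RM codebase.

```python
import numpy as np

def positional_encoding(x, n_freqs=4):
    """Map a 3D point to sin/cos features at several frequencies,
    as in NeRF, so the MLP can represent fine detail."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(np.sin(2.0**i * np.pi * x))
        feats.append(np.cos(2.0**i * np.pi * x))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
IN, HID = 3 + 2 * 4 * 3, 32        # encoded-input size, hidden width
W1 = rng.normal(0, 0.1, (HID, IN))
W2 = rng.normal(0, 0.1, (4, HID))  # outputs: density + RGB

def field(point):
    """Toy radiance field: point -> (density, rgb). Untrained weights;
    training would fit these to the captured images."""
    h = np.maximum(W1 @ positional_encoding(point), 0.0)  # ReLU MLP
    out = W2 @ h
    density = np.log1p(np.exp(out[0]))     # softplus, non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # sigmoid, in [0, 1]
    return density, rgb

def render_ray(origin, direction, n_samples=32, t_far=4.0):
    """Alpha-composite field samples along a ray (volume rendering)."""
    ts = np.linspace(0.05, t_far, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        density, rgb = field(origin + t * direction)
        alpha = 1.0 - np.exp(-density * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Fitting the weights so that rendered rays reproduce the 50 photos is what turns the flat images into the 360-degree “digital twin”.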
In addition to the neural radiance field, F3RM constructs a feature field that enriches the geometric data with semantic insight. The system utilises CLIP, a vision foundation model trained on a vast image dataset, to represent visual concepts. By distilling the 2D CLIP features of the images captured with the selfie stick into the 3D representation, F3RM lifts those semantics into three dimensions.
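One simple way to picture this lifting step: project a 3D point into every camera that can see it, look up the 2D feature at each projected pixel, and pool the results. This toy sketch uses random arrays in place of real CLIP feature maps and an averaging pool in place of F3RM’s learned feature field; the camera model, feature dimension, and function names are all assumptions for illustration.

```python
import numpy as np

def project(point, cam_pose, focal=100.0, cx=32.0, cy=32.0):
    """Pinhole projection of a world point into one camera's 64x64
    pixel grid. cam_pose is a (R, t) world-to-camera transform."""
    R, t = cam_pose
    p = R @ point + t
    if p[2] <= 0:  # behind the camera
        return None
    u = int(focal * p[0] / p[2] + cx)
    v = int(focal * p[1] / p[2] + cy)
    if 0 <= u < 64 and 0 <= v < 64:
        return u, v
    return None

def lift_features(point, cam_poses, feature_maps):
    """Average the 2D feature vectors from every view that sees the
    point -- a crude stand-in for a learned 3D feature field."""
    gathered = []
    for pose, fmap in zip(cam_poses, feature_maps):
        pix = project(point, pose)
        if pix is not None:
            u, v = pix
            gathered.append(fmap[v, u])
    return np.mean(gathered, axis=0) if gathered else None

rng = np.random.default_rng(1)
D = 16  # toy feature dimension (real CLIP embeddings are larger)
poses = [(np.eye(3), np.array([0.0, 0.0, 2.0]))] * 3  # 3 toy cameras
fmaps = [rng.normal(size=(64, 64, D)) for _ in poses]
feat3d = lift_features(np.array([0.1, 0.1, 0.5]), poses, fmaps)
```

The payoff is that every point in the 3D scene now carries a semantic vector that can be compared directly against language embeddings.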
After receiving a few demonstrations, the robot leverages its knowledge of geometry and semantics to grasp unfamiliar objects. When a user submits a text query, the robot explores various possible grasping options, selecting those with the highest likelihood of picking up the requested object. Each option’s score is based on its relevance to the prompt, its similarity to the robot’s training demonstrations, and whether it avoids collisions. The grasp with the highest score is then executed.
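The scoring logic in that paragraph can be sketched as a weighted sum: language relevance plus similarity to the demonstrated grasps, with colliding candidates filtered out. The dictionary fields, weights, and embeddings below are hypothetical stand-ins, not F3RM’s actual data structures.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_grasps(grasps, text_emb, demo_embs, w_text=1.0, w_demo=1.0):
    """Pick the best grasp: relevance to the language prompt plus
    similarity to demonstrated grasps, skipping any candidate that
    would collide with the scene. (Hypothetical weights and fields.)"""
    best, best_score = None, -np.inf
    for g in grasps:
        if g["collides"]:          # discard colliding candidates
            continue
        relevance = cosine(g["feature"], text_emb)
        demo_sim = max(cosine(g["feature"], d) for d in demo_embs)
        s = w_text * relevance + w_demo * demo_sim
        if s > best_score:
            best, best_score = g, s
    return best

rng = np.random.default_rng(2)
text = rng.normal(size=8)                       # e.g. "pick up a tall mug"
demos = [rng.normal(size=8) for _ in range(2)]  # grasp demonstrations
cands = [
    {"name": "grasp_a", "feature": rng.normal(size=8), "collides": False},
    {"name": "grasp_b", "feature": text + 0.05 * rng.normal(size=8),
     "collides": False},                        # closely matches the prompt
    {"name": "grasp_c", "feature": text, "collides": True},  # filtered out
]
chosen = score_grasps(cands, text, demos)
```

The highest-scoring collision-free grasp is the one the robot executes.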
F3RM also allows users to specify the desired object in various levels of detail using natural language. For instance, if there is both a metal mug and a glass mug, the user can request the “glass mug.” Even when multiple identical objects are present, such as two glass mugs, one filled with coffee and the other with juice, the user can specify the “glass mug with coffee.” The feature field’s embedded foundation model features facilitate this open-ended understanding.
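Because both objects and text live in the same embedding space, that kind of disambiguation reduces to a nearest-neighbour lookup: the more specific the query, the more its embedding favours one candidate over its near-duplicates. The hand-built embeddings below are toy stand-ins for CLIP vectors, chosen so the example is deterministic.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_object(objects, query_emb):
    """Return the object whose lifted feature best matches the text
    embedding -- how a query like 'glass mug with coffee' can
    disambiguate two otherwise identical mugs."""
    return max(objects, key=lambda o: cosine(o["feature"], query_emb))

# Toy axis-aligned embeddings standing in for CLIP features.
coffee_axis, juice_axis = np.eye(8)[0], np.eye(8)[1]
mug_coffee = {"name": "glass mug (coffee)", "feature": coffee_axis}
mug_juice = {"name": "glass mug (juice)", "feature": juice_axis}
query = 0.9 * coffee_axis + 0.1 * juice_axis  # "glass mug with coffee"
chosen = pick_object([mug_coffee, mug_juice], query)
```

A vaguer query such as “glass mug” would score both candidates nearly equally, which is exactly why the extra detail in the prompt matters.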