MIT researchers and collaborators have developed an efficient computer vision model that enables autonomous vehicles to rapidly and accurately recognise objects, even in high-resolution images. The model reduces computational complexity, allowing real-time semantic segmentation on devices with limited hardware resources, such as those used in autonomous vehicles, where quick decision-making is essential.
Recent cutting-edge semantic segmentation models directly capture pixel interactions in images, causing computational demands to grow quadratically with image resolution, which rules out real-time processing on edge devices such as sensors or mobile phones.
MIT researchers have introduced a novel building block for semantic segmentation models that matches the capabilities of state-of-the-art models but with linear computational complexity and hardware-friendly operations. As a result, the new model series enhances high-resolution computer vision, running up to nine times faster on mobile devices while maintaining or surpassing accuracy.
Beyond aiding real-time decisions in autonomous vehicles, this technique has potential applications to enhance efficiency in other high-resolution computer vision tasks, including medical image segmentation.
According to Song Han, a senior author of the paper and an associate professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT, although traditional vision transformers have been in use for a significant period and yield impressive results, the researchers aim to draw attention to the efficiency dimension of these models. Their research demonstrates the feasibility of significantly reducing computational requirements, enabling real-time image segmentation to occur locally on a device.
Han acknowledged that categorising each pixel in a high-resolution image with potentially millions of pixels poses an intricate challenge for machine learning models. He explained that recently, a highly effective model called a vision transformer has emerged, proving its effectiveness in addressing this challenge.
Initially designed for natural language processing, transformers represent each word in a sentence as a token and then create an attention map that captures the relationships between all tokens, facilitating contextual understanding during predictions. Similarly, a vision transformer applies this concept to images by dividing them into patches of pixels and encoding each patch into a token before generating an attention map.
This attention map relies on a similarity function to directly learn pixel interactions, resulting in a global receptive field that allows the model to access all relevant parts of the image. However, when dealing with high-resolution images comprising millions of pixels organised into thousands of patches, the attention map becomes exceedingly large, leading to quadratic growth in computational demands as image resolution increases.
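The quadratic blow-up described above can be seen in a minimal NumPy sketch of standard softmax attention (an illustrative toy, not the paper's implementation):

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: builds an explicit (N, N) map of pairwise
    token similarities, so memory and compute grow quadratically in N."""
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])     # (N, N) attention map
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Doubling image resolution quadruples the patch count N, and the
# (N, N) attention map then grows 16x.
for N in (256, 1024, 4096):
    print(f"N={N:5d} tokens -> attention map holds {N * N:,} entries")
```

Because the map must be materialised for every layer and head, high-resolution inputs quickly exhaust the memory and compute budgets of edge hardware.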
In their novel model series, known as EfficientViT, the MIT researchers simplified the creation of the attention map by substituting the nonlinear similarity function with a linear one. This alteration allowed them to rearrange the order of operations, reducing the overall computational workload without altering functionality or sacrificing the global receptive field. Consequently, their model exhibits linear growth in computation requirements as image resolution increases.
However, this linear attention approach primarily captures global image context, leading to a decline in accuracy due to the loss of local information. The researchers integrated two additional components into their model to address this accuracy loss, each incurring minimal computational overhead. One of these elements assists in capturing local feature interactions, compensating for the linear function’s limitations in local information extraction.
The second element, a module enabling multiscale learning, facilitates recognising large and small objects. The researchers emphasised the delicate balance between performance and efficiency in their design. EfficientViT is engineered with a hardware-friendly architecture, making it suitable for deployment on various devices, such as virtual reality headsets and edge computers in autonomous vehicles. Moreover, this model can be applied to diverse computer vision tasks, including image classification.
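The article does not detail these two modules, but one common way to inject local feature interactions into an attention block is a small depthwise convolution over the patch grid, where each channel mixes a token only with its spatial neighbours. The following is a hypothetical sketch of that idea, not EfficientViT's actual module:

```python
import numpy as np

def depthwise_conv3x3(tokens, kernels):
    """Hypothetical local-feature module: a 3x3 depthwise convolution
    over the patch grid mixes each token with its spatial neighbours,
    supplying the local detail that a purely global attention can miss.
    tokens:  (H, W, C) grid of patch features
    kernels: (3, 3, C) -- one 3x3 filter per channel (depthwise)."""
    H, W, C = tokens.shape
    padded = np.pad(tokens, ((1, 1), (1, 1), (0, 0)))  # zero-pad borders
    out = np.zeros_like(tokens)
    for dy in range(3):
        for dx in range(3):
            # Each shifted slice contributes one tap of the 3x3 filter.
            out += padded[dy:dy + H, dx:dx + W, :] * kernels[dy, dx]
    return out

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 4))       # 8x8 patch grid, 4 channels
mixed = depthwise_conv3x3(feats, rng.standard_normal((3, 3, 4)))
print(mixed.shape)                           # (8, 8, 4)
```

A depthwise convolution costs only O(HWC), so it adds negligible overhead next to the attention block; multiscale recognition is typically handled separately, for example by aggregating features pooled at several resolutions.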
In tests on semantic segmentation datasets, the researchers found their model, EfficientViT, performed up to nine times faster on Nvidia GPUs compared to other popular vision transformer models, maintaining or surpassing accuracy. This advancement enables the model to run efficiently on mobile and cloud devices. The researchers plan to extend this technique to accelerate generative machine-learning models and develop EfficientViT for various vision-related tasks.