Recently, researchers from New York University and Facebook have proposed MDETR, or Modulated Detection for End-to-End Multi-Modal Understanding, a model that detects objects in an image conditioned on a raw text query, such as a caption or a question.

Object detection is a key component in most cutting-edge multi-modal understanding systems, typically used as a black box to detect a fixed vocabulary of concepts in an image. This popular approach does not account for downstream multi-modal understanding tasks and ends up limiting performance.

Methodology

The research team included Yann LeCun, Aishwarya Kamath and Nicolas Carion from NYU and Mannat Singh, Ishan Misra and Gabriel Synnaeve from Facebook.

“We use a pre-trained RoBERTa-base as our text encoder, having 12 transformer encoder layers, each with the hidden dimension of 768 and 12 heads in the multi-head attention. We used the implementation and weights from HuggingFace,” researchers said.
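For context, loading this text encoder from HuggingFace looks roughly like the following. This is a minimal sketch using the `transformers` library and a sample query chosen for illustration, not the authors' actual training code:

```python
import torch
from transformers import RobertaTokenizerFast, RobertaModel

# RoBERTa-base: 12 transformer encoder layers, hidden size 768, 12 attention heads
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
text_encoder = RobertaModel.from_pretrained("roberta-base")

# Encode a free-form query such as a caption or a question
tokens = tokenizer("a pink elephant", return_tensors="pt")
with torch.no_grad():
    text_features = text_encoder(**tokens).last_hidden_state  # shape: (1, seq_len, 768)
```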

In a typical pipeline, the downstream reasoning module only has access to the detected objects rather than the whole image, which further limits multi-modal comprehension. MDETR instead fuses the two modalities at an early stage of the model and uses a transformer-based architecture to reason jointly over text and image: a CNN extracts the visual features, while the RoBERTa language model encodes the text features.

“The features of both modalities are projected to a shared embedding space, concatenated and fed to a transformer encoder-decoder that predicts the bounding boxes of the objects and their grounding in text,” as per the paper. The network was trained on 1.3 million text-image pairs drawn from pre-existing multi-modal datasets with explicit alignment between phrases in the text and objects in the images.
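Conceptually, the fusion step described in that quote can be sketched in PyTorch as follows. This is a simplified illustration with assumed dimensions and module names, not the authors' implementation:

```python
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    """Toy early-fusion model: project image and text features to a shared
    embedding space, concatenate them, and run a transformer encoder-decoder
    whose queries predict bounding boxes."""

    def __init__(self, img_dim=2048, txt_dim=768, d_model=256, num_queries=100):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)   # project CNN features
        self.txt_proj = nn.Linear(txt_dim, d_model)   # project RoBERTa features
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.queries = nn.Embedding(num_queries, d_model)
        self.bbox_head = nn.Linear(d_model, 4)        # (cx, cy, w, h) per query

    def forward(self, img_feats, txt_feats):
        # img_feats: (B, N_pixels, img_dim), txt_feats: (B, N_tokens, txt_dim)
        fused = torch.cat([self.img_proj(img_feats),
                           self.txt_proj(txt_feats)], dim=1)
        queries = self.queries.weight.unsqueeze(0).expand(img_feats.size(0), -1, -1)
        decoded = self.transformer(src=fused, tgt=queries)
        return self.bbox_head(decoded).sigmoid()      # normalized box coordinates

model = SimpleFusion()
boxes = model(torch.randn(1, 49, 2048), torch.randn(1, 8, 768))  # (1, 100, 4)
```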

“Our method, MDETR, is an end-to-end modulated detector based on the recent DETR detection framework and performs object detection in conjunction with natural language understanding, enabling truly end-to-end multi-modal reasoning. MDETR relies solely on text and aligned boxes as a form of supervision for concepts in an image. Thus, unlike current detection methods, MDETR detects nuanced concepts from the free-form text and generalizes to unseen combinations of categories and attributes,” as per the research team. 

Consider two pictures: (a) a picture with three elephants, and (b) the same picture, but with the elephants coloured.

Traditional models, even after training on millions of images, will identify only the elephants in pictures (a) and (b), not attributes such as their colour. With MDETR, the researchers gave the model the query “a pink elephant”; the output grounds that phrase to the correct elephant and returns the result in textual form as well, even though the model never saw pink or blue elephants during training.

To assess MDETR’s results, the researchers ran tests on the CLEVR dataset. For these experiments, the convolutional backbone was a ResNet-18 pre-trained on ImageNet, the text encoder was a pre-trained DistilRoBERTa, and the final transformer was the same as in DETR.
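Assembling those pre-trained components is straightforward with torchvision and HuggingFace; the snippet below is only a hedged illustration of that configuration (dummy input sizes and query included for demonstration), not the released training code:

```python
import torch
import torchvision
from transformers import AutoTokenizer, AutoModel

# ResNet-18 backbone pre-trained on ImageNet; drop the pooling and classification head
resnet18 = torchvision.models.resnet18(pretrained=True)
backbone = torch.nn.Sequential(*list(resnet18.children())[:-2])  # keeps the spatial feature map

# DistilRoBERTa text encoder from HuggingFace
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
text_encoder = AutoModel.from_pretrained("distilroberta-base")

image = torch.randn(1, 3, 224, 224)                     # dummy image tensor
visual_feats = backbone(image)                          # (1, 512, 7, 7)
tokens = tokenizer("Is there a red cube?", return_tensors="pt")
text_feats = text_encoder(**tokens).last_hidden_state   # (1, seq_len, 768)
```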

“We presented MDETR, a fully differentiable modulated detector. We established its strong performance on multi-modal understanding tasks on a variety of datasets and demonstrated its potential in other downstream applications such as few-shot detection and visual question answering. We hope that this work opens up new opportunities to develop fully integrated multi-modal architectures without relying on black-box object detectors,” researchers said.
