Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP

1The University of Texas at Austin, 2Meta Reality Labs, 3Cruise
CVPR 2023

Our model can perform open-vocabulary segmentation with arbitrary, user-defined text queries.

Abstract

Open-vocabulary semantic segmentation aims to segment an image into semantic regions according to text descriptions, which may not have been seen during training. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision-language models, e.g., CLIP, to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP's generalization ability. Along with finetuning the entire model, we utilize the "blank" areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings significant improvement without modifying any weights of CLIP, and it can further improve a fully finetuned model. In particular, when trained on COCO and evaluated on ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models from 2017 without dataset-specific adaptations.
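
The mask prompt tuning described above can be pictured with a short sketch. This is a minimal illustration under our own assumptions (learnable per-position tokens injected after CLIP's patch-embedding layer; the class name `MaskPromptTuner` and the per-position vs. shared-token choice are ours), not the released OVSeg implementation:

```python
import torch
import torch.nn as nn

class MaskPromptTuner(nn.Module):
    """Sketch: learnable "mask prompt" tokens replace the patch tokens that fall
    entirely inside the blanked-out region of a masked image, while the CLIP
    ViT weights stay frozen."""

    def __init__(self, num_patches: int, embed_dim: int):
        super().__init__()
        # One learnable token per patch position (a single shared token is an
        # equally plausible variant; this choice is an assumption).
        self.mask_prompts = nn.Parameter(torch.zeros(num_patches, embed_dim))
        nn.init.normal_(self.mask_prompts, std=0.02)

    def forward(self, patch_tokens: torch.Tensor, patch_is_blank: torch.Tensor) -> torch.Tensor:
        # patch_tokens:   (B, N, D) patch embeddings of the masked image
        # patch_is_blank: (B, N)    True where a patch is fully masked out
        blank = patch_is_blank.unsqueeze(-1).type_as(patch_tokens)  # (B, N, 1)
        prompts = self.mask_prompts.unsqueeze(0)                    # (1, N, D)
        # Keep real content where the patch is visible; inject prompts where it is blank.
        return patch_tokens * (1.0 - blank) + prompts * blank
```

Because the prompts only fill positions that carry no image content, they can be trained alone with CLIP frozen, or jointly with full finetuning, matching the two settings discussed in the abstract.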

Motivation

Our analysis reveals that the pre-trained CLIP model does not perform well on masked proposal regions, making it the performance bottleneck of two-stage approaches.

(a) CLIP is pre-trained on natural images with little data augmentation.

(b) Two-stage open-vocabulary semantic segmentation approaches first generate class-agnostic mask proposals and then leverage pre-trained CLIP for open-vocabulary classification. The inputs to the CLIP model are cropped, masked images, which have a large domain gap from natural images.

(c) Our analysis reveals that pre-trained CLIP does not work well on masked images.
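
The second stage of this pipeline can be sketched as follows, using the openai `clip` package. This is a minimal illustration, not the OVSeg code: the prompt template, proposal generation, and how scores are ensembled with the segmentation model are simplified, and the `crop_masked_region` / `classify_masked_region` helpers are names we introduce here.

```python
import numpy as np
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def crop_masked_region(image: Image.Image, mask: np.ndarray) -> Image.Image:
    """Zero out pixels outside the proposal and crop its bounding box."""
    arr = np.array(image)
    arr[~mask.astype(bool)] = 0
    ys, xs = np.where(mask.astype(bool))
    return Image.fromarray(arr[ys.min():ys.max() + 1, xs.min():xs.max() + 1])

@torch.no_grad()
def classify_masked_region(image, mask, class_names):
    """Score one cropped, masked proposal against candidate class names with CLIP."""
    image_input = preprocess(crop_masked_region(image, mask)).unsqueeze(0).to(device)
    text_input = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

    img_feat = model.encode_image(image_input)
    txt_feat = model.encode_text(text_input)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)

    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)
    return class_names[probs.argmax().item()]
```

The cropped, masked inputs produced this way are exactly the images on which pre-trained CLIP underperforms, which is the domain gap panel (b) and (c) point to.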


Method

Our model consists of a segmentation model (e.g., MaskFormer) and a CLIP model.

We first train a modified MaskFormer as the open-vocabulary segmentation baseline (Section 3.1). Then we collect diverse mask-category pairs from image captions (Section 3.2) and adapt CLIP for masked images (Section 3.3).
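
A rough picture of the data collection in Section 3.2, under simplified assumptions: naive POS-based noun extraction and greedy one-best matching stand in for the actual mining procedure, `mine_mask_category_pairs` is a name we introduce, and `crop_masked_region` reuses the blank-and-crop helper from the sketch in the Motivation section.

```python
import nltk   # assumes the 'punkt' and POS-tagger resources are downloaded
import torch
import clip

def extract_nouns(caption: str) -> list[str]:
    """Naive noun extraction from a caption via POS tagging (assumption)."""
    tokens = nltk.word_tokenize(caption)
    return [word for word, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]

@torch.no_grad()
def mine_mask_category_pairs(image, proposals, caption, model, preprocess, device="cuda"):
    """proposals: binary masks from a class-agnostic proposal generator.
    Returns (mask, noun) pairs by assigning each masked crop its best-matching
    caption noun under CLIP similarity -- noisy but diverse supervision."""
    nouns = extract_nouns(caption)
    if not nouns:
        return []
    text = clip.tokenize([f"a photo of a {n}" for n in nouns]).to(device)
    txt_feat = model.encode_text(text)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)

    pairs = []
    for mask in proposals:
        crop = crop_masked_region(image, mask)   # same blank-and-crop step as above
        img_feat = model.encode_image(preprocess(crop).unsqueeze(0).to(device))
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        best = int((img_feat @ txt_feat.T).argmax())
        pairs.append((mask, nouns[best]))
    return pairs
```

The resulting mask-category pairs are then used to adapt CLIP to masked images (Section 3.3), rather than relying on fixed-class segmentation labels such as COCO-Stuff.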

Results

For the first time, we show that open-vocabulary generalist models can match the performance of supervised specialist models without dataset-specific adaptations.

All numbers are mIoU (%). A-847 / A-150: ADE20K with 847 / 150 classes; PC-459 / PC-59: Pascal Context with 459 / 59 classes; PAS-20: Pascal VOC with 20 classes.

| Method | Backbone | Training dataset | A-847 | PC-459 | A-150 | PC-59 | PAS-20 |
|---|---|---|---|---|---|---|---|
| Open-vocabulary generalist models | | | | | | | |
| SPNet | R-101 | PASCAL-15 | - | - | - | 24.3 | 18.3 |
| ZS3Net | R-101 | PASCAL-15 | - | - | - | 19.4 | 38.3 |
| LSeg | R-101 | PASCAL-15 | - | - | - | - | 47.4 |
| LSeg+ | R-101 | COCO Panoptic | 2.5 | 5.2 | 13.0 | 36.0 | 59.0 |
| SimBaseline | R-101c | COCO-Stuff-156 | - | - | 15.3 | - | 74.5 |
| ZegFormer | R-50 | COCO-Stuff-156 | - | - | 16.4 | - | 80.7 |
| OpenSeg | R-101 | COCO Panoptic | 4.0 | 6.5 | 15.3 | 36.9 | 60.0 |
| OVSeg (Ours) | R-101c | COCO-Stuff-171 | 7.1 | 11.0 | 24.8 | 53.3 | 92.6 |
| LSeg+ | Eff-B7 | COCO Panoptic | 3.8 | 7.8 | 18.0 | 46.5 | - |
| OpenSeg | Eff-B7 | COCO Panoptic | 6.3 | 9.0 | 21.1 | 42.1 | - |
| OVSeg (Ours) | Swin-B | COCO-Stuff-171 | 9.0 | 12.4 | 29.6 | 55.7 | 94.5 |
| Supervised specialist models | | | | | | | |
| FCN | FCN-8s | Same as test | - | - | 29.4 | 37.8 | - |
| Deeplab | R-101 | Same as test | - | - | - | 45.7 | 77.7 |
| SelfTrain | Eff-L2 | Same as test | - | - | - | - | 90.0 |

BibTeX

@inproceedings{liang2023open,
  title={Open-vocabulary semantic segmentation with mask-adapted {CLIP}},
  author={Liang, Feng and Wu, Bichen and Dai, Xiaoliang and Li, Kunpeng and Zhao, Yinan and Zhang, Hang and Zhang, Peizhao and Vajda, Peter and Marculescu, Diana},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={7061--7070},
  year={2023}
}