SegFormer++

Paper: Segformer++: Efficient Token-Merging Strategies for High-Resolution Semantic Segmentation

Abstract

Utilizing transformer architectures for semantic segmentation of high-resolution images is hindered by the attention's quadratic computational complexity in the number of tokens. A solution to this challenge involves decreasing the number of tokens through token merging, which has exhibited remarkable enhancements in inference speed, training efficiency, and memory utilization for image classification tasks. In this paper, we explore various token merging strategies within the framework of the SegFormer architecture and perform experiments on multiple semantic segmentation and human pose estimation datasets. Notably, without model re-training, we, for example, achieve an inference acceleration of 61% on the Cityscapes dataset while maintaining the mIoU performance. Consequently, this paper facilitates the deployment of transformer-based architectures on resource-constrained devices and in real-time applications.
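As a rough, back-of-the-envelope illustration of why token merging helps (the exact numbers depend on the architecture; the stride-4 patching below matches the first SegFormer stage), halving the number of tokens cuts the pairwise attention cost by roughly a factor of four:

```python
# Illustrative arithmetic only: self-attention scales quadratically with the
# number of tokens N, so reducing N pays off quadratically.
def attention_cost(n_tokens: int) -> int:
    return n_tokens ** 2  # number of pairwise token interactions

n = (1024 // 4) * (2048 // 4)  # Cityscapes-sized input, stride-4 patch embedding
n_merged = n // 2              # after merging away half of the tokens

print(n, n_merged, attention_cost(n) / attention_cost(n_merged))  # ... 4.0
```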

Update: It is now possible to load the model via torch.hub. See the Usage section below.

Update: The model can now be run without any OpenMMLab dependencies, so the SegFormerPlusPlus architecture can be used without installing the full OpenMMLab framework.

Results and Models

Memory refers to the VRAM requirements during the training process.

Inference on Cityscapes (MiT-B5)

The inference results were obtained with the weights of the Segformer (Original) model, i.e., token merging was applied without re-training.

| Method | mIoU | Speed-Up | Download |
| --- | --- | --- | --- |
| Segformer (Original) | 82.39 | - | model |
| Segformer++HQ (ours) | 82.31 | 1.61 | model |
| Segformer++fast (ours) | 82.04 | 1.94 | model |
| Segformer++2x2 (ours) | 81.96 | 1.90 | model |
| Segformer (Downsampling) | 77.31 | 6.51 | model |

Training on Cityscapes (MiT-B5)

| Method | mIoU | Speed-Up | Memory (GB) | Download |
| --- | --- | --- | --- | --- |
| Segformer (Original) | 82.39 | - | 48.3 | model |
| Segformer++HQ (ours) | 82.19 | 1.40 | 34.0 | model |
| Segformer++fast (ours) | 81.77 | 1.55 | 30.5 | model |
| Segformer++2x2 (ours) | 82.38 | 1.63 | 31.1 | model |
| Segformer (Downsampling) | 79.24 | 2.95 | 10.0 | model |

Training on ADE20K (640x640) (MiT-B5)

| Method | mIoU | Speed-Up | Memory (GB) | Download |
| --- | --- | --- | --- | --- |
| Segformer (Original) | 49.72 | - | 33.7 | model |
| Segformer++HQ (ours) | 49.77 | 1.15 | 29.2 | model |
| Segformer++fast (ours) | 49.10 | 1.20 | 28.0 | model |
| Segformer++2x2 (ours) | 49.35 | 1.26 | 27.2 | model |
| Segformer (Downsampling) | 46.71 | 1.89 | 12.4 | model |

Training on JBD

| Method | PCK@0.1 | PCK@0.05 | Speed-Up | Memory (GB) | Download |
| --- | --- | --- | --- | --- | --- |
| Segformer (Original) | 95.20 | 90.65 | - | 40.0 | model |
| Segformer++HQ (ours) | 95.18 | 90.51 | 1.19 | 36.0 | model |
| Segformer++fast (ours) | 94.58 | 89.87 | 1.25 | 34.6 | model |
| Segformer++2x2 (ours) | 95.17 | 90.16 | 1.27 | 33.4 | model |

Training on MS COCO

| Method | PCK@0.1 | PCK@0.05 | Speed-Up | Memory (GB) | Download |
| --- | --- | --- | --- | --- | --- |
| Segformer (Original) | 95.16 | 87.61 | - | 13.5 | model |
| Segformer++HQ (ours) | 94.97 | 87.35 | 0.97 | 13.1 | model |
| Segformer++fast (ours) | 95.02 | 87.37 | 0.99 | 12.9 | model |
| Segformer++2x2 (ours) | 94.98 | 87.36 | 1.24 | 12.3 | model |

Usage

Easy usage:
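A minimal loading sketch via torch.hub is shown below. The entrypoint name and its arguments are assumptions for illustration; check the repository's hubconf.py (https://github.com/KieDani/SegformerPlusPlus) for the names that are actually exposed.

```python
import torch

# NOTE: 'segformer_plusplus' and its arguments are hypothetical placeholders;
# consult the repository's hubconf.py for the real entrypoints and options.
model = torch.hub.load('KieDani/SegformerPlusPlus', 'segformer_plusplus', pretrained=True)
model.eval()

# Dummy high-resolution input: (batch, channels, height, width).
x = torch.randn(1, 3, 1024, 2048)
with torch.no_grad():
    logits = model(x)  # per-pixel class logits for semantic segmentation
print(logits.shape)
```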

Explanation of the different token merging strategies:
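As an illustrative sketch only: the SegFormer++ variants reduce the number of tokens in the encoder's attention blocks, in the spirit of ToMe-style bipartite soft matching; the exact HQ, fast and 2x2 configurations are defined in the paper and the GitHub repository. A minimal, self-contained sketch of such a merging step (not the SegFormer++ implementation):

```python
import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Illustrative bipartite soft matching: merge away the r most similar token pairs.

    x: token tensor of shape (B, N, C)."""
    B, N, C = x.shape
    a, b = x[:, ::2, :], x[:, 1::2, :]            # split tokens into two alternating sets
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).transpose(-1, -2)
    best_val, best_idx = sim.max(dim=-1)          # best partner in b for each token in a
    order = best_val.argsort(dim=-1, descending=True)
    src, keep = order[:, :r], order[:, r:]        # merge the r most similar pairs
    dst = best_idx.gather(1, src)

    # Average each merged token from a into its partner in b.
    b = b.scatter_reduce(
        1, dst.unsqueeze(-1).expand(-1, -1, C),
        a.gather(1, src.unsqueeze(-1).expand(-1, -1, C)),
        reduce="mean", include_self=True,
    )
    a = a.gather(1, keep.unsqueeze(-1).expand(-1, -1, C))
    return torch.cat([a, b], dim=1)               # N - r tokens remain

tokens = torch.randn(2, 1024, 64)
print(merge_tokens(tokens, r=256).shape)          # torch.Size([2, 768, 64])
```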

Legacy variants (e.g., for use within mmseg or mmpose) can be found on GitHub: https://github.com/KieDani/SegformerPlusPlus

Citation

```bibtex
@article{kienzle2024segformer++,
  title={Segformer++: Efficient Token-Merging Strategies for High-Resolution Semantic Segmentation},
  author={Kienzle, Daniel and Kantonis, Marco and Sch{\"o}n, Robin and Lienhart, Rainer},
  journal={IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR)},
  year={2024}
}
```