| ver | type | input_size | url |
|---|---|---|---|
| alexnet | alexnet | 224 | https://download.pytorch.org/models/alexnet-owt-7be5be79.pth |
| convnext_tiny | convnext | 224 | https://download.pytorch.org/models/convnext_tiny-983f1562.pth |
| convnext_small | convnext | 224 | https://download.pytorch.org/models/convnext_small-0c510722.pth |
| convnext_base | convnext | 224 | https://download.pytorch.org/models/convnext_base-6075fbad.pth |
| convnext_large | convnext | 224 | https://download.pytorch.org/models/convnext_large-ea097f82.pth |
| densenet121 | densenet | 224 | https://download.pytorch.org/models/densenet121-a639ec97.pth |
| densenet161 | densenet | 224 | https://download.pytorch.org/models/densenet161-8d451a50.pth |
| densenet169 | densenet | 224 | https://download.pytorch.org/models/densenet169-b2777c0a.pth |
| densenet201 | densenet | 224 | https://download.pytorch.org/models/densenet201-c1103571.pth |
| efficientnet_b0 | efficientnet | 224 | https://download.pytorch.org/models/efficientnet_b0_rwightman-7f5810bc.pth |
| efficientnet_b1 | efficientnet | 240 | https://download.pytorch.org/models/efficientnet_b1_rwightman-bac287d4.pth |
| efficientnet_b2 | efficientnet | 288 | https://download.pytorch.org/models/efficientnet_b2_rwightman-c35c1473.pth |
| efficientnet_b3 | efficientnet | 300 | https://download.pytorch.org/models/efficientnet_b3_rwightman-b3899882.pth |
| efficientnet_b4 | efficientnet | 380 | https://download.pytorch.org/models/efficientnet_b4_rwightman-23ab8bcd.pth |
| efficientnet_b5 | efficientnet | 456 | https://download.pytorch.org/models/efficientnet_b5_lukemelas-1a07897c.pth |
| efficientnet_b6 | efficientnet | 528 | https://download.pytorch.org/models/efficientnet_b6_lukemelas-24a108a5.pth |
| efficientnet_b7 | efficientnet | 600 | https://download.pytorch.org/models/efficientnet_b7_lukemelas-c5b4e57e.pth |
| efficientnet_v2_s | efficientnet | 384 | https://download.pytorch.org/models/efficientnet_v2_s-dd5fe13b.pth |
| efficientnet_v2_m | efficientnet | 480 | https://download.pytorch.org/models/efficientnet_v2_m-dc08266a.pth |
| efficientnet_v2_l | efficientnet | 480 | https://download.pytorch.org/models/efficientnet_v2_l-59c71312.pth |
| googlenet | googlenet | 224 | https://download.pytorch.org/models/googlenet-1378be20.pth |
| inception_v3 | googlenet | 299 | https://download.pytorch.org/models/inception_v3_google-0cc3c7bd.pth |
| maxvit_t | maxvit | 224 | https://download.pytorch.org/models/maxvit_t-bc5ab103.pth |
| mnasnet0_5 | mnasnet | 224 | https://download.pytorch.org/models/mnasnet0.5_top1_67.823-3ffadce67e.pth |
| mnasnet0_75 | mnasnet | 224 | https://download.pytorch.org/models/mnasnet0_75-7090bc5f.pth |
| mnasnet1_0 | mnasnet | 224 | https://download.pytorch.org/models/mnasnet1.0_top1_73.512-f206786ef8.pth |
| mnasnet1_3 | mnasnet | 224 | https://download.pytorch.org/models/mnasnet1_3-a4c69d6f.pth |
| mobilenet_v2 | mobilenet | 224 | https://download.pytorch.org/models/mobilenet_v2-b0353104.pth |
| mobilenet_v3_large | mobilenet | 224 | https://download.pytorch.org/models/mobilenet_v3_large-8738ca79.pth |
| mobilenet_v3_small | mobilenet | 224 | https://download.pytorch.org/models/mobilenet_v3_small-047dcff4.pth |
| regnet_y_400mf | regnet | 224 | https://download.pytorch.org/models/regnet_y_400mf-c65dace8.pth |
| regnet_y_800mf | regnet | 224 | https://download.pytorch.org/models/regnet_y_800mf-1b27b58c.pth |
| regnet_y_1_6gf | regnet | 224 | https://download.pytorch.org/models/regnet_y_1_6gf-b11a554e.pth |
| regnet_y_3_2gf | regnet | 224 | https://download.pytorch.org/models/regnet_y_3_2gf-b5a9779c.pth |
| regnet_y_8gf | regnet | 224 | https://download.pytorch.org/models/regnet_y_8gf-d0d0e4a8.pth |
| regnet_y_16gf | regnet | 224 | https://download.pytorch.org/models/regnet_y_16gf-9e6ed7dd.pth |
| regnet_y_32gf | regnet | 224 | https://download.pytorch.org/models/regnet_y_32gf-4dee3f7a.pth |
| regnet_x_400mf | regnet | 224 | https://download.pytorch.org/models/regnet_x_400mf-adf1edd5.pth |
| regnet_x_800mf | regnet | 224 | https://download.pytorch.org/models/regnet_x_800mf-ad17e45c.pth |
| regnet_x_1_6gf | regnet | 224 | https://download.pytorch.org/models/regnet_x_1_6gf-e3633e7f.pth |
| regnet_x_3_2gf | regnet | 224 | https://download.pytorch.org/models/regnet_x_3_2gf-f342aeae.pth |
| regnet_x_8gf | regnet | 224 | https://download.pytorch.org/models/regnet_x_8gf-03ceed89.pth |
| regnet_x_16gf | regnet | 224 | https://download.pytorch.org/models/regnet_x_16gf-2007eb11.pth |
| regnet_x_32gf | regnet | 224 | https://download.pytorch.org/models/regnet_x_32gf-9d47f8d0.pth |
| resnet18 | resnet | 224 | https://download.pytorch.org/models/resnet18-f37072fd.pth |
| resnet34 | resnet | 224 | https://download.pytorch.org/models/resnet34-b627a593.pth |
| resnet50 | resnet | 224 | https://download.pytorch.org/models/resnet50-0676ba61.pth |
| resnet101 | resnet | 224 | https://download.pytorch.org/models/resnet101-63fe2227.pth |
| resnet152 | resnet | 224 | https://download.pytorch.org/models/resnet152-394f9c45.pth |
| resnext50_32x4d | resnet | 224 | https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth |
| resnext101_32x8d | resnet | 224 | https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth |
| resnext101_64x4d | resnet | 224 | https://download.pytorch.org/models/resnext101_64x4d-173b62eb.pth |
| wide_resnet50_2 | resnet | 224 | https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth |
| wide_resnet101_2 | resnet | 224 | https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth |
| shufflenet_v2_x0_5 | shufflenet | 224 | https://download.pytorch.org/models/shufflenetv2_x0.5-f707e7126e.pth |
| shufflenet_v2_x1_0 | shufflenet | 224 | https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth |
| shufflenet_v2_x1_5 | shufflenet | 224 | https://download.pytorch.org/models/shufflenetv2_x1_5-3c479a10.pth |
| shufflenet_v2_x2_0 | shufflenet | 224 | https://download.pytorch.org/models/shufflenetv2_x2_0-8be3c8ee.pth |
| squeezenet1_0 | squeezenet | 224 | https://download.pytorch.org/models/squeezenet1_0-b66bff10.pth |
| squeezenet1_1 | squeezenet | 224 | https://download.pytorch.org/models/squeezenet1_1-b8a52dc0.pth |
| swin_t | swin_transformer | 224 | https://download.pytorch.org/models/swin_t-704ceda3.pth |
| swin_s | swin_transformer | 224 | https://download.pytorch.org/models/swin_s-5e29d889.pth |
| swin_b | swin_transformer | 224 | https://download.pytorch.org/models/swin_b-68c6b09e.pth |
| swin_v2_t | swin_transformer | 256 | https://download.pytorch.org/models/swin_v2_t-b137f0e2.pth |
| swin_v2_s | swin_transformer | 256 | https://download.pytorch.org/models/swin_v2_s-637d8ceb.pth |
| swin_v2_b | swin_transformer | 256 | https://download.pytorch.org/models/swin_v2_b-781e5279.pth |
| vgg11 | vgg | 224 | https://download.pytorch.org/models/vgg11-8a719046.pth |
| vgg11_bn | vgg | 224 | https://download.pytorch.org/models/vgg11_bn-6002323d.pth |
| vgg13 | vgg | 224 | https://download.pytorch.org/models/vgg13-19584684.pth |
| vgg13_bn | vgg | 224 | https://download.pytorch.org/models/vgg13_bn-abd245e5.pth |
| vgg16 | vgg | 224 | https://download.pytorch.org/models/vgg16-397923af.pth |
| vgg16_bn | vgg | 224 | https://download.pytorch.org/models/vgg16_bn-6c64b313.pth |
| vgg19 | vgg | 224 | https://download.pytorch.org/models/vgg19-dcbb9e9d.pth |
| vgg19_bn | vgg | 224 | https://download.pytorch.org/models/vgg19_bn-c79401a0.pth |
| vit_b_16 | vit | 224 | https://download.pytorch.org/models/vit_b_16-c867db91.pth |
| vit_b_32 | vit | 224 | https://download.pytorch.org/models/vit_b_32-d86f8d99.pth |
| vit_l_16 | vit | 224 | https://download.pytorch.org/models/vit_l_16-852ce7e3.pth |
| vit_l_32 | vit | 224 | https://download.pytorch.org/models/vit_l_32-c7638314.pth |
Dataset Card for "monetjoe/cv_backbones"
This repository collects the backbone networks of the pre-trained computer vision models available on the official PyTorch website: chiefly Convolutional Neural Networks (CNNs) and Vision Transformers pre-trained on the ImageNet1K dataset. The collection is divided into two subsets, V1 and V2, covering multiple classic and more recent visual models. These pre-trained backbones give users a robust foundation for transfer learning in tasks such as image recognition, object detection, and image segmentation, and let researchers and practitioners flexibly apply the models across different scenarios.
Data structure
| ver | type | input_size | url | 
|---|---|---|---|
| backbone name | backbone type | input image size | URL of the pretrained model's .pth file | 
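Each row's `url` field points directly at a checkpoint file; the filename that `torch.hub` would cache it under is simply the URL's basename. A minimal stdlib-only sketch, with one sample row hard-coded for illustration:

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

# A sample row in the schema above (hard-coded here for illustration;
# real rows come from iterating the dataset).
row = {
    "ver": "resnet18",
    "type": "resnet",
    "input_size": 224,
    "url": "https://download.pytorch.org/models/resnet18-f37072fd.pth",
}

# The checkpoint filename is the final path component of the url.
filename = PurePosixPath(urlparse(row["url"]).path).name
print(filename)  # resnet18-f37072fd.pth
```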
Usage
ImageNet V1
```python
from datasets import load_dataset

backbones = load_dataset("monetjoe/cv_backbones", name="default", split="train")
for weights in backbones:
    print(weights)
```
ImageNet V2
```python
from datasets import load_dataset

backbones = load_dataset("monetjoe/cv_backbones", name="default", split="test")
for weights in backbones:
    print(weights)
```
Maintenance
```shell
git clone git@hf.co:datasets/monetjoe/cv_backbones
cd cv_backbones
```
Update tool
https://huggingface.co/spaces/monetjoe/cv_backbones
Param counts of different backbones
IMAGENET1K_V1
| Backbone | Params(M) | 
|---|---|
| SqueezeNet1_0 | 1.2 | 
| SqueezeNet1_1 | 1.2 | 
| ShuffleNet_V2_X0_5 | 1.4 | 
| MNASNet0_5 | 2.2 | 
| ShuffleNet_V2_X1_0 | 2.3 | 
| MobileNet_V3_Small | 2.5 | 
| MNASNet0_75 | 3.2 | 
| MobileNet_V2 | 3.5 | 
| ShuffleNet_V2_X1_5 | 3.5 | 
| RegNet_Y_400MF | 4.3 | 
| MNASNet1_0 | 4.4 | 
| EfficientNet_B0 | 5.3 | 
| MobileNet_V3_Large | 5.5 | 
| RegNet_X_400MF | 5.5 | 
| MNASNet1_3 | 6.3 | 
| RegNet_Y_800MF | 6.4 | 
| GoogLeNet | 6.6 | 
| RegNet_X_800MF | 7.3 | 
| ShuffleNet_V2_X2_0 | 7.4 | 
| EfficientNet_B1 | 7.8 | 
| DenseNet121 | 8 | 
| EfficientNet_B2 | 9.1 | 
| RegNet_X_1_6GF | 9.2 | 
| RegNet_Y_1_6GF | 11.2 | 
| ResNet18 | 11.7 | 
| EfficientNet_B3 | 12.2 | 
| DenseNet169 | 14.1 | 
| RegNet_X_3_2GF | 15.3 | 
| EfficientNet_B4 | 19.3 | 
| RegNet_Y_3_2GF | 19.4 | 
| DenseNet201 | 20 | 
| EfficientNet_V2_S | 21.5 | 
| ResNet34 | 21.8 | 
| ResNeXt50_32X4D | 25 | 
| ResNet50 | 25.6 | 
| Inception_V3 | 27.2 | 
| Swin_T | 28.3 | 
| Swin_V2_T | 28.4 | 
| ConvNeXt_Tiny | 28.6 | 
| DenseNet161 | 28.7 | 
| EfficientNet_B5 | 30.4 | 
| MaxVit_T | 30.9 | 
| RegNet_Y_8GF | 39.4 | 
| RegNet_X_8GF | 39.6 | 
| EfficientNet_B6 | 43 | 
| ResNet101 | 44.5 | 
| Swin_S | 49.6 | 
| Swin_V2_S | 49.7 | 
| ConvNeXt_Small | 50.2 | 
| EfficientNet_V2_M | 54.1 | 
| RegNet_X_16GF | 54.3 | 
| ResNet152 | 60.2 | 
| AlexNet | 61.1 | 
| EfficientNet_B7 | 66.3 | 
| Wide_ResNet50_2 | 68.9 | 
| ResNeXt101_64X4D | 83.5 | 
| RegNet_Y_16GF | 83.6 | 
| ViT_B_16 | 86.6 | 
| Swin_B | 87.8 | 
| Swin_V2_B | 87.9 | 
| ViT_B_32 | 88.2 | 
| ConvNeXt_Base | 88.6 | 
| ResNeXt101_32X8D | 88.8 | 
| RegNet_X_32GF | 107.8 | 
| EfficientNet_V2_L | 118.5 | 
| Wide_ResNet101_2 | 126.9 | 
| VGG11_BN | 132.9 | 
| VGG11 | 132.9 | 
| VGG13 | 133 | 
| VGG13_BN | 133.1 | 
| VGG16_BN | 138.4 | 
| VGG16 | 138.4 | 
| VGG19_BN | 143.7 | 
| VGG19 | 143.7 | 
| RegNet_Y_32GF | 145 | 
| ConvNeXt_Large | 197.8 | 
| ViT_L_16 | 304.3 | 
| ViT_L_32 | 306.5 | 
IMAGENET1K_V2
| Backbone | Params(M) | 
|---|---|
| MobileNet_V2 | 3.5 | 
| RegNet_Y_400MF | 4.3 | 
| MobileNet_V3_Large | 5.5 | 
| RegNet_X_400MF | 5.5 | 
| RegNet_Y_800MF | 6.4 | 
| RegNet_X_800MF | 7.3 | 
| EfficientNet_B1 | 7.8 | 
| RegNet_X_1_6GF | 9.2 | 
| RegNet_Y_1_6GF | 11.2 | 
| RegNet_X_3_2GF | 15.3 | 
| RegNet_Y_3_2GF | 19.4 | 
| ResNeXt50_32X4D | 25 | 
| ResNet50 | 25.6 | 
| RegNet_Y_8GF | 39.4 | 
| RegNet_X_8GF | 39.6 | 
| ResNet101 | 44.5 | 
| RegNet_X_16GF | 54.3 | 
| ResNet152 | 60.2 | 
| Wide_ResNet50_2 | 68.9 | 
| RegNet_Y_16GF | 83.6 | 
| ResNeXt101_32X8D | 88.8 | 
| RegNet_X_32GF | 107.8 | 
| Wide_ResNet101_2 | 126.9 | 
| RegNet_Y_32GF | 145 | 
Mirror
https://www.modelscope.cn/datasets/monetjoe/cv_backbones