**⚪Listing the models: python xxx.py**
```python
import timm

if __name__ == '__main__':
    # List every registered model name in timm that contains "vit"
    model_vit = timm.list_models('*vit*')
    print(len(model_vit), model_vit)
```
⚪Understanding the names via the Vision Transformer paper
**7 ResNets:**
- R50x1, R50x2, R101x1, R152x1, R152x2, pre-trained for 7 epochs,
- plus R152x2 and R200x3 pre-trained for 14 epochs;
**6 Vision Transformers:**
- ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs,
- plus L/16 and H/14 pre-trained for 14 epochs;
**5 hybrids:**
- R50+ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs,
- plus R50+ViT-L/16 pre-trained for 14 epochs.
Reading the names:
- Take ViT-L/16 as an example: it denotes a ViT-Large model with a patch_size of 16.
- For the hybrid models, however, the trailing number is not the patch_size but the total downsampling ratio of the ResNet backbone.
- (Sampling here refers to how densely the input signal is subsampled, i.e., the rate at which values are taken from it.)
- With this naming scheme in mind, the pre-trained models that the timm library provides become easy to interpret.
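As an illustration of this naming convention, a small helper can pull the variant, patch size, and input resolution out of a standard timm ViT name. Note that `parse_vit_name` is a hypothetical function written for this post; timm itself does not expose such an API:

```python
import re

def parse_vit_name(name):
    """Hypothetical helper (not part of timm): split a name such as
    'vit_large_patch16_224' into its variant, patch size, and resolution.
    Returns None for hybrid or non-standard names."""
    m = re.match(r'vit_(?P<variant>[a-z]+)_patch(?P<patch>\d+)_(?P<res>\d+)', name)
    if m is None:
        return None
    return {
        'variant': m.group('variant'),        # tiny / small / base / large / huge
        'patch_size': int(m.group('patch')),  # side length of each square patch
        'input_size': int(m.group('res')),    # expected input resolution
    }

print(parse_vit_name('vit_large_patch16_224'))
# → {'variant': 'large', 'patch_size': 16, 'input_size': 224}
```

Hybrid names like `vit_base_resnet50_384` deliberately fall through to `None`, matching the caveat above that their number is a downsampling ratio, not a patch size.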
⚪Overview of the 28 ViT models in timm
- 'vit_base_patch16_224',
- 'vit_base_patch16_224_in21k',
- 'vit_base_patch16_384',
- 'vit_base_patch32_224',
- 'vit_base_patch32_224_in21k',
- 'vit_base_patch32_384',
- 'vit_base_resnet26d_224',
- 'vit_base_resnet50_224_in21k',
- 'vit_base_resnet50_384',
- 'vit_base_resnet50d_224',
- 'vit_deit_base_distilled_patch16_224',
- 'vit_deit_base_distilled_patch16_384',
- 'vit_deit_base_patch16_224',
- 'vit_deit_base_patch16_384',
- 'vit_deit_small_distilled_patch16_224',
- 'vit_deit_small_patch16_224',
- 'vit_deit_tiny_distilled_patch16_224',
- 'vit_deit_tiny_patch16_224',
- 'vit_huge_patch14_224_in21k',
- 'vit_large_patch16_224',
- 'vit_large_patch16_224_in21k',
- 'vit_large_patch16_384',
- 'vit_large_patch32_224',
- 'vit_large_patch32_224_in21k',
- 'vit_large_patch32_384',
- 'vit_small_patch16_224',
- 'vit_small_resnet26d_224',
- 'vit_small_resnet50d_s3_224'
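The `'*vit*'` argument above is a shell-style wildcard. A minimal sketch of that filtering behavior, using Python's standard `fnmatch` module over a few of the names listed above (this is an illustrative simplification, not timm's actual implementation):

```python
import fnmatch

# A handful of the model names from the list above, standing in for timm's registry
registry = [
    'vit_base_patch16_224',
    'vit_base_patch16_384',
    'vit_deit_tiny_patch16_224',
    'vit_huge_patch14_224_in21k',
    'resnet50',
]

def list_models(pattern):
    """Return registered names matching a shell-style wildcard,
    similar in spirit to timm.list_models (simplified sketch)."""
    return sorted(n for n in registry if fnmatch.fnmatch(n, pattern))

print(list_models('*vit*'))      # every ViT variant in the toy registry
print(list_models('vit_*_384'))  # only the 384-resolution ViTs
```

Patterns can therefore target a family precisely, e.g. `'vit_deit_*'` for just the DeiT models.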
Recommended reading:
- "PyTorch vision model library: timm" (pytorch model zoo)
- "Transfer-learning model zoo under PyTorch: a detailed usage tutorial"
Reposted from: https://blog.csdn.net/MengYa_Dream/article/details/126945820
Copyright belongs to the original author MengYa_DreamZ. In case of infringement, please contact us for removal.