【Timm】Overview of the ViT models provided by create_model

**⚪ View the code: python xxx.py**

```python
import timm

if __name__ == '__main__':
    # List every registered model name matching the wildcard '*vit*'
    model_vit = timm.list_models('*vit*')
    print(len(model_vit), model_vit)
```
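Under the hood, `list_models` matches registered model names against a shell-style wildcard. A minimal sketch of that filtering, using Python's standard `fnmatch` on a few illustrative names (not the full timm registry):

```python
import fnmatch

# A few illustrative model names -- not the actual timm registry
registry = [
    'resnet50',
    'vit_base_patch16_224',
    'vit_large_patch32_384',
    'efficientnet_b0',
]

# Keep only the names matching the shell-style wildcard '*vit*'
vit_models = sorted(fnmatch.filter(registry, '*vit*'))
print(len(vit_models), vit_models)
# → 2 ['vit_base_patch16_224', 'vit_large_patch32_384']
```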

⚪ Understanding the names via the Vision Transformer paper

7 ResNets:

  • R50x1, R50x2, R101x1, R152x1, R152x2, pre-trained for 7 epochs,
  • plus R152x2 and R200x3 pre-trained for 14 epochs;

**6 Vision Transformers:**

  • ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs,
  • plus L/16 and H/14 pre-trained for 14 epochs;

5 hybrids:

  • R50+ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs,
  • plus R50+ViT-L/16 pre-trained for 14 epochs

Parameter interpretation:

  • Take ViT-L/16 as an example: it denotes the ViT-Large model with a patch_size of 16.
  • For the hybrid models, however, the number does not denote the patch size but the overall downsampling ratio of the ResNet backbone.
  • Downsampling ratio: the factor by which the backbone reduces the input's spatial resolution.
  • With this naming scheme in mind, the pretrained models provided by the timm library become easy to interpret.
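As a quick check of what the patch size implies: for ViT-L/16 at a 224×224 input, the image is split into (224/16)² = 14×14 = 196 patches, and the class token brings the sequence length to 197. A small sketch of that arithmetic:

```python
def vit_seq_len(img_size: int, patch_size: int) -> int:
    """Number of input tokens: one per patch, plus the class token."""
    patches_per_side = img_size // patch_size
    num_patches = patches_per_side ** 2
    return num_patches + 1  # +1 for the [class] token

print(vit_seq_len(224, 16))  # ViT-L/16 at 224x224 → 197
print(vit_seq_len(384, 32))  # ViT-L/32 at 384x384 → 145
```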

⚪ Overview of the ViT models: 28 in total

  1. 'vit_base_patch16_224',
  2. 'vit_base_patch16_224_in21k',
  3. 'vit_base_patch16_384',
  4. 'vit_base_patch32_224',
  5. 'vit_base_patch32_224_in21k',
  6. 'vit_base_patch32_384',
  7. 'vit_base_resnet26d_224',
  8. 'vit_base_resnet50_224_in21k',
  9. 'vit_base_resnet50_384',
  10. 'vit_base_resnet50d_224',
  11. 'vit_deit_base_distilled_patch16_224',
  12. 'vit_deit_base_distilled_patch16_384',
  13. 'vit_deit_base_patch16_224',
  14. 'vit_deit_base_patch16_384',
  15. 'vit_deit_small_distilled_patch16_224',
  16. 'vit_deit_small_patch16_224',
  17. 'vit_deit_tiny_distilled_patch16_224',
  18. 'vit_deit_tiny_patch16_224',
  19. 'vit_huge_patch14_224_in21k',
  20. 'vit_large_patch16_224',
  21. 'vit_large_patch16_224_in21k',
  22. 'vit_large_patch16_384',
  23. 'vit_large_patch32_224',
  24. 'vit_large_patch32_224_in21k',
  25. 'vit_large_patch32_384',
  26. 'vit_small_patch16_224',
  27. 'vit_small_resnet26d_224',
  28. 'vit_small_resnet50d_s3_224'
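The names above are regular enough to parse: family (`vit` / `vit_deit`), size (`tiny` / `small` / `base` / `large` / `huge`), an optional `distilled` marker, patch size, input resolution, and an optional `in21k` suffix for ImageNet-21k pretraining. A hypothetical parser (for illustration only, not a timm API):

```python
import re

# Hypothetical helper, not part of timm: split a ViT model name into
# its components following the naming convention described above.
NAME_RE = re.compile(
    r'vit_(?:deit_)?(tiny|small|base|large|huge)'
    r'(?:_distilled)?_patch(\d+)_(\d+)(_in21k)?$'
)

def parse_vit_name(name):
    m = NAME_RE.search(name)
    if m is None:
        return None  # hybrid (resnet) variants use a different pattern
    size, patch, res, in21k = m.groups()
    return {'size': size, 'patch': int(patch), 'img_size': int(res),
            'in21k': in21k is not None}

print(parse_vit_name('vit_large_patch16_224_in21k'))
# → {'size': 'large', 'patch': 16, 'img_size': 224, 'in21k': True}
```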

Recommended articles:

  • The PyTorch vision model library: timm
  • Transfer-learning model libraries under PyTorch: a detailed usage tutorial

Reposted from: https://blog.csdn.net/MengYa_Dream/article/details/126945820
Copyright belongs to the original author, MengYa_DreamZ. In case of infringement, please contact us for removal.
