
10. Using and Modifying Network Models

10.1 Downloading a Network Model

  import torchvision
  # train_data = torchvision.datasets.ImageNet("./dataset", split="train", download=True, transform=torchvision.transforms.ToTensor())  # this dataset can no longer be downloaded publicly
  vgg16_true = torchvision.models.vgg16(pretrained=True)    # downloads the parameters of the convolution and pooling layers; these parameters were pretrained on ImageNet
  vgg16_false = torchvision.models.vgg16(pretrained=False)  # no pretrained parameters
  print("ok")
  print(vgg16_true)

Result:

  ok
  VGG(
    (features): Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace=True)
      (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU(inplace=True)
      (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (6): ReLU(inplace=True)
      (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (8): ReLU(inplace=True)
      (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (11): ReLU(inplace=True)
      (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (13): ReLU(inplace=True)
      (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (15): ReLU(inplace=True)
      (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (18): ReLU(inplace=True)
      (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (20): ReLU(inplace=True)
      (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (22): ReLU(inplace=True)
      (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (25): ReLU(inplace=True)
      (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (27): ReLU(inplace=True)
      (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (29): ReLU(inplace=True)
      (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
    (classifier): Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
      (5): Dropout(p=0.5, inplace=False)
      (6): Linear(in_features=4096, out_features=1000, bias=True)
    )
  )
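
Note: in newer torchvision releases (0.13 and later) the pretrained argument is deprecated in favor of a weights argument. A minimal equivalent sketch, assuming torchvision >= 0.13 (the pretrained= calls above still work on the older versions this code targets):

  import torchvision
  # equivalent to pretrained=True: load the ImageNet-pretrained weights
  vgg16_true = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1)
  # equivalent to pretrained=False: randomly initialized weights
  vgg16_false = torchvision.models.vgg16(weights=None)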

10.2 Checking How a Function Is Used

  import torchvision
  help(torchvision.models.vgg16)

Result:

  Help on function vgg16 in module torchvision.models.vgg:

  vgg16(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> torchvision.models.vgg.VGG
      VGG 16-layer model (configuration "D")
      `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_.
      The required minimum input size of the model is 32x32.

      Args:
          pretrained (bool): If True, returns a model pre-trained on ImageNet
          progress (bool): If True, displays a progress bar of the download to stderr
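
The docstring says the minimum input size is 32x32, which happens to be exactly the size of a CIFAR10 image. A quick sanity check (a sketch; the random tensor just stands in for a real image):

  import torch
  import torchvision
  vgg16_false = torchvision.models.vgg16(pretrained=False)
  x = torch.randn(1, 3, 32, 32)   # a fake batch containing one 32x32 RGB image
  y = vgg16_false(x)              # forward pass through the untrained network
  print(y.shape)                  # torch.Size([1, 1000]) -- 1000 ImageNet classes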

10.3 Adding Layers to a Network Model

  import torchvision
  from torch import nn
  dataset = torchvision.datasets.CIFAR10("./dataset", train=True, transform=torchvision.transforms.ToTensor(), download=True)
  vgg16_true = torchvision.models.vgg16(pretrained=True)  # convolution and pooling layer parameters pretrained on ImageNet
  vgg16_true.add_module('add_linear', nn.Linear(1000, 10))  # append a linear layer after VGG16 so that the output fits CIFAR10, which has 10 classes
  print(vgg16_true)

Result:

  Files already downloaded and verified
  VGG(
    (features): Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace=True)
      (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU(inplace=True)
      (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (6): ReLU(inplace=True)
      (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (8): ReLU(inplace=True)
      (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (11): ReLU(inplace=True)
      (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (13): ReLU(inplace=True)
      (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (15): ReLU(inplace=True)
      (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (18): ReLU(inplace=True)
      (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (20): ReLU(inplace=True)
      (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (22): ReLU(inplace=True)
      (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (25): ReLU(inplace=True)
      (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (27): ReLU(inplace=True)
      (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (29): ReLU(inplace=True)
      (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
    (classifier): Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
      (5): Dropout(p=0.5, inplace=False)
      (6): Linear(in_features=4096, out_features=1000, bias=True)
    )
    (add_linear): Linear(in_features=1000, out_features=10, bias=True)
  )
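
As the printout shows, add_module attached add_linear at the top level of the VGG module, after classifier. A layer added there appears in the structure but is not called by VGG's forward pass, which only runs features, avgpool and classifier; to make the new layer part of the computation, you can add it inside the classifier sub-module instead (a minimal sketch):

  import torchvision
  from torch import nn
  vgg16_true = torchvision.models.vgg16(pretrained=True)
  # add the linear layer inside the classifier Sequential rather than at the top level
  vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
  print(vgg16_true.classifier)  # (add_linear): Linear(in_features=1000, out_features=10, bias=True) is now the last entry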

10.4 Modifying a Network Model

  import torchvision
  from torch import nn
  vgg16_false = torchvision.models.vgg16(pretrained=False)  # no pretrained parameters
  print(vgg16_false)
  vgg16_false.classifier[6] = nn.Linear(4096, 10)  # replace the last linear layer so the model outputs 10 classes instead of 1000
  print(vgg16_false)

Result:

  VGG(
    (features): Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace=True)
      (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU(inplace=True)
      (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (6): ReLU(inplace=True)
      (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (8): ReLU(inplace=True)
      (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (11): ReLU(inplace=True)
      (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (13): ReLU(inplace=True)
      (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (15): ReLU(inplace=True)
      (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (18): ReLU(inplace=True)
      (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (20): ReLU(inplace=True)
      (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (22): ReLU(inplace=True)
      (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (25): ReLU(inplace=True)
      (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (27): ReLU(inplace=True)
      (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (29): ReLU(inplace=True)
      (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
    (classifier): Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
      (5): Dropout(p=0.5, inplace=False)
      (6): Linear(in_features=4096, out_features=1000, bias=True)
    )
  )
  VGG(
    (features): Sequential(
      (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): ReLU(inplace=True)
      (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): ReLU(inplace=True)
      (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (6): ReLU(inplace=True)
      (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (8): ReLU(inplace=True)
      (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (11): ReLU(inplace=True)
      (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (13): ReLU(inplace=True)
      (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (15): ReLU(inplace=True)
      (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (18): ReLU(inplace=True)
      (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (20): ReLU(inplace=True)
      (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (22): ReLU(inplace=True)
      (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (25): ReLU(inplace=True)
      (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (27): ReLU(inplace=True)
      (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (29): ReLU(inplace=True)
      (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
    (classifier): Sequential(
      (0): Linear(in_features=25088, out_features=4096, bias=True)
      (1): ReLU(inplace=True)
      (2): Dropout(p=0.5, inplace=False)
      (3): Linear(in_features=4096, out_features=4096, bias=True)
      (4): ReLU(inplace=True)
      (5): Dropout(p=0.5, inplace=False)
      (6): Linear(in_features=4096, out_features=10, bias=True)
    )
  )
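
To confirm that the modified model really produces 10 class scores, you can push a batch of CIFAR10 images through it (a sketch, assuming the CIFAR10 dataset from 10.3 has already been downloaded to ./dataset):

  import torchvision
  from torch import nn
  from torch.utils.data import DataLoader

  vgg16_false = torchvision.models.vgg16(pretrained=False)
  vgg16_false.classifier[6] = nn.Linear(4096, 10)   # same modification as above

  dataset = torchvision.datasets.CIFAR10("./dataset", train=True,
                                         transform=torchvision.transforms.ToTensor(), download=True)
  dataloader = DataLoader(dataset, batch_size=64)
  imgs, targets = next(iter(dataloader))            # one batch: imgs has shape [64, 3, 32, 32]
  outputs = vgg16_false(imgs)
  print(outputs.shape)                              # torch.Size([64, 10]) -- one score per CIFAR10 class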

Reprinted from: https://blog.csdn.net/qq_54932411/article/details/132513293
Copyright belongs to the original author Gosling123456. In case of infringement, please contact us for removal.
