

YOLOv5 / YOLOv7 Improvement No. 29: Introducing Swin Transformer v2.0

**Foreword:** YOLOv7, a state-of-the-art deep learning object detector, already bundles a large number of tricks, but there is still room for improvement, and different modifications can be applied depending on the detection difficulties of a specific application scenario. The posts in this series explain in detail how to improve YOLOv7, in the hope of offering a modest reference both to students who need novelty for research and to engineers who need better results on real projects. Since YOLOv7 has been released, and a large number of YOLOv5 improvement papers have appeared since 2020, YOLOv5-only work is no longer very novel for either research or industry; to keep up to date, the improvements from now on are based on YOLOv7. The earlier YOLOv5 modifications also apply to YOLOv7, which is why the numbering of the YOLOv5 improvement series continues. The same modifications can likewise be used to improve YOLOv5 and other algorithms. I hope this is helpful.

For the detailed modification steps, please follow me and leave a private message!


Problem addressed:

  1. An earlier post introduced the Swin Transformer module into YOLO. I tested it on a remote-sensing dataset: after replacing the backbone, accuracy did improve while the parameter count dropped, and compared with the multi-head attention in BoTNet it offers a better cost–benefit trade-off, which confirms the effectiveness and advantage of the Swin Transformer module. This year the second version of Swin Transformer was released, so here we try adding it to the YOLO family as well. As before, the motivation for introducing a Transformer is to compensate for the lack of long-range modeling and global context in a purely convolutional YOLO, so that target features can be extracted better (a minimal integration sketch follows the code listing below). The link below is the post introducing Swin Transformer v1.0.

YOLOv7 Improvement No. 25: Introducing Swin Transformer — 人工智能算法研究院's blog on CSDN

Principle:

Paper: [2111.09883] Swin Transformer V2: Scaling Up Capacity and Resolution (arxiv.org)

  1. Compared with version 1.0, three changes were made:
     (1) The pre-norm in the original Transformer block is replaced with post-norm. The authors found that after scaling Swin Transformer from the small to the large configuration, the activation values in deep layers become very large, with a big gap from the activations of shallow layers.
     (2) The original self-attention measures the similarity between pairs of features with a dot product. The authors observed that, even after switching to post-norm, the attention maps of some blocks or heads in large models become dominated by a few features; to alleviate this, the dot-product similarity is replaced with cosine similarity (see the sketch below this list).
     (3) The authors load the weights of a Swin Transformer pretrained at 256×256 resolution with 8×8 windows directly into larger models at different scales; this is made possible by the new log-spaced continuous position bias (a coordinate sketch is given after the figure below).
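To make change (2) concrete, here is a minimal toy sketch, written for this post rather than taken from the official repository, contrasting the v1.0 dot-product attention logits with the v2.0 cosine-similarity logits. In the official implementation the per-head temperature (logit_scale) is a learnable parameter that is clamped before use; here it is just a fixed tensor.

```python
import torch
import torch.nn.functional as F

def dot_product_logits(q, k):
    # Swin v1.0 style: scaled dot-product similarity
    return (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5

def cosine_logits(q, k, logit_scale):
    # Swin v2.0 style: cosine similarity of L2-normalized q and k,
    # scaled by a per-head temperature (learnable in the real model)
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    return (q @ k.transpose(-2, -1)) * logit_scale

# toy shapes: (batch * windows, heads, tokens, head_dim)
q = torch.randn(2, 3, 49, 32)
k = torch.randn(2, 3, 49, 32)
logit_scale = torch.ones(3, 1, 1)  # one temperature per head
print(cosine_logits(q, k, logit_scale).shape)  # torch.Size([2, 3, 49, 49])
```

Because cosine similarity is bounded in [-1, 1], the attention logits stay in a moderate range even when activations in deep layers grow large, which is exactly the problem that post-norm alone did not fully solve.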

[Figure] Structural comparison between Swin Transformer v1.0 and v2.0 blocks.
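What makes point (3) above possible is the log-spaced continuous position bias in v2.0: instead of a learned bias table tied to one window size, the relative position bias is generated by a small MLP from log-spaced relative coordinates, so a model pretrained with 8×8 windows extrapolates smoothly to larger windows. The following sketch only computes the coordinate table, following the formula in the paper; the function name and exact shapes are illustrative, not the official API.

```python
import torch

def log_spaced_coords(window_size, pretrained_window_size):
    """Log-spaced relative coordinates for Swin V2's continuous position bias."""
    wh, ww = window_size
    coords_h = torch.arange(-(wh - 1), wh, dtype=torch.float32)
    coords_w = torch.arange(-(ww - 1), ww, dtype=torch.float32)
    table = torch.stack(torch.meshgrid(coords_h, coords_w, indexing="ij"), dim=-1)
    # normalize to roughly [-8, 8] relative to the *pretraining* window size ...
    table[..., 0] /= (pretrained_window_size[0] - 1)
    table[..., 1] /= (pretrained_window_size[1] - 1)
    table *= 8
    # ... then compress with a signed log so larger fine-tuning windows extrapolate smoothly
    return torch.sign(table) * torch.log2(torch.abs(table) + 1.0) / torch.log2(torch.tensor(8.0))

# e.g. transfer from 8x8 pretraining windows to 16x16 fine-tuning windows
print(log_spaced_coords((16, 16), (8, 8)).shape)  # torch.Size([31, 31, 2])
```

In the official block these coordinates are fed to a small two-layer MLP inside WindowAttention to produce the per-head relative position bias.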

Part of the project code is as follows:

```python
import torch
import torch.nn as nn
from timm.models.layers import DropPath, to_2tuple  # as in the official Swin V2 code

# WindowAttention, Mlp, window_partition and window_reverse are the remaining helpers
# from the official Swin Transformer V2 implementation (not shown here).


class SwinTransformerBlock(nn.Module):
    r""" Swin Transformer Block.
    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        pretrained_window_size (int): Window size in pre-training.
    """

    def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
                 mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm, pretrained_window_size=0):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
            qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop,
            pretrained_window_size=to_2tuple(pretrained_window_size))

        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
            h_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            w_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
            mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
            attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
            attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
        else:
            attn_mask = None

        self.register_buffer("attn_mask", attn_mask)

    def forward(self, x):
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = x.view(B, H, W, C)

        # cyclic shift
        if self.shift_size > 0:
            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
        else:
            shifted_x = x

        # partition windows
        x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C
        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=self.attn_mask)  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
        shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C

        # reverse cyclic shift
        if self.shift_size > 0:
            x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
        else:
            x = shifted_x
        x = x.view(B, H * W, C)
        # post-norm residual (the v2.0 change): LayerNorm is applied after attention, inside the residual branch
        x = shortcut + self.drop_path(self.norm1(x))

        # FFN
        x = x + self.drop_path(self.norm2(self.mlp(x)))

        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
               f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"

    def flops(self):
        flops = 0
        H, W = self.input_resolution
        # norm1
        flops += self.dim * H * W
        # W-MSA/SW-MSA
        nW = H * W / self.window_size / self.window_size
        flops += nW * self.attn.flops(self.window_size * self.window_size)
        # mlp
        flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
        # norm2
        flops += self.dim * H * W
        return flops
```
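As mentioned earlier, here is a minimal integration sketch: the block above operates on a (B, H*W, C) token sequence, while YOLO backbones pass (B, C, H, W) feature maps around, so the simplest way to try it is a thin wrapper that reshapes in and out. The wrapper below is hypothetical glue code written for this post, not the author's released module; it assumes the helpers used above (WindowAttention, Mlp, window_partition, window_reverse, DropPath, to_2tuple) are available from the Swin Transformer V2 code base, and that the incoming feature map size matches input_resolution.

```python
import torch
import torch.nn as nn

class SwinV2Layer(nn.Module):
    """Hypothetical wrapper: runs SwinTransformerBlock(s) on a (B, C, H, W) feature map."""
    def __init__(self, dim, input_resolution, num_heads=4, window_size=7, depth=2):
        super().__init__()
        self.input_resolution = input_resolution
        # alternate regular (W-MSA) and shifted (SW-MSA) windows, as in Swin
        self.blocks = nn.ModuleList([
            SwinTransformerBlock(dim, input_resolution, num_heads,
                                 window_size=window_size,
                                 shift_size=0 if i % 2 == 0 else window_size // 2)
            for i in range(depth)])

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        assert (H, W) == tuple(self.input_resolution), "feature map size must match input_resolution"
        x = x.flatten(2).transpose(1, 2)        # -> (B, H*W, C) tokens
        for blk in self.blocks:
            x = blk(x)
        return x.transpose(1, 2).view(B, C, H, W)  # -> (B, C, H, W)

# quick sanity check on a dummy feature map
layer = SwinV2Layer(dim=96, input_resolution=(28, 28), num_heads=3, window_size=7)
out = layer(torch.randn(2, 96, 28, 28))
print(out.shape)  # torch.Size([2, 96, 28, 28])
```

In a YOLOv5/YOLOv7 fork one would typically register such a wrapper in models/common.py and reference it from the model yaml in place of a backbone stage; the name SwinV2Layer and its arguments here are only illustrative.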

Results: I ran experiments on a remote-sensing dataset and observed an accuracy gain. If you need the details, please follow and leave a message.

Preview: the next post will continue to share improvement methods for deep learning algorithms. If you are interested, feel free to follow me; questions are welcome in the comments or by private message.

PS: This method is not limited to improving YOLOv5; it can also be applied to other YOLO networks and object detectors, such as YOLOv7, v6, v4, v3, Faster R-CNN, SSD, and so on.

Finally, I hope we can follow each other, become friends, and learn and exchange ideas together.


This article is reposted from: https://blog.csdn.net/m0_70388905/article/details/127214397
Copyright belongs to the original author, 人工智能算法研究院. In case of infringement, please contact us for removal.
