

[ Attention Mechanism ] Classic Network Models 2: CBAM Explained and Reproduced

🤵 Author: Horizon Max



🚀 Convolutional Block Attention Module

Convolutional Block Attention Module, abbreviated CBAM, is a convolutional attention module proposed by Sanghyun Woo et al. in 2018.

It introduces a novel attention mechanism that fuses channel attention with spatial attention.

It is a simple yet effective attention module for feed-forward convolutional neural networks.

Because it is lightweight and generic, it can be integrated seamlessly into any CNN.

The authors' experiments show consistent gains in both classification and detection performance across different models.

🔗 Paper: CBAM: Convolutional Block Attention Module

🚀 CBAM in Detail

🎨 Background

To improve CNN performance, recent research has mainly examined three important factors of network design: depth, width, and cardinality.

Since LeNet in the 1990s, network depth has kept increasing;
VGG later showed that stacking blocks of the same shape works well;
GoogLeNet showed that width is another important factor for improving model performance;
likewise, ResNet stacks residual blocks of identical topology with skip connections to build very deep architectures that perform well;
Xception and ResNeXt showed that increasing cardinality not only reduces the parameter count but also yields stronger representational power than the other two factors (depth and width).

Beyond these factors, the authors investigate a different aspect of network design: attention.

Attention is also an intriguing aspect of the human visual system; an attention mechanism increases a network's representational power by focusing on important features and suppressing unnecessary ones.

Convolution extracts informative features by blending cross-channel and spatial information together; CBAM is therefore proposed to emphasize meaningful features along the two principal dimensions, the channel axis and the spatial axis, applying in sequence:

  1. a Channel Attention Module
  2. a Spatial Attention Module

(Figure: overall structure of the Convolutional Block Attention Module, CBAM)

🎨 Contributions

(1) A simple yet effective attention module (CBAM) that can be applied widely to boost the representational power of CNNs;
(2) Extensive ablation studies validating the effectiveness of the attention module;
(3) By inserting the lightweight module (CBAM), the performance of various networks improves substantially on multiple benchmarks (ImageNet-1K, MS COCO, and VOC 2007).

Given an input feature map F ∈ R^(C×H×W), CBAM sequentially infers a 1D channel attention map Mc ∈ R^(C×1×1) and a 2D spatial attention map Ms ∈ R^(1×H×W).
The overall attention process can be summarized as:

F' = Mc(F) ⊗ F
F'' = Ms(F') ⊗ F'

where ⊗ denotes element-wise multiplication, with the attention maps broadcast along the missing dimensions.
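As a quick illustration of how the two attention maps act on the feature map, here is a minimal PyTorch sketch in which Mc and Ms are stand-in random tensors (the real maps come from the channel and spatial sub-modules described in the following sections); it only demonstrates the broadcasting in the two equations above:

```python
import torch

# Minimal sketch of CBAM's two-step refinement; Mc and Ms are random
# placeholders standing in for the outputs of the real sub-modules.
C, H, W = 64, 32, 32
F = torch.randn(1, C, H, W)    # input feature map  F ∈ R^(C×H×W)

Mc = torch.rand(1, C, 1, 1)    # 1D channel attention map  Mc ∈ R^(C×1×1)
F1 = Mc * F                    # F'  = Mc(F) ⊗ F   (broadcast over H, W)

Ms = torch.rand(1, 1, H, W)    # 2D spatial attention map  Ms ∈ R^(1×H×W)
F2 = Ms * F1                   # F'' = Ms(F') ⊗ F' (broadcast over C)

print(F1.shape, F2.shape)      # both torch.Size([1, 64, 32, 32])
```

Both refined maps keep the original C×H×W shape, which is what lets CBAM slot into any CNN without changing the surrounding layers.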

🎨 Convolutional Block Attention Module

(Figure: the channel attention sub-module and spatial attention sub-module)

🚩 Channel Attention Module

The channel attention module exploits the inter-channel relationships of features to produce a channel attention map.

Since each channel of a feature map is regarded as a feature detector, channel attention focuses on "what" is meaningful given an input image.

To compute channel attention efficiently, the spatial dimensions of the input feature map are squeezed. The paper uses both AvgPool (average pooling) and MaxPool (max pooling) and shows that this combination is more expressive than either pooling alone.

The channel attention is computed as:

Mc(F) = σ( MLP(AvgPool(F)) + MLP(MaxPool(F)) ) = σ( W1(W0(F_avg^c)) + W1(W0(F_max^c)) )

where σ is the sigmoid function, W0 ∈ R^(C/r×C), and W1 ∈ R^(C×C/r); the MLP weights W0 and W1 are shared for both inputs, and W0 is followed by a ReLU activation.
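The shared-MLP part of this formula can be sketched with nn.Linear layers so the weight shapes are explicit. This is an illustrative shape check (with reduction ratio r = 16, as in the paper), not the author's implementation, which appears in full later:

```python
import torch
import torch.nn as nn

C, r = 64, 16
W0 = nn.Linear(C, C // r, bias=False)    # W0 ∈ R^(C/r×C)
W1 = nn.Linear(C // r, C, bias=False)    # W1 ∈ R^(C×C/r)
relu = nn.ReLU()

x = torch.randn(2, C, 8, 8)
avg_desc = x.mean(dim=(2, 3))            # F_avg^c : squeeze spatial dims -> (N, C)
max_desc = x.amax(dim=(2, 3))            # F_max^c : same shape

# The SAME W0/W1 are applied to both descriptors before summing:
Mc = torch.sigmoid(W1(relu(W0(avg_desc))) + W1(relu(W0(max_desc))))
print(Mc.shape)                          # (N, C); reshape to (N, C, 1, 1) to multiply
```

Sharing W0/W1 is what keeps the module lightweight: only 2·C²/r parameters regardless of how many pooled descriptors feed it.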

🚩 Spatial Attention Module

The spatial attention module exploits the inter-spatial relationships of features to produce a spatial attention map.

Unlike the channel attention module, the spatial attention module focuses on "where" the informative parts are, complementing channel attention.

To compute spatial attention, average pooling and max pooling are first applied along the channel axis and the results are concatenated to form an efficient feature descriptor. These two pooling operations aggregate the channel information of a feature map into two 2D maps, F_avg^s ∈ R^(1×H×W) and F_max^s ∈ R^(1×H×W), representing the channel-wise average-pooled and max-pooled features respectively; they are then concatenated and convolved by a standard convolution layer to produce the 2D spatial attention map.

The spatial attention is computed as:

Ms(F) = σ( f^(7×7)([AvgPool(F); MaxPool(F)]) ) = σ( f^(7×7)([F_avg^s; F_max^s]) )

where σ is the sigmoid function and f^(7×7) denotes a convolution with a 7×7 kernel.
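A minimal shape check of this formula (illustrative only; the reproduction below packages the same steps as a SpatialAttention module):

```python
import torch
import torch.nn as nn

# Pooling along the channel axis gives two 1×H×W maps; they are concatenated
# and passed through a 7×7 convolution (padding 3 keeps H×W unchanged).
x = torch.randn(1, 64, 32, 32)
f_avg = x.mean(dim=1, keepdim=True)        # F_avg^s ∈ R^(1×H×W)
f_max = x.amax(dim=1, keepdim=True)        # F_max^s ∈ R^(1×H×W)
desc = torch.cat([f_avg, f_max], dim=1)    # 2-channel feature descriptor

conv = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)
Ms = torch.sigmoid(conv(desc))             # Ms ∈ R^(1×H×W)
print(desc.shape, Ms.shape)
```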

🚩 Applying CBAM

(Figure: CBAM integrated with a ResBlock in ResNet)

The figure above shows CBAM combined with a ResBlock and applied inside ResNet.
The two sub-modules can be arranged in parallel or in sequence; the authors' experiments found that the sequential arrangement, with channel attention applied first, gives better results than the parallel one.
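For readers who want CBAM as a standalone drop-in layer rather than woven into a residual block, here is a hypothetical compact wrapper (the class name and defaults are my own, not the author's code) that follows the sequential, channel-first order described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

class CBAM(nn.Module):
    """Illustrative standalone CBAM layer: channel attention, then spatial."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # channel attention: shared 1x1-conv MLP over avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        # spatial attention: 7x7 conv over channel-wise avg/max maps
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # channel attention first ...
        avg = Fn.adaptive_avg_pool2d(x, 1)
        mx = Fn.adaptive_max_pool2d(x, 1)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        # ... then spatial attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

x = torch.randn(2, 64, 16, 16)
print(CBAM(64)(x).shape)    # torch.Size([2, 64, 16, 16])
```

Because the output shape matches the input, such a layer could in principle be inserted after any convolutional stage of an existing backbone.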


Finally, ResNet50, ResNet50 + SENet, and ResNet50 + CBAM were compared experimentally, giving the visualization results below:

(Figure: visualization comparison)

The experiments show that CBAM outperforms SENet.

🚀 Reproducing CBAM

Implemented here is the CBAM-ResNet family of networks:

```python
# Here is the code :
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchinfo import summary


class ChannelAttention(nn.Module):                  # Channel Attention Module
    def __init__(self, in_planes):
        super(ChannelAttention, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.fc1 = nn.Conv2d(in_planes, in_planes // 16, kernel_size=1, bias=False)
        self.relu = nn.ReLU()
        self.fc2 = nn.Conv2d(in_planes // 16, in_planes, kernel_size=1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = self.avg_pool(x)
        avg_out = self.fc1(avg_out)
        avg_out = self.relu(avg_out)
        avg_out = self.fc2(avg_out)
        max_out = self.max_pool(x)
        max_out = self.fc1(max_out)
        max_out = self.relu(max_out)
        max_out = self.fc2(max_out)
        out = avg_out + max_out
        out = self.sigmoid(out)
        return out


class SpatialAttention(nn.Module):                  # Spatial Attention Module
    def __init__(self):
        super(SpatialAttention, self).__init__()
        self.conv1 = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        out = torch.cat([avg_out, max_out], dim=1)
        out = self.conv1(out)
        out = self.sigmoid(out)
        return out


class BasicBlock(nn.Module):    # residual block on the left (18-layer, 34-layer)
    expansion = 1

    def __init__(self, in_planes, planes, stride=1):    # two Conv2d layers + shortcut
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.channel = ChannelAttention(self.expansion * planes)    # Channel Attention Module
        self.spatial = SpatialAttention()                           # Spatial Attention Module
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes:
            # shortcut used to build the Conv Block / Identity Block
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        CBAM_Cout = self.channel(out)
        out = out * CBAM_Cout
        CBAM_Sout = self.spatial(out)
        out = out * CBAM_Sout
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class Bottleneck(nn.Module):    # residual block on the right (50-layer, 101-layer, 152-layer)
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):    # three Conv2d layers + shortcut
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion * planes,
                               kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion * planes)
        self.channel = ChannelAttention(self.expansion * planes)    # Channel Attention Module
        self.spatial = SpatialAttention()                           # Spatial Attention Module
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes:
            # shortcut used to build the Conv Block / Identity Block
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        CBAM_Cout = self.channel(out)
        out = out * CBAM_Cout
        CBAM_Sout = self.spatial(out)
        out = out * CBAM_Sout
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class CBAM_ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=1000):
        super(CBAM_ResNet, self).__init__()
        self.in_planes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3,
                               stride=1, padding=1, bias=False)                 # conv1
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)      # conv2_x
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)     # conv3_x
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)     # conv4_x
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)     # conv5_x
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.linear = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        out = self.linear(x)
        return out


def CBAM_ResNet18():
    return CBAM_ResNet(BasicBlock, [2, 2, 2, 2])

def CBAM_ResNet34():
    return CBAM_ResNet(BasicBlock, [3, 4, 6, 3])

def CBAM_ResNet50():
    return CBAM_ResNet(Bottleneck, [3, 4, 6, 3])

def CBAM_ResNet101():
    return CBAM_ResNet(Bottleneck, [3, 4, 23, 3])

def CBAM_ResNet152():
    return CBAM_ResNet(Bottleneck, [3, 8, 36, 3])


def test():
    net = CBAM_ResNet50()
    y = net(torch.randn(1, 3, 224, 224))
    print(y.size())
    summary(net, (1, 3, 224, 224))


if __name__ == '__main__':
    test()
```

Output:

```text
torch.Size([1, 1000])
===============================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
===============================================================================================
CBAM_ResNet                              --                        --
├─Conv2d: 1-1                            [1, 64, 224, 224]         1,728
├─BatchNorm2d: 1-2                       [1, 64, 224, 224]         128
├─Sequential: 1-3                        [1, 256, 224, 224]        --
│    └─Bottleneck: 2-1                   [1, 256, 224, 224]        --
│    │    └─Conv2d: 3-1                  [1, 64, 224, 224]         4,096
│    │    └─BatchNorm2d: 3-2             [1, 64, 224, 224]         128
│    │    └─Conv2d: 3-3                  [1, 64, 224, 224]         36,864
│    │    └─BatchNorm2d: 3-4             [1, 64, 224, 224]         128
│    │    └─Conv2d: 3-5                  [1, 256, 224, 224]        16,384
│    │    └─BatchNorm2d: 3-6             [1, 256, 224, 224]        512
│    │    └─ChannelAttention: 3-7        [1, 256, 1, 1]            8,192
│    │    └─SpatialAttention: 3-8        [1, 1, 1, 1]              98
│    │    └─Sequential: 3-9              [1, 256, 224, 224]        16,896
│    └─Bottleneck: 2-2                   [1, 256, 224, 224]        --
│    │    └─Conv2d: 3-10                 [1, 64, 224, 224]         16,384
│    │    └─BatchNorm2d: 3-11            [1, 64, 224, 224]         128
│    │    └─Conv2d: 3-12                 [1, 64, 224, 224]         36,864
│    │    └─BatchNorm2d: 3-13            [1, 64, 224, 224]         128
│    │    └─Conv2d: 3-14                 [1, 256, 224, 224]        16,384
│    │    └─BatchNorm2d: 3-15            [1, 256, 224, 224]        512
│    │    └─ChannelAttention: 3-16       [1, 256, 1, 1]            8,192
│    │    └─SpatialAttention: 3-17       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-18             [1, 256, 224, 224]        --
│    └─Bottleneck: 2-3                   [1, 256, 224, 224]        --
│    │    └─Conv2d: 3-19                 [1, 64, 224, 224]         16,384
│    │    └─BatchNorm2d: 3-20            [1, 64, 224, 224]         128
│    │    └─Conv2d: 3-21                 [1, 64, 224, 224]         36,864
│    │    └─BatchNorm2d: 3-22            [1, 64, 224, 224]         128
│    │    └─Conv2d: 3-23                 [1, 256, 224, 224]        16,384
│    │    └─BatchNorm2d: 3-24            [1, 256, 224, 224]        512
│    │    └─ChannelAttention: 3-25       [1, 256, 1, 1]            8,192
│    │    └─SpatialAttention: 3-26       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-27             [1, 256, 224, 224]        --
├─Sequential: 1-4                        [1, 512, 112, 112]        --
│    └─Bottleneck: 2-4                   [1, 512, 112, 112]        --
│    │    └─Conv2d: 3-28                 [1, 128, 224, 224]        32,768
│    │    └─BatchNorm2d: 3-29            [1, 128, 224, 224]        256
│    │    └─Conv2d: 3-30                 [1, 128, 112, 112]        147,456
│    │    └─BatchNorm2d: 3-31            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-32                 [1, 512, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-33            [1, 512, 112, 112]        1,024
│    │    └─ChannelAttention: 3-34       [1, 512, 1, 1]            32,768
│    │    └─SpatialAttention: 3-35       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-36             [1, 512, 112, 112]        132,096
│    └─Bottleneck: 2-5                   [1, 512, 112, 112]        --
│    │    └─Conv2d: 3-37                 [1, 128, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-38            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-39                 [1, 128, 112, 112]        147,456
│    │    └─BatchNorm2d: 3-40            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-41                 [1, 512, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-42            [1, 512, 112, 112]        1,024
│    │    └─ChannelAttention: 3-43       [1, 512, 1, 1]            32,768
│    │    └─SpatialAttention: 3-44       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-45             [1, 512, 112, 112]        --
│    └─Bottleneck: 2-6                   [1, 512, 112, 112]        --
│    │    └─Conv2d: 3-46                 [1, 128, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-47            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-48                 [1, 128, 112, 112]        147,456
│    │    └─BatchNorm2d: 3-49            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-50                 [1, 512, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-51            [1, 512, 112, 112]        1,024
│    │    └─ChannelAttention: 3-52       [1, 512, 1, 1]            32,768
│    │    └─SpatialAttention: 3-53       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-54             [1, 512, 112, 112]        --
│    └─Bottleneck: 2-7                   [1, 512, 112, 112]        --
│    │    └─Conv2d: 3-55                 [1, 128, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-56            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-57                 [1, 128, 112, 112]        147,456
│    │    └─BatchNorm2d: 3-58            [1, 128, 112, 112]        256
│    │    └─Conv2d: 3-59                 [1, 512, 112, 112]        65,536
│    │    └─BatchNorm2d: 3-60            [1, 512, 112, 112]        1,024
│    │    └─ChannelAttention: 3-61       [1, 512, 1, 1]            32,768
│    │    └─SpatialAttention: 3-62       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-63             [1, 512, 112, 112]        --
├─Sequential: 1-5                        [1, 1024, 56, 56]         --
│    └─Bottleneck: 2-8                   [1, 1024, 56, 56]         --
│    │    └─Conv2d: 3-64                 [1, 256, 112, 112]        131,072
│    │    └─BatchNorm2d: 3-65            [1, 256, 112, 112]        512
│    │    └─Conv2d: 3-66                 [1, 256, 56, 56]          589,824
│    │    └─BatchNorm2d: 3-67            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-68                 [1, 1024, 56, 56]         262,144
│    │    └─BatchNorm2d: 3-69            [1, 1024, 56, 56]         2,048
│    │    └─ChannelAttention: 3-70       [1, 1024, 1, 1]           131,072
│    │    └─SpatialAttention: 3-71       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-72             [1, 1024, 56, 56]         526,336
│    └─Bottleneck: 2-9                   [1, 1024, 56, 56]         --
│    │    └─Conv2d: 3-73                 [1, 256, 56, 56]          262,144
│    │    └─BatchNorm2d: 3-74            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-75                 [1, 256, 56, 56]          589,824
│    │    └─BatchNorm2d: 3-76            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-77                 [1, 1024, 56, 56]         262,144
│    │    └─BatchNorm2d: 3-78            [1, 1024, 56, 56]         2,048
│    │    └─ChannelAttention: 3-79       [1, 1024, 1, 1]           131,072
│    │    └─SpatialAttention: 3-80       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-81             [1, 1024, 56, 56]         --
│    └─Bottleneck: 2-10                  [1, 1024, 56, 56]         --
│    │    └─Conv2d: 3-82                 [1, 256, 56, 56]          262,144
│    │    └─BatchNorm2d: 3-83            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-84                 [1, 256, 56, 56]          589,824
│    │    └─BatchNorm2d: 3-85            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-86                 [1, 1024, 56, 56]         262,144
│    │    └─BatchNorm2d: 3-87            [1, 1024, 56, 56]         2,048
│    │    └─ChannelAttention: 3-88       [1, 1024, 1, 1]           131,072
│    │    └─SpatialAttention: 3-89       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-90             [1, 1024, 56, 56]         --
│    └─Bottleneck: 2-11                  [1, 1024, 56, 56]         --
│    │    └─Conv2d: 3-91                 [1, 256, 56, 56]          262,144
│    │    └─BatchNorm2d: 3-92            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-93                 [1, 256, 56, 56]          589,824
│    │    └─BatchNorm2d: 3-94            [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-95                 [1, 1024, 56, 56]         262,144
│    │    └─BatchNorm2d: 3-96            [1, 1024, 56, 56]         2,048
│    │    └─ChannelAttention: 3-97       [1, 1024, 1, 1]           131,072
│    │    └─SpatialAttention: 3-98       [1, 1, 1, 1]              98
│    │    └─Sequential: 3-99             [1, 1024, 56, 56]         --
│    └─Bottleneck: 2-12                  [1, 1024, 56, 56]         --
│    │    └─Conv2d: 3-100                [1, 256, 56, 56]          262,144
│    │    └─BatchNorm2d: 3-101           [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-102                [1, 256, 56, 56]          589,824
│    │    └─BatchNorm2d: 3-103           [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-104                [1, 1024, 56, 56]         262,144
│    │    └─BatchNorm2d: 3-105           [1, 1024, 56, 56]         2,048
│    │    └─ChannelAttention: 3-106      [1, 1024, 1, 1]           131,072
│    │    └─SpatialAttention: 3-107      [1, 1, 1, 1]              98
│    │    └─Sequential: 3-108            [1, 1024, 56, 56]         --
│    └─Bottleneck: 2-13                  [1, 1024, 56, 56]         --
│    │    └─Conv2d: 3-109                [1, 256, 56, 56]          262,144
│    │    └─BatchNorm2d: 3-110           [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-111                [1, 256, 56, 56]          589,824
│    │    └─BatchNorm2d: 3-112           [1, 256, 56, 56]          512
│    │    └─Conv2d: 3-113                [1, 1024, 56, 56]         262,144
│    │    └─BatchNorm2d: 3-114           [1, 1024, 56, 56]         2,048
│    │    └─ChannelAttention: 3-115      [1, 1024, 1, 1]           131,072
│    │    └─SpatialAttention: 3-116      [1, 1, 1, 1]              98
│    │    └─Sequential: 3-117            [1, 1024, 56, 56]         --
├─Sequential: 1-6                        [1, 2048, 28, 28]         --
│    └─Bottleneck: 2-14                  [1, 2048, 28, 28]         --
│    │    └─Conv2d: 3-118                [1, 512, 56, 56]          524,288
│    │    └─BatchNorm2d: 3-119           [1, 512, 56, 56]          1,024
│    │    └─Conv2d: 3-120                [1, 512, 28, 28]          2,359,296
│    │    └─BatchNorm2d: 3-121           [1, 512, 28, 28]          1,024
│    │    └─Conv2d: 3-122                [1, 2048, 28, 28]         1,048,576
│    │    └─BatchNorm2d: 3-123           [1, 2048, 28, 28]         4,096
│    │    └─ChannelAttention: 3-124      [1, 2048, 1, 1]           524,288
│    │    └─SpatialAttention: 3-125      [1, 1, 1, 1]              98
│    │    └─Sequential: 3-126            [1, 2048, 28, 28]         2,101,248
│    └─Bottleneck: 2-15                  [1, 2048, 28, 28]         --
│    │    └─Conv2d: 3-127                [1, 512, 28, 28]          1,048,576
│    │    └─BatchNorm2d: 3-128           [1, 512, 28, 28]          1,024
│    │    └─Conv2d: 3-129                [1, 512, 28, 28]          2,359,296
│    │    └─BatchNorm2d: 3-130           [1, 512, 28, 28]          1,024
│    │    └─Conv2d: 3-131                [1, 2048, 28, 28]         1,048,576
│    │    └─BatchNorm2d: 3-132           [1, 2048, 28, 28]         4,096
│    │    └─ChannelAttention: 3-133      [1, 2048, 1, 1]           524,288
│    │    └─SpatialAttention: 3-134      [1, 1, 1, 1]              98
│    │    └─Sequential: 3-135            [1, 2048, 28, 28]         --
│    └─Bottleneck: 2-16                  [1, 2048, 28, 28]         --
│    │    └─Conv2d: 3-136                [1, 512, 28, 28]          1,048,576
│    │    └─BatchNorm2d: 3-137           [1, 512, 28, 28]          1,024
│    │    └─Conv2d: 3-138                [1, 512, 28, 28]          2,359,296
│    │    └─BatchNorm2d: 3-139           [1, 512, 28, 28]          1,024
│    │    └─Conv2d: 3-140                [1, 2048, 28, 28]         1,048,576
│    │    └─BatchNorm2d: 3-141           [1, 2048, 28, 28]         4,096
│    │    └─ChannelAttention: 3-142      [1, 2048, 1, 1]           524,288
│    │    └─SpatialAttention: 3-143      [1, 1, 1, 1]              98
│    │    └─Sequential: 3-144            [1, 2048, 28, 28]         --
├─AdaptiveAvgPool2d: 1-7                 [1, 2048, 1, 1]           --
├─Linear: 1-8                            [1, 1000]                 2,049,000
===============================================================================================
Total params: 28,065,864
Trainable params: 28,065,864
Non-trainable params: 0
Total mult-adds (G): 63.60
===============================================================================================
Input size (MB): 0.60
Forward/backward pass size (MB): 2691.18
Params size (MB): 112.26
Estimated Total Size (MB): 2804.04
===============================================================================================
```

Reposted from: https://blog.csdn.net/weixin_45084253/article/details/124270271
Copyright belongs to the original author, Horizon Max. In case of infringement, please contact us for removal.
