

【YOLOv5-6.x】Combining BiFPN with Learnable Weights (Add Operation)


Preface

In an earlier post, I briefly introduced how BiFPN works and how the YOLOv5 author integrates BiFPN into the network: 【魔改YOLOv5-6.x(中)】:加入ACON激活函数、CBAM和CA注意力机制、加权双向特征金字塔BiFPN

This post takes the BiFPN integration a step further, mainly following: YOLOv5结合BiFPN

Modifying the yaml file (using yolov5s as an example)

Modifying only one place

This post uses yolov5s.yaml as the example. When editing the model configuration file, note the following:

  • This yaml file changes only one place: the Concat at layer 19 is replaced with BiFPN_Add. To replace the Concat of other layers, follow the same pattern.
  • BiFPN_Add is essentially an add operation, not a concat operation, so every input to BiFPN_Add must have exactly the same shape (channel count, feature map size, and so on). The parameters feeding the layer [-1, 13, 6] therefore have to be adjusted to satisfy this requirement (see the quick channel-arithmetic sketch after the yaml below):
    - Layer -1 is the output of the previous layer; its output channel argument was originally 256 and is changed to 512 here.
    - Layer 13 is [-1, 3, C3, [512, False]], # 13.
    - With these changes, every input to BiFPN_Add has shape [bs, 256, 40, 40].
    - Finally, the args of the BiFPN_Add layer are set to [256, 256], i.e. both the input and output channel counts are 256.
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 BiFPN head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [512, 3, 2]],  # adjust channel count so that BiFPN_Add can add correctly
   [[-1, 13, 6], 1, BiFPN_Add3, [256, 256]],  # cat P4 <--- BiFPN change (note: v5s actual channels are half the listed args)
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
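As a quick check of the channel arithmetic mentioned above (a standalone sketch, not code from the original post): parse_model in yolo.py scales every channel argument by width_multiple and rounds it up to a multiple of 8, which is why the yaml lists 512 while the actual yolov5s channel count is 256.

import math

def make_divisible(x, divisor=8):
    # the same rounding rule YOLOv5 applies to scaled channel counts
    return math.ceil(x / divisor) * divisor

width_multiple = 0.50  # from yolov5s.yaml
print(make_divisible(512 * width_multiple))  # 256 -> actual channels of layers 13, 18 and BiFPN_Add3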

Replacing every Concat with BiFPN_Add

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 BiFPN head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, BiFPN_Add2, [256, 256]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, BiFPN_Add2, [128, 128]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [512, 3, 2]],  # adjust channel count so that BiFPN_Add can add correctly
   [[-1, 13, 6], 1, BiFPN_Add3, [256, 256]],  # cat P4 <--- BiFPN change (note: v5s actual channels are half the listed args)
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, BiFPN_Add2, [256, 256]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Printing the model parameters

To test the model configuration file and inspect its output, you can refer to this post: 【YOLOv5-6.x】模型参数及detect层输出测试(自用):
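For instance, a minimal sketch (assuming the modified config is saved under a hypothetical name such as models/yolov5s_bifpn.yaml) that builds the model and prints the per-layer table and the summary shown below; running python models/yolo.py --cfg with the same file from the repository root prints the same output:

from models.yolo import Model

# parsing the yaml logs the per-layer table below
model = Model('models/yolov5s_bifpn.yaml', ch=3, nc=80)
model.info()  # prints the "Model Summary: layers / parameters / GFLOPs" line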

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1     65794  models.common.BiFPN_Add2                [256, 256]
 13                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1     16514  models.common.BiFPN_Add2                [128, 128]
 17                -1  1     74496  models.common.C3                        [128, 128, 1, False]
 18                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
 19       [-1, 13, 6]  1     65795  models.common.BiFPN_Add3                [256, 256]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1     65794  models.common.BiFPN_Add2                [256, 256]
 23                -1  1   1051648  models.common.C3                        [256, 512, 1, False]
 24      [17, 20, 23]  1    229245  models.yolo.Detect                      [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 278 layers, 7384006 parameters, 7384006 gradients, 17.2 GFLOPs

Modifying common.py

  • Copy and paste the following code into models/common.py:
# Combine with BiFPN: set learnable parameters that learn the weight of each branch

# add operation over two branches
class BiFPN_Add2(nn.Module):
    def __init__(self, c1, c2):
        super(BiFPN_Add2, self).__init__()
        # nn.Parameter turns a non-trainable Tensor into a trainable parameter,
        # registers it with the host model (so it appears in model.parameters()),
        # and lets it be optimized together with the other parameters
        self.w = nn.Parameter(torch.ones(2, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1]))


# add operation over three branches
class BiFPN_Add3(nn.Module):
    def __init__(self, c1, c2):
        super(BiFPN_Add3, self).__init__()
        self.w = nn.Parameter(torch.ones(3, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        # normalize the weights (fast normalized fusion)
        weight = w / (torch.sum(w, dim=0) + self.epsilon)
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1] + weight[2] * x[2]))
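A standalone sanity check (a sketch of my own, assuming the two classes above have been added to models/common.py): fuse two feature maps of identical shape and confirm that, unlike Concat, the add keeps the shape unchanged.

import torch
from models.common import BiFPN_Add2

x1 = torch.randn(1, 256, 40, 40)  # e.g. a P4-level feature map
x2 = torch.randn(1, 256, 40, 40)  # a second branch with exactly the same shape
m = BiFPN_Add2(256, 256)

y = m([x1, x2])
print(y.shape)  # torch.Size([1, 256, 40, 40])
print(m.w)      # two learnable fusion weights, both initialized to 1.0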

Modifying yolo.py

  • In the parse_model function, find the elif m is Concat: statement and add the BiFPN_Add handling right after it:
elif m is Concat:
    c2 = sum(ch[x] for x in f)
# add the BiFPN_Add branches
elif m in [BiFPN_Add2, BiFPN_Add3]:
    c2 = max([ch[x] for x in f])
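To illustrate what this change does (a hypothetical example, not code from the post): for the layer [[-1, 13, 6], 1, BiFPN_Add3, [256, 256]], Concat would sum the input channels, while the add-based fusion keeps them unchanged, so taking the max of identical values simply returns that channel count.

f = [-1, 13, 6]                      # 'from' indices of the fused branches
ch_map = {-1: 256, 13: 256, 6: 256}  # hypothetical stand-in for parse_model's channel list

c2_concat = sum(ch_map[x] for x in f)  # Concat: 256 + 256 + 256 = 768 output channels
c2_add = max([ch_map[x] for x in f])   # BiFPN_Add: channels stay at 256
print(c2_concat, c2_add)               # 768 256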

Modifying train.py

1. Adding the BiFPN weight parameters to the optimizer

  • Add the w parameter defined in BiFPN_Add2 and BiFPN_Add3 to parameter group g1:
    g0, g1, g2 = [], [], []  # optimizer parameter groups
    for v in model.modules():
        # hasattr: checks whether the object has the given attribute; returns a boolean
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):  # bias
            g2.append(v.bias)  # biases
        if isinstance(v, nn.BatchNorm2d):  # weight (no decay)
            g0.append(v.weight)
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):  # weight (with decay)
            g1.append(v.weight)
        # BiFPN_Add fusion weights
        elif isinstance(v, BiFPN_Add2) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
            g1.append(v.w)
        elif isinstance(v, BiFPN_Add3) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
            g1.append(v.w)

2. Checking how the BiFPN_Add parameters are updated

To watch how the parameters of the BiFPN_Add layers are updated, you can refer to this post: 【Pytorch】查看模型某一层的参数数值(自用). Locate the w parameter directly and print its value as the model trains.
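A minimal sketch of my own (assuming the model object from train.py and the classes from models/common.py) that prints the current fusion weights of every BiFPN_Add layer, e.g. once per epoch:

from models.common import BiFPN_Add2, BiFPN_Add3

def print_bifpn_weights(model):
    # print the learnable fusion weights of every BiFPN_Add layer in the model
    for name, module in model.named_modules():
        if isinstance(module, (BiFPN_Add2, BiFPN_Add3)):
            print(name, module.w.detach().cpu().numpy())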

References

YOLOv5结合BiFPN

【论文笔记】EfficientDet(BiFPN)(2020)

nn.Module、nn.Sequential和torch.nn.parameter学习笔记


This article is reposted from: https://blog.csdn.net/weixin_43799388/article/details/124091648
Copyright belongs to the original author 嗜睡的篠龙. In case of infringement, please contact us for removal.
