

YOLOv8 Improvements | 2023 Backbone Series | Replacing the Backbone with EfficientViT (an Efficient Vision Transformer)

1. Introduction

The improvement presented in this article is EfficientViT (an efficient vision transformer). Its core is a lightweight multi-scale linear attention module that achieves a global receptive field and multi-scale learning while using only hardware-efficient operations. Note that this article covers the newer 2023 version of EfficientViT, i.e. the architecture from the paper "EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction" (please keep this in mind). The article first explains how the model works, then walks you step by step through adding it to the YOLOv8 network, and finally provides a record of a successful run. If you run into any problem, leave a comment and I will reply. In my own tests it produced a large improvement on both small-object and large-object detection datasets (mAP improved by roughly 0.1).

Recommendation: ⭐⭐⭐⭐⭐

Accuracy gain: ⭐⭐⭐⭐⭐

Column recap: YOLOv8 Improvement Series column — continuously reproducing work from top conferences — essential for research

Training results comparison chart ->

For this experiment I used a dataset of roughly 700-800 images and trained for 150 epochs. Although the model had not fully converged, it already showed a large improvement, so it is worth trying; results can of course vary quite a bit between datasets. I also provide several yaml files later so you can experiment with each of them and compare the results.

As the chart shows, mAP improved by roughly 0.1.

2. EfficientViT Model Principles

Paper: official paper link

Code: official code link


2.1 The Basic Principles of EfficientViT

EfficientViT is an efficient vision transformer designed for processing high-resolution images. It improves accuracy while reducing computational cost through a novel multi-scale linear attention mechanism. The attention mechanism is optimized to be hardware friendly, enabling fast image processing on a variety of platforms, including mobile CPUs, edge GPUs, and cloud GPUs. Compared with traditional high-resolution dense-prediction models, EfficientViT maintains high accuracy while greatly improving computational efficiency.

The basic principles of EfficientViT can be summarized as follows:

1. Multi-scale linear attention: EfficientViT adopts a new multi-scale linear attention mechanism designed to improve both the efficiency and the effectiveness of the model on high-resolution images.

2. Lightweight, hardware-efficient operations: unlike traditional high-resolution dense-prediction models, EfficientViT achieves a global receptive field and multi-scale learning through lightweight, hardware-efficient operations, which helps lower the computational cost.

3. Significant performance gains and speedups: on a range of hardware platforms, including mobile CPUs, edge GPUs, and cloud GPUs, EfficientViT delivers significant performance gains and speedups over previous models.


2.2 Multi-Scale Linear Attention

Multi-scale linear attention is a lightweight attention module that improves efficiency when processing high-resolution images. It is designed to achieve a global receptive field and multi-scale learning with simplified operations, which is particularly important for high-resolution dense prediction. While remaining hardware efficient, it can effectively capture long-range dependencies, making it well suited to high-resolution visual recognition tasks.

The figure below shows the building blocks of EfficientViT. On the left is the basic EfficientViT block, consisting of the multi-scale linear attention module and a feed-forward network with depthwise convolution (FFN+DWConv). On the right, the multi-scale linear attention is shown in detail: it obtains multi-scale Q/K/V tokens by aggregating neighboring tokens.

After the Q/K/V tokens are produced by a linear projection layer, lightweight small-kernel convolutions generate the multi-scale tokens, which are then processed by ReLU linear attention. Finally, the outputs are concatenated and fed into a final linear projection layer for feature fusion. The design captures both contextual and local information in a computation- and memory-efficient way.
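To make the "linear" part concrete, here is a minimal sketch of ReLU linear attention that I wrote for illustration (the function name and tensor layout are my own assumptions, not the authors' code). By computing K^T V first, the cost grows linearly with the number of tokens N instead of quadratically; the LiteMLA module in Section 3 performs the same computation but folds the denominator into the matmul by padding V with a column of ones.

import torch
import torch.nn.functional as F

def relu_linear_attention(q, k, v, eps=1e-15):
    """Illustrative ReLU linear attention. q, k, v: (B, heads, N, dim)."""
    q = F.relu(q)                                   # ReLU kernel replaces Softmax
    k = F.relu(k)
    kv = torch.matmul(k.transpose(-1, -2), v)       # (B, heads, dim, dim): cost is linear in N
    num = torch.matmul(q, kv)                       # numerator Q (K^T V)
    den = torch.matmul(q, k.sum(dim=-2, keepdim=True).transpose(-1, -2))  # (B, heads, N, 1)
    return num / (den + eps)                        # row-wise normalization

# quick shape check
out = relu_linear_attention(torch.randn(2, 4, 1024, 8),
                            torch.randn(2, 4, 1024, 8),
                            torch.randn(2, 4, 1024, 8))
print(out.shape)  # torch.Size([2, 4, 1024, 8])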


2.3 Lightweight, Hardware-Efficient Operations

The lightweight, hardware-efficient operations in EfficientViT refer mainly to the simplified attention and convolution operations used in the model, which allow it to run efficiently on a wide range of hardware. Concretely, the model combines multi-scale linear attention with a depthwise-convolution feed-forward network and avoids the computationally expensive Softmax in the attention module, keeping accuracy while markedly reducing computational complexity. These operations include replacing traditional Softmax attention with multi-scale linear attention and using depthwise separable convolutions to reduce parameters and computation.
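As a rough, self-contained illustration of why depthwise separable convolutions are cheaper (the channel count of 128 is an arbitrary example of mine, not a figure from the paper):

import torch.nn as nn

count = lambda m: sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False)               # 3*3*128*128 weights
depthwise = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=128, bias=False)  # 3*3*128 weights
pointwise = nn.Conv2d(128, 128, kernel_size=1, bias=False)                          # 128*128 weights

print(count(standard))                      # 147456
print(count(depthwise) + count(pointwise))  # 17536, roughly 8x fewer parameters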

The figure below shows the macro architecture of EfficientViT:

The macro architecture of EfficientViT consists of a standard backbone plus a head / encoder-decoder design. EfficientViT modules are inserted into the third and fourth stages of the backbone. Following common practice, the features from the last three stages (P2, P3, and P4) are fed into the head, and addition is used to fuse them for simplicity and efficiency. The head itself is simple, made up of several MBConv blocks and output layers. Within this framework, the new lightweight multi-scale attention lets EfficientViT process high-resolution images efficiently while remaining adaptable to different hardware platforms.


2.4 Significant Performance Gains and Speedups

The significant performance gains and speedups refer to the model showing better efficiency and speed than previous models on image-processing tasks across a variety of hardware platforms. This comes from design optimizations such as multi-scale linear attention and depthwise separable convolutions. These improvements let the model greatly reduce latency while maintaining accuracy on high-resolution tasks such as the Cityscapes dataset. In some applications, EfficientViT offers several-fold reductions in GPU latency compared with existing state-of-the-art models, making it very practical on resource-constrained devices.

3. The Complete EfficientViT Code

import torch.nn as nn
import torch
from inspect import signature
from timm.models.efficientvit_mit import val2tuple, ResidualBlock
from torch.cuda.amp import autocast
import torch.nn.functional as F


class LayerNorm2d(nn.LayerNorm):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x - torch.mean(x, dim=1, keepdim=True)
        out = out / torch.sqrt(torch.square(out).mean(dim=1, keepdim=True) + self.eps)
        if self.elementwise_affine:
            out = out * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
        return out


REGISTERED_NORM_DICT: dict[str, type] = {
    "bn2d": nn.BatchNorm2d,
    "ln": nn.LayerNorm,
    "ln2d": LayerNorm2d,
}

# register activation function here
REGISTERED_ACT_DICT: dict[str, type] = {
    "relu": nn.ReLU,
    "relu6": nn.ReLU6,
    "hswish": nn.Hardswish,
    "silu": nn.SiLU,
}


class FusedMBConv(nn.Module):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size=3,
        stride=1,
        mid_channels=None,
        expand_ratio=6,
        groups=1,
        use_bias=False,
        norm=("bn2d", "bn2d"),
        act_func=("relu6", None),
    ):
        super().__init__()
        use_bias = val2tuple(use_bias, 2)
        norm = val2tuple(norm, 2)
        act_func = val2tuple(act_func, 2)

        mid_channels = mid_channels or round(in_channels * expand_ratio)

        self.spatial_conv = ConvLayer(
            in_channels,
            mid_channels,
            kernel_size,
            stride,
            groups=groups,
            use_bias=use_bias[0],
            norm=norm[0],
            act_func=act_func[0],
        )
        self.point_conv = ConvLayer(
            mid_channels,
            out_channels,
            1,
            use_bias=use_bias[1],
            norm=norm[1],
            act_func=act_func[1],
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.spatial_conv(x)
        x = self.point_conv(x)
        return x


class DSConv(nn.Module):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size=3,
        stride=1,
        use_bias=False,
        norm=("bn2d", "bn2d"),
        act_func=("relu6", None),
    ):
        super(DSConv, self).__init__()
        use_bias = val2tuple(use_bias, 2)
        norm = val2tuple(norm, 2)
        act_func = val2tuple(act_func, 2)

        self.depth_conv = ConvLayer(
            in_channels,
            in_channels,
            kernel_size,
            stride,
            groups=in_channels,
            norm=norm[0],
            act_func=act_func[0],
            use_bias=use_bias[0],
        )
        self.point_conv = ConvLayer(
            in_channels,
            out_channels,
            1,
            norm=norm[1],
            act_func=act_func[1],
            use_bias=use_bias[1],
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.depth_conv(x)
        x = self.point_conv(x)
        return x


class MBConv(nn.Module):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size=3,
        stride=1,
        mid_channels=None,
        expand_ratio=6,
        use_bias=False,
        norm=("bn2d", "bn2d", "bn2d"),
        act_func=("relu6", "relu6", None),
    ):
        super(MBConv, self).__init__()
        use_bias = val2tuple(use_bias, 3)
        norm = val2tuple(norm, 3)
        act_func = val2tuple(act_func, 3)
        mid_channels = mid_channels or round(in_channels * expand_ratio)

        self.inverted_conv = ConvLayer(
            in_channels,
            mid_channels,
            1,
            stride=1,
            norm=norm[0],
            act_func=act_func[0],
            use_bias=use_bias[0],
        )
        self.depth_conv = ConvLayer(
            mid_channels,
            mid_channels,
            kernel_size,
            stride=stride,
            groups=mid_channels,
            norm=norm[1],
            act_func=act_func[1],
            use_bias=use_bias[1],
        )
        self.point_conv = ConvLayer(
            mid_channels,
            out_channels,
            1,
            norm=norm[2],
            act_func=act_func[2],
            use_bias=use_bias[2],
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.inverted_conv(x)
        x = self.depth_conv(x)
        x = self.point_conv(x)
        return x


class EfficientViTBlock(nn.Module):
    def __init__(
        self,
        in_channels: int,
        heads_ratio: float = 1.0,
        dim=32,
        expand_ratio: float = 4,
        norm="bn2d",
        act_func="hswish",
    ):
        super(EfficientViTBlock, self).__init__()
        self.context_module = ResidualBlock(
            LiteMLA(
                in_channels=in_channels,
                out_channels=in_channels,
                heads_ratio=heads_ratio,
                dim=dim,
                norm=(None, norm),
            ),
            IdentityLayer(),
        )
        local_module = MBConv(
            in_channels=in_channels,
            out_channels=in_channels,
            expand_ratio=expand_ratio,
            use_bias=(True, True, False),
            norm=(None, None, norm),
            act_func=(act_func, act_func, None),
        )
        self.local_module = ResidualBlock(local_module, IdentityLayer())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.context_module(x)
        x = self.local_module(x)
        return x


class ResBlock(nn.Module):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size=3,
        stride=1,
        mid_channels=None,
        expand_ratio=1,
        use_bias=False,
        norm=("bn2d", "bn2d"),
        act_func=("relu6", None),
    ):
        super().__init__()
        use_bias = val2tuple(use_bias, 2)
        norm = val2tuple(norm, 2)
        act_func = val2tuple(act_func, 2)

        mid_channels = mid_channels or round(in_channels * expand_ratio)

        self.conv1 = ConvLayer(
            in_channels,
            mid_channels,
            kernel_size,
            stride,
            use_bias=use_bias[0],
            norm=norm[0],
            act_func=act_func[0],
        )
        self.conv2 = ConvLayer(
            mid_channels,
            out_channels,
            kernel_size,
            1,
            use_bias=use_bias[1],
            norm=norm[1],
            act_func=act_func[1],
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv1(x)
        x = self.conv2(x)
        return x


class LiteMLA(nn.Module):
    r"""Lightweight multi-scale linear attention"""

    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        heads: int or None = None,
        heads_ratio: float = 1.0,
        dim=8,
        use_bias=False,
        norm=(None, "bn2d"),
        act_func=(None, None),
        kernel_func="relu6",
        scales: tuple[int, ...] = (5,),
        eps=1.0e-15,
    ):
        super(LiteMLA, self).__init__()
        self.eps = eps
        heads = heads or int(in_channels // dim * heads_ratio)
        total_dim = heads * dim

        use_bias = val2tuple(use_bias, 2)
        norm = val2tuple(norm, 2)
        act_func = val2tuple(act_func, 2)

        self.dim = dim
        self.qkv = ConvLayer(
            in_channels,
            3 * total_dim,
            1,
            use_bias=use_bias[0],
            norm=norm[0],
            act_func=act_func[0],
        )
        self.aggreg = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(
                        3 * total_dim,
                        3 * total_dim,
                        scale,
                        padding=get_same_padding(scale),
                        groups=3 * total_dim,
                        bias=use_bias[0],
                    ),
                    nn.Conv2d(3 * total_dim, 3 * total_dim, 1, groups=3 * heads, bias=use_bias[0]),
                )
                for scale in scales
            ]
        )
        self.kernel_func = build_act(kernel_func, inplace=False)

        self.proj = ConvLayer(
            total_dim * (1 + len(scales)),
            out_channels,
            1,
            use_bias=use_bias[1],
            norm=norm[1],
            act_func=act_func[1],
        )

    @autocast(enabled=False)
    def relu_linear_att(self, qkv: torch.Tensor) -> torch.Tensor:
        B, _, H, W = list(qkv.size())

        if qkv.dtype == torch.float16:
            qkv = qkv.float()

        qkv = torch.reshape(
            qkv,
            (
                B,
                -1,
                3 * self.dim,
                H * W,
            ),
        )
        qkv = torch.transpose(qkv, -1, -2)
        q, k, v = (
            qkv[..., 0: self.dim],
            qkv[..., self.dim: 2 * self.dim],
            qkv[..., 2 * self.dim:],
        )

        # lightweight linear attention
        q = self.kernel_func(q)
        k = self.kernel_func(k)

        # linear matmul
        trans_k = k.transpose(-1, -2)

        v = F.pad(v, (0, 1), mode="constant", value=1)
        kv = torch.matmul(trans_k, v)
        out = torch.matmul(q, kv)
        out = torch.clone(out)
        out = out[..., :-1] / (out[..., -1:] + self.eps)

        out = torch.transpose(out, -1, -2)
        out = torch.reshape(out, (B, -1, H, W))
        return out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # generate multi-scale q, k, v
        qkv = self.qkv(x)
        multi_scale_qkv = [qkv]
        device, types = qkv.device, qkv.dtype
        for op in self.aggreg:
            if device.type == 'cuda' and types == torch.float32:
                qkv = qkv.to(torch.float16)
            x1 = op(qkv)
            multi_scale_qkv.append(x1)
        multi_scale_qkv = torch.cat(multi_scale_qkv, dim=1)

        out = self.relu_linear_att(multi_scale_qkv)
        out = self.proj(out)

        return out

    @staticmethod
    def configure_litemla(model: nn.Module, **kwargs) -> None:
        eps = kwargs.get("eps", None)
        for m in model.modules():
            if isinstance(m, LiteMLA):
                if eps is not None:
                    m.eps = eps


def build_kwargs_from_config(config: dict, target_func: callable) -> dict[str, any]:
    valid_keys = list(signature(target_func).parameters)
    kwargs = {}
    for key in config:
        if key in valid_keys:
            kwargs[key] = config[key]
    return kwargs


def build_norm(name="bn2d", num_features=None, **kwargs) -> nn.Module or None:
    if name in ["ln", "ln2d"]:
        kwargs["normalized_shape"] = num_features
    else:
        kwargs["num_features"] = num_features
    if name in REGISTERED_NORM_DICT:
        norm_cls = REGISTERED_NORM_DICT[name]
        args = build_kwargs_from_config(kwargs, norm_cls)
        return norm_cls(**args)
    else:
        return None


def get_same_padding(kernel_size: int or tuple[int, ...]) -> int or tuple[int, ...]:
    if isinstance(kernel_size, tuple):
        return tuple([get_same_padding(ks) for ks in kernel_size])
    else:
        assert kernel_size % 2 > 0, "kernel size should be odd number"
        return kernel_size // 2


def build_act(name: str, **kwargs) -> nn.Module or None:
    if name in REGISTERED_ACT_DICT:
        act_cls = REGISTERED_ACT_DICT[name]
        args = build_kwargs_from_config(kwargs, act_cls)
        return act_cls(**args)
    else:
        return None


class ConvLayer(nn.Module):
    def __init__(
        self,
        in_channels: int,
        out_channels: int,
        kernel_size=3,
        stride=1,
        dilation=1,
        groups=1,
        use_bias=False,
        dropout=0,
        norm="bn2d",
        act_func="relu",
    ):
        super(ConvLayer, self).__init__()

        padding = get_same_padding(kernel_size)
        padding *= dilation

        self.dropout = nn.Dropout2d(dropout, inplace=False) if dropout > 0 else None
        self.conv = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size=(kernel_size, kernel_size),
            stride=(stride, stride),
            padding=padding,
            dilation=(dilation, dilation),
            groups=groups,
            bias=use_bias,
        )
        self.norm = build_norm(norm, num_features=out_channels)
        self.act = build_act(act_func)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.dropout is not None:
            x = self.dropout(x)
        device, type = x.device, x.dtype
        choose = False
        if device.type == 'cuda' and type == torch.float32:
            x = x.to(torch.float16)
            choose = True
        x = self.conv(x)
        if self.norm:
            x = self.norm(x)
        if self.act:
            x = self.act(x)
        if choose:
            x = x.to(torch.float16)
        return x


class IdentityLayer(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x


class OpSequential(nn.Module):
    def __init__(self, op_list: list[nn.Module or None]):
        super(OpSequential, self).__init__()
        valid_op_list = []
        for op in op_list:
            if op is not None:
                valid_op_list.append(op)
        self.op_list = nn.ModuleList(valid_op_list)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for op in self.op_list:
            x = op(x)
        return x


class EfficientViTBackbone(nn.Module):
    def __init__(
        self,
        width_list: list[int],
        depth_list: list[int],
        in_channels=3,
        dim=32,
        expand_ratio=4,
        norm="ln2d",
        act_func="hswish",
    ) -> None:
        super().__init__()
        self.width_list = []
        # input stem
        self.input_stem = [
            ConvLayer(
                in_channels=3,
                out_channels=width_list[0],
                stride=2,
                norm=norm,
                act_func=act_func,
            )
        ]
        for _ in range(depth_list[0]):
            block = self.build_local_block(
                in_channels=width_list[0],
                out_channels=width_list[0],
                stride=1,
                expand_ratio=1,
                norm=norm,
                act_func=act_func,
            )
            self.input_stem.append(ResidualBlock(block, IdentityLayer()))
        in_channels = width_list[0]
        self.input_stem = OpSequential(self.input_stem)
        self.width_list.append(in_channels)

        # stages
        self.stages = []
        for w, d in zip(width_list[1:3], depth_list[1:3]):
            stage = []
            for i in range(d):
                stride = 2 if i == 0 else 1
                block = self.build_local_block(
                    in_channels=in_channels,
                    out_channels=w,
                    stride=stride,
                    expand_ratio=expand_ratio,
                    norm=norm,
                    act_func=act_func,
                )
                block = ResidualBlock(block, IdentityLayer() if stride == 1 else None)
                stage.append(block)
                in_channels = w
            self.stages.append(OpSequential(stage))
            self.width_list.append(in_channels)

        for w, d in zip(width_list[3:], depth_list[3:]):
            stage = []
            block = self.build_local_block(
                in_channels=in_channels,
                out_channels=w,
                stride=2,
                expand_ratio=expand_ratio,
                norm=norm,
                act_func=act_func,
                fewer_norm=True,
            )
            stage.append(ResidualBlock(block, None))
            in_channels = w

            for _ in range(d):
                stage.append(
                    EfficientViTBlock(
                        in_channels=in_channels,
                        dim=dim,
                        expand_ratio=expand_ratio,
                        norm=norm,
                        act_func=act_func,
                    )
                )
            self.stages.append(OpSequential(stage))
            self.width_list.append(in_channels)
        self.stages = nn.ModuleList(self.stages)

    @staticmethod
    def build_local_block(
        in_channels: int,
        out_channels: int,
        stride: int,
        expand_ratio: float,
        norm: str,
        act_func: str,
        fewer_norm: bool = False,
    ) -> nn.Module:
        if expand_ratio == 1:
            block = DSConv(
                in_channels=in_channels,
                out_channels=out_channels,
                stride=stride,
                use_bias=(True, False) if fewer_norm else False,
                norm=(None, norm) if fewer_norm else norm,
                act_func=(act_func, None),
            )
        else:
            block = MBConv(
                in_channels=in_channels,
                out_channels=out_channels,
                stride=stride,
                expand_ratio=expand_ratio,
                use_bias=(True, True, False) if fewer_norm else False,
                norm=(None, None, norm) if fewer_norm else norm,
                act_func=(act_func, act_func, None),
            )
        return block

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        outputs = []
        for stage_id, stage in enumerate(self.stages):
            x = stage(x)
            if x.device.type == 'cuda':
                x = x.to(torch.float16)
            outputs.append(x)
        return outputs


def efficientvit_backbone_b0(**kwargs) -> EfficientViTBackbone:
    backbone = EfficientViTBackbone(
        width_list=[3, 16, 32, 64, 128],
        depth_list=[1, 2, 2, 2, 2],
        dim=16,
        **build_kwargs_from_config(kwargs, EfficientViTBackbone),
    )
    return backbone


def efficientvit_backbone_b1(**kwargs) -> EfficientViTBackbone:
    backbone = EfficientViTBackbone(
        width_list=[3, 32, 64, 128, 256],
        depth_list=[1, 2, 3, 3, 4],
        dim=16,
        **build_kwargs_from_config(kwargs, EfficientViTBackbone),
    )
    return backbone


def efficientvit_backbone_b2(**kwargs) -> EfficientViTBackbone:
    backbone = EfficientViTBackbone(
        width_list=[3, 48, 96, 192, 384],
        depth_list=[1, 3, 4, 4, 6],
        dim=32,
        **build_kwargs_from_config(kwargs, EfficientViTBackbone),
    )
    return backbone


def efficientvit_backbone_b3(**kwargs) -> EfficientViTBackbone:
    backbone = EfficientViTBackbone(
        width_list=[3, 64, 128, 256, 512],
        depth_list=[1, 4, 6, 6, 9],
        dim=32,
        **build_kwargs_from_config(kwargs, EfficientViTBackbone),
    )
    return backbone
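As a quick sanity check (my own test snippet, not part of the original post), the backbone can be instantiated on its own and run on a dummy image to inspect the per-stage feature maps. Append it to the bottom of the file above (so the imports are available, including a recent timm that provides timm.models.efficientvit_mit); on CPU the float16 branches above are simply skipped:

if __name__ == "__main__":
    model = efficientvit_backbone_b0()
    print(model.width_list)          # per-stage channel list, used later by parse_model
    x = torch.randn(1, 3, 640, 640)  # dummy input image
    for i, feat in enumerate(model(x)):
        print(i, feat.shape)         # one progressively downsampled feature map per stage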

4. Step-by-Step Guide to Adding the EfficientViT Network

Of all the improvements covered so far, this backbone is the most involved one to add. Some network structures can be built from a yaml file, but others have details that simply cannot be expressed in yaml (building them that way loses some details, and since every architecture has many structural details that each need to be modified differently, doing it piece by piece is error-prone). So here the module directly returns the whole network, and after a few modifications to the YOLO code, all later backbones will be added the same way. The network models I publish in the future will basically all be integrated through this method, and I will also make some detail-level improvements and propose new structures that you can use directly. The tutorial starts below ->

(Each step is followed by the code; just copy, paste, and replace, but look carefully so you do not replace more than you should.)


Modification 1

Create a new .py file under the "ultralytics/nn/modules" directory and paste the network code above into it; I named mine EfficientV2.


Modification 2

Open the file "ultralytics/nn/tasks.py" and import our model near the top, as shown in the figure below.

from .modules.EfficientV2 import efficientvit_backbone_b0, efficientvit_backbone_b1, efficientvit_backbone_b2, efficientvit_backbone_b3

Modification 3

Add the following single line of code.
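(The screenshot showing this line is not reproduced here. Judging from how the `backbone` flag is used in Modifications 4 through 6, the missing line is almost certainly an initialization of that flag placed just above the module loop in parse_model; treat the line below as my reconstruction, not the original screenshot.)

backbone = False  # assumed: flag initialized before the layer loop; Modification 4 sets it to True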


Modification 4

Go to somewhere around line 700 (see the figure for the exact location) and add the part inside the red box, following the figure. Note that there are no parentheses here, only the function names.

elif m in {efficientvit_backbone_b0, efficientvit_backbone_b1, efficientvit_backbone_b2, efficientvit_backbone_b3}:
    m = m()
    c2 = m.width_list  # return the backbone's channel list
    backbone = True

Modification 5

Both of the red boxes below need to be changed.

if isinstance(c2, list):
    m_ = m
    m_.backbone = True
else:
    m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
t = str(m)[8:-2].replace('__main__.', '')  # module type
m.np = sum(x.numel() for x in m_.parameters())  # number params
m_.i, m_.f, m_.type = i + 4 if backbone else i, f, t  # attach index, 'from' index, type

Modification 6

The following also needs to be modified; change it exactly as I have.

Replace the original code with the code below. (The backbone returns five feature maps that occupy save indices 0-4, so the indices of all later layers are shifted by 4; that is why `i + 4` appears here and in Modification 5.)

save.extend(x % (i + 4 if backbone else i) for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
layers.append(m_)
if i == 0:
    ch = []
if isinstance(c2, list):
    ch.extend(c2)
    if len(c2) != 5:
        ch.insert(0, 0)
else:
    ch.append(c2)

Modification 7

Modification 7 is different from the previous ones: it changes part of the forward pass and is no longer inside the parse_model method.

You can see the line numbers in the figure; we are still inside the same tasks.py file. Note that there are several very similar forward methods in this area, so do not pick the wrong one — it is the one at around line 70. The code is provided below so you can copy and paste it directly; when I have time I will make a video covering this part.

The code is as follows ->

def _predict_once(self, x, profile=False, visualize=False):
    """
    Perform a forward pass through the network.

    Args:
        x (torch.Tensor): The input tensor to the model.
        profile (bool): Print the computation time of each layer if True, defaults to False.
        visualize (bool): Save the feature maps of the model if True, defaults to False.

    Returns:
        (torch.Tensor): The last output of the model.
    """
    y, dt = [], []  # outputs
    for m in self.model:
        if m.f != -1:  # if not from previous layer
            x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
        if profile:
            self._profile_one_layer(m, x, dt)
        if hasattr(m, 'backbone'):
            x = m(x)
            if len(x) != 5:  # 0 - 5
                x.insert(0, None)
            for index, i in enumerate(x):
                if index in self.save:
                    y.append(i)
                else:
                    y.append(None)
            x = x[-1]  # pass the last output on to the next layer
        else:
            x = m(x)  # run
            y.append(x if m.i in self.save else None)  # save output
        if visualize:
            feature_visualization(x, m.type, m.i, save_dir=visualize)
    return x

That completes the modifications, but there are many details here. Be very careful not to replace more code than necessary, and do not skip any step; either mistake will make the run fail, and the resulting errors are very hard to track down!

5. The EfficientViT (2023) yaml File

Copy the following yaml file to run it!

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, efficientvit_backbone_b0, []]  # 4
  - [-1, 1, SPPF, [1024, 5]]  # 5

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]  # 6
  - [[-1, 3], 1, Concat, [1]]  # 7 cat backbone P4
  - [-1, 3, C2f, [512]]  # 8

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]  # 9
  - [[-1, 2], 1, Concat, [1]]  # 10 cat backbone P3
  - [-1, 3, C2f, [256]]  # 11 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]  # 12
  - [[-1, 8], 1, Concat, [1]]  # 13 cat head P4
  - [-1, 3, C2f, [512]]  # 14 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]  # 15
  - [[-1, 5], 1, Concat, [1]]  # 16 cat head P5
  - [-1, 3, C2f, [1024]]  # 17 (P5/32-large)

  - [[11, 14, 17], 1, Detect, [nc]]  # Detect(P3, P4, P5)
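To launch training with it, save the yaml in your project and point the Ultralytics API (or your usual training script) at it; the file name and dataset below are placeholders of mine, not values from the original post:

from ultralytics import YOLO

model = YOLO('yolov8-EfficientViT.yaml')                  # hypothetical file name for the yaml above
model.train(data='coco128.yaml', epochs=150, imgsz=640)   # any detection dataset yaml works here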

6. Record of a Successful Run

Below is a screenshot of a successful run; one epoch of training has already completed (the image is too large to also capture the second epoch).


7. Summary

That concludes the main content of this article. I recommend my YOLOv8 effective-improvements column, which is newly opened and currently has an average quality score of 98. Going forward I will reproduce papers from the latest top conferences and also add some older improvement mechanisms. The column is currently free to read (for now, so follow early so you do not lose track of it). If this article helped you, subscribe to the column and follow for more updates.

Column recap: YOLOv8 Improvement Series column — continuously reproducing work from top conferences — essential for research


Reposted from: https://blog.csdn.net/java1314777/article/details/134889610
Copyright belongs to the original author, Snu77. If there is any infringement, please contact us for removal.
