

A Detailed Explanation of the SSD Object Detection Model

Object detection consists of two separate tasks: classification and localization. Detectors in the R-CNN family have two stages: a region proposal network, followed by a classification and box refinement head. However, such two-stage detectors have largely been superseded by single-stage models. In this article, I want to introduce the Single Shot MultiBox Detector (SSD).

Bounding Box Regression

As in Faster R-CNN, the model regresses offsets to the center (cx, cy) of a default bounding box (d) and to its width (w) and height (h). For a ground-truth box with center (cx, cy), width w, and height h, the encoded regression targets are:

g_cx = (cx - d_cx) / d_w
g_cy = (cy - d_cy) / d_h
g_w = log(w / d_w)
g_h = log(h / d_h)
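To make this concrete, here is a minimal sketch of the encoding in PyTorch, in the spirit of the cxcy_to_gcxgcy helper referenced later in the prediction code. Note that the reference implementation additionally divides by empirical "variance" factors, which I omit here for clarity, so treat this as an illustration rather than the exact tutorial function.

import torch

def encode_offsets(cxcy, priors_cxcy):
    # cxcy, priors_cxcy: tensors of shape (n_boxes, 4) in (cx, cy, w, h) form
    return torch.cat([(cxcy[:, :2] - priors_cxcy[:, :2]) / priors_cxcy[:, 2:],  # g_cx, g_cy
                      torch.log(cxcy[:, 2:] / priors_cxcy[:, 2:])], dim=1)      # g_w, g_h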

Architecture

The architecture (see the diagram in the original paper) is built on a VGG-16 backbone. I will explain it by breaking it into three parts: the backbone, the auxiliary convolutions, and the prediction convolutions. For your convenience, I will also provide some code.

Base Network

import torch
import torch.nn as nn
import torch.nn.functional as F
from math import sqrt


class VGGBase(nn.Module):
    """
    VGG base convolutions to produce lower-level feature maps.
    """
    def __init__(self):
        super(VGGBase, self).__init__()
        # Standard convolutional layers in VGG16
        self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # stride = 1, by default
        self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, padding=1)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
        self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)  # ceiling (not floor) here for even dims
        self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, padding=1)
        self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)  # retains size because stride is 1 (and padding)
        # Replacements for FC6 and FC7 in VGG16
        self.conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)  # atrous convolution
        self.conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
        # Load pretrained layers (implementation omitted here for brevity; it copies
        # ImageNet-pretrained VGG-16 weights and reshapes FC6/FC7 into conv6/conv7)
        self.load_pretrained_layers()

    def forward(self, image):
        """
        Forward propagation.
        :param image: images, a tensor of dimensions (N, 3, 300, 300)
        :return: lower-level feature maps conv4_3 and conv7
        """
        out = F.relu(self.conv1_1(image))  # (N, 64, 300, 300)
        out = F.relu(self.conv1_2(out))  # (N, 64, 300, 300)
        out = self.pool1(out)  # (N, 64, 150, 150)
        out = F.relu(self.conv2_1(out))  # (N, 128, 150, 150)
        out = F.relu(self.conv2_2(out))  # (N, 128, 150, 150)
        out = self.pool2(out)  # (N, 128, 75, 75)
        out = F.relu(self.conv3_1(out))  # (N, 256, 75, 75)
        out = F.relu(self.conv3_2(out))  # (N, 256, 75, 75)
        out = F.relu(self.conv3_3(out))  # (N, 256, 75, 75)
        out = self.pool3(out)  # (N, 256, 38, 38), it would have been 37 if not for ceil_mode = True
        out = F.relu(self.conv4_1(out))  # (N, 512, 38, 38)
        out = F.relu(self.conv4_2(out))  # (N, 512, 38, 38)
        out = F.relu(self.conv4_3(out))  # (N, 512, 38, 38)
        conv4_3_feats = out  # (N, 512, 38, 38)
        out = self.pool4(out)  # (N, 512, 19, 19)
        out = F.relu(self.conv5_1(out))  # (N, 512, 19, 19)
        out = F.relu(self.conv5_2(out))  # (N, 512, 19, 19)
        out = F.relu(self.conv5_3(out))  # (N, 512, 19, 19)
        out = self.pool5(out)  # (N, 512, 19, 19), pool5 does not reduce dimensions
        out = F.relu(self.conv6(out))  # (N, 1024, 19, 19)
        conv7_feats = F.relu(self.conv7(out))  # (N, 1024, 19, 19)
        # Lower-level feature maps
        return conv4_3_feats, conv7_feats

I want to emphasize that the following examples assume an input image size of 300 x 300, as in the original paper.

As you can see, we use the simple and well-known VGG-16 network to extract the conv4_3 and conv7 features, whose dimensions are (N, 512, 38, 38) and (N, 1024, 19, 19) respectively. I hope this part is simple and clear enough that we can move on to the Auxiliary Convolutions.
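As a quick sanity check (my own illustration, which assumes load_pretrained_layers is implemented as in the full tutorial code), you can verify these shapes with a dummy input:

base = VGGBase()
conv4_3_feats, conv7_feats = base(torch.randn(1, 3, 300, 300))
print(conv4_3_feats.shape)  # torch.Size([1, 512, 38, 38])
print(conv7_feats.shape)    # torch.Size([1, 1024, 19, 19])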

Auxiliary Convolutions

class AuxiliaryConvolutions(nn.Module):
    def __init__(self):
        super(AuxiliaryConvolutions, self).__init__()
        # Input is (N, 1024, 19, 19), i.e. conv7_feats
        # Auxiliary/additional convolutions on top of the VGG base
        self.conv8_1 = nn.Conv2d(1024, 256, kernel_size=1, padding=0)  # stride = 1, by default
        self.conv8_2 = nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1)  # dim. reduction because stride > 1
        self.conv9_1 = nn.Conv2d(512, 128, kernel_size=1, padding=0)
        self.conv9_2 = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1)  # dim. reduction because stride > 1
        self.conv10_1 = nn.Conv2d(256, 128, kernel_size=1, padding=0)
        self.conv10_2 = nn.Conv2d(128, 256, kernel_size=3, padding=0)  # dim. reduction because padding = 0
        self.conv11_1 = nn.Conv2d(256, 128, kernel_size=1, padding=0)
        self.conv11_2 = nn.Conv2d(128, 256, kernel_size=3, padding=0)  # dim. reduction because padding = 0
        # Initialize convolutions' parameters
        self.init_conv2d()

    def init_conv2d(self):
        """
        Initialize convolution parameters using xavier initialization.
        """
        for c in self.children():
            if isinstance(c, nn.Conv2d):
                nn.init.xavier_uniform_(c.weight)
                nn.init.constant_(c.bias, 0.)

    def forward(self, conv7_feats):
        """
        Forward propagation.
        :param conv7_feats: lower-level conv7 feature map, a tensor of dimensions (N, 1024, 19, 19)
        :return: higher-level feature maps (N, 512, 10, 10), (N, 256, 5, 5), (N, 256, 3, 3) and (N, 256, 1, 1)
        """
        out = F.relu(self.conv8_1(conv7_feats))  # (N, 256, 19, 19)
        out = F.relu(self.conv8_2(out))  # (N, 512, 10, 10)
        conv8_2_feats = out  # (N, 512, 10, 10)
        out = F.relu(self.conv9_1(out))  # (N, 128, 10, 10)
        out = F.relu(self.conv9_2(out))  # (N, 256, 5, 5)
        conv9_2_feats = out  # (N, 256, 5, 5)
        out = F.relu(self.conv10_1(out))  # (N, 128, 5, 5)
        out = F.relu(self.conv10_2(out))  # (N, 256, 3, 3)
        conv10_2_feats = out  # (N, 256, 3, 3)
        out = F.relu(self.conv11_1(out))  # (N, 128, 3, 3)
        conv11_2_feats = F.relu(self.conv11_2(out))  # (N, 256, 1, 1)
        # Higher-level feature maps
        return conv8_2_feats, conv9_2_feats, conv10_2_feats, conv11_2_feats

The auxiliary convolutions let us obtain additional feature maps on top of the base VGG-16 network. These layers decrease in size progressively and allow detection predictions at multiple scales. The input we pass to this network is therefore the conv7 feature map obtained from VGG-16. As the convolutions and ReLU activations are applied, we keep the intermediate feature maps, namely conv8_2, conv9_2, conv10_2, and conv11_2. Please take a moment to study the code and the dimensions of the feature maps :)
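If you want to verify the spatial reductions by hand, the standard convolution output-size rule, out = floor((in + 2 * padding - kernel) / stride) + 1, reproduces every step. The helper below is my own illustration, not part of the original code:

def conv_out(size, kernel=3, stride=1, padding=0):
    # Output spatial size of a convolution layer
    return (size + 2 * padding - kernel) // stride + 1

print(conv_out(19, stride=2, padding=1))  # 10 (conv8_2)
print(conv_out(10, stride=2, padding=1))  # 5  (conv9_2)
print(conv_out(5))                        # 3  (conv10_2)
print(conv_out(3))                        # 1  (conv11_2)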

Choosing the Default Bounding Boxes

This may sound intimidating, but don't worry, it is still easy to grasp. The default bounding boxes are chosen manually. Each feature map layer is assigned a scale value. For example, conv4_3 detects objects at the smallest scale of 0.2 (sometimes 0.1), and the scale then increases linearly up to 0.9 for conv11_2 (obtained from the auxiliary convolutions). In addition, we consider a fixed number of prior boxes at every position of each feature map. For layers making 4 predictions per position, SSD uses the aspect ratios 1, 2, and 0.5, plus one extra box for aspect ratio 1 whose scale is sqrt(s_k * s_(k+1)), where s_k is the scale value of the k-th feature map. Given a scale s_k and an aspect ratio a_r, the width and height of a default box are then computed as:

w = s_k * sqrt(a_r)
h = s_k / sqrt(a_r)
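As a side note, here is a small sketch (my own illustration) of the paper's linear rule for the per-layer scales, s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1). The code below instead uses slightly hand-tuned values (0.1, 0.2, 0.375, and so on), following the reference implementation:

s_min, s_max, m = 0.2, 0.9, 6
scales = [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]
print([round(s, 3) for s in scales])  # [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]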

Now, let's summarize all of this with the following code.

def create_prior_boxes(self):
    """
    Create the 8732 prior (default) boxes for the SSD300, as defined in the paper.
    :return: prior boxes in center-size coordinates, a tensor of dimensions (8732, 4)
    """
    fmap_dims = {'conv4_3': 38,
                 'conv7': 19,
                 'conv8_2': 10,
                 'conv9_2': 5,
                 'conv10_2': 3,
                 'conv11_2': 1}
    obj_scales = {'conv4_3': 0.1,
                  'conv7': 0.2,
                  'conv8_2': 0.375,
                  'conv9_2': 0.55,
                  'conv10_2': 0.725,
                  'conv11_2': 0.9}
    # Note that we consider four boxes per position in some layers and six in others.
    # For four boxes we drop the {3, 1/3} aspect ratios; otherwise we use all six.
    aspect_ratios = {'conv4_3': [1., 2., 0.5],
                     'conv7': [1., 2., 3., 0.5, .333],
                     'conv8_2': [1., 2., 3., 0.5, .333],
                     'conv9_2': [1., 2., 3., 0.5, .333],
                     'conv10_2': [1., 2., 0.5],
                     'conv11_2': [1., 2., 0.5]}
    fmaps = list(fmap_dims.keys())
    prior_boxes = []
    self.prior_boxes_info = []
    for k, fmap in enumerate(fmaps):
        for i in range(fmap_dims[fmap]):
            for j in range(fmap_dims[fmap]):
                cx = (j + 0.5) / fmap_dims[fmap]
                cy = (i + 0.5) / fmap_dims[fmap]
                for ratio in aspect_ratios[fmap]:
                    prior_boxes.append([cx, cy, obj_scales[fmap] * sqrt(ratio), obj_scales[fmap] / sqrt(ratio)])
                    self.prior_boxes_info.append([fmap, i, j, ratio])
                    # For an aspect ratio of 1, use an additional prior whose scale is the geometric mean of the
                    # scale of the current feature map and the scale of the next feature map
                    if ratio == 1.:
                        try:
                            additional_scale = sqrt(obj_scales[fmap] * obj_scales[fmaps[k + 1]])
                        # For the last feature map, there is no "next" feature map
                        except IndexError:
                            additional_scale = 1.
                        prior_boxes.append([cx, cy, additional_scale, additional_scale])
                        self.prior_boxes_info.append([fmap, i, j, ratio])
    prior_boxes = torch.FloatTensor(prior_boxes).to(self.device)  # (8732, 4)
    prior_boxes.clamp_(0, 1)  # (8732, 4)
    return prior_boxes

It returns 8732 prior boxes, one for each of the 8732 predictions that SSD makes.
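You can double-check this count yourself (my own arithmetic, using the dimensions and box counts defined above):

fmap_dims = {'conv4_3': 38, 'conv7': 19, 'conv8_2': 10, 'conv9_2': 5, 'conv10_2': 3, 'conv11_2': 1}
n_boxes = {'conv4_3': 4, 'conv7': 6, 'conv8_2': 6, 'conv9_2': 6, 'conv10_2': 4, 'conv11_2': 4}
print(sum(fmap_dims[f] ** 2 * n_boxes[f] for f in fmap_dims))
# 5776 + 2166 + 600 + 150 + 36 + 4 = 8732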

Prediction Convolutions

class PredictionConvolutions(nn.Module):
    """
    Convolutions to predict class scores and bounding boxes using lower and higher-level feature maps.
    The bounding boxes are predicted as encoded offsets w.r.t each of the 8732 anchor boxes.
    See 'cxcy_to_gcxgcy' in utils.py for the encoding definition.
    The class scores represent the scores of each object class in each of the 8732 bounding boxes located.
    A high score for 'background' = no object.
    """
    def __init__(self, n_classes):
        """
        :param n_classes: number of different types of objects
        """
        super(PredictionConvolutions, self).__init__()
        self.n_classes = n_classes
        # Number of prior-boxes we are considering per position in each feature map
        n_boxes = {'conv4_3': 4,
                   'conv7': 6,
                   'conv8_2': 6,
                   'conv9_2': 6,
                   'conv10_2': 4,
                   'conv11_2': 4}
        # 4 prior-boxes implies we use 4 different aspect ratios, etc.
        # Localization prediction convolutions (predict offsets w.r.t prior-boxes)
        self.loc_conv4_3 = nn.Conv2d(512, n_boxes['conv4_3'] * 4, kernel_size=3, padding=1)
        self.loc_conv7 = nn.Conv2d(1024, n_boxes['conv7'] * 4, kernel_size=3, padding=1)
        self.loc_conv8_2 = nn.Conv2d(512, n_boxes['conv8_2'] * 4, kernel_size=3, padding=1)
        self.loc_conv9_2 = nn.Conv2d(256, n_boxes['conv9_2'] * 4, kernel_size=3, padding=1)
        self.loc_conv10_2 = nn.Conv2d(256, n_boxes['conv10_2'] * 4, kernel_size=3, padding=1)
        self.loc_conv11_2 = nn.Conv2d(256, n_boxes['conv11_2'] * 4, kernel_size=3, padding=1)
        # Class prediction convolutions (predict classes in localization boxes)
        self.cl_conv4_3 = nn.Conv2d(512, n_boxes['conv4_3'] * n_classes, kernel_size=3, padding=1)
        self.cl_conv7 = nn.Conv2d(1024, n_boxes['conv7'] * n_classes, kernel_size=3, padding=1)
        self.cl_conv8_2 = nn.Conv2d(512, n_boxes['conv8_2'] * n_classes, kernel_size=3, padding=1)
        self.cl_conv9_2 = nn.Conv2d(256, n_boxes['conv9_2'] * n_classes, kernel_size=3, padding=1)
        self.cl_conv10_2 = nn.Conv2d(256, n_boxes['conv10_2'] * n_classes, kernel_size=3, padding=1)
        self.cl_conv11_2 = nn.Conv2d(256, n_boxes['conv11_2'] * n_classes, kernel_size=3, padding=1)
        # Initialize convolutions' parameters
        self.init_conv2d()

    def init_conv2d(self):
        """
        Initialize convolution parameters using xavier initialization.
        """
        for c in self.children():
            if isinstance(c, nn.Conv2d):
                nn.init.xavier_uniform_(c.weight)
                nn.init.constant_(c.bias, 0.)

    def forward(self, conv4_3_feats, conv7_feats, conv8_2_feats, conv9_2_feats, conv10_2_feats, conv11_2_feats):
        batch_size = conv4_3_feats.size(0)
        # Predict boxes
        l_conv4_3 = self.loc_conv4_3(conv4_3_feats)  # (N, 16, 38, 38)
        l_conv4_3 = l_conv4_3.permute(0, 2, 3, 1).contiguous()  # (N, 38, 38, 16)
        l_conv4_3 = l_conv4_3.view(batch_size, -1, 4)  # (N, 5776, 4), there are a total 5776 boxes on this feature map
        l_conv7 = self.loc_conv7(conv7_feats)  # (N, 24, 19, 19)
        l_conv7 = l_conv7.permute(0, 2, 3, 1).contiguous()  # (N, 19, 19, 24)
        l_conv7 = l_conv7.view(batch_size, -1, 4)  # (N, 2166, 4)
        l_conv8_2 = self.loc_conv8_2(conv8_2_feats)  # (N, 24, 10, 10)
        l_conv8_2 = l_conv8_2.permute(0, 2, 3, 1).contiguous()  # (N, 10, 10, 24)
        l_conv8_2 = l_conv8_2.view(batch_size, -1, 4)  # (N, 600, 4)
        l_conv9_2 = self.loc_conv9_2(conv9_2_feats)  # (N, 24, 5, 5)
        l_conv9_2 = l_conv9_2.permute(0, 2, 3, 1).contiguous()  # (N, 5, 5, 24)
        l_conv9_2 = l_conv9_2.view(batch_size, -1, 4)  # (N, 150, 4)
        l_conv10_2 = self.loc_conv10_2(conv10_2_feats)  # (N, 16, 3, 3)
        l_conv10_2 = l_conv10_2.permute(0, 2, 3, 1).contiguous()  # (N, 3, 3, 16)
        l_conv10_2 = l_conv10_2.view(batch_size, -1, 4)  # (N, 36, 4)
        l_conv11_2 = self.loc_conv11_2(conv11_2_feats)  # (N, 16, 1, 1)
        l_conv11_2 = l_conv11_2.permute(0, 2, 3, 1).contiguous()  # (N, 1, 1, 16)
        l_conv11_2 = l_conv11_2.view(batch_size, -1, 4)  # (N, 4, 4)
        # Predict classes
        c_conv4_3 = self.cl_conv4_3(conv4_3_feats)  # (N, 4 * n_classes, 38, 38)
        c_conv4_3 = c_conv4_3.permute(0, 2, 3, 1).contiguous()  # (N, 38, 38, 4 * n_classes)
        c_conv4_3 = c_conv4_3.view(batch_size, -1, self.n_classes)  # (N, 5776, n_classes), there are a total 5776 boxes on this feature map
        c_conv7 = self.cl_conv7(conv7_feats)  # (N, 6 * n_classes, 19, 19)
        c_conv7 = c_conv7.permute(0, 2, 3, 1).contiguous()  # (N, 19, 19, 6 * n_classes)
        c_conv7 = c_conv7.view(batch_size, -1, self.n_classes)  # (N, 2166, n_classes)
        c_conv8_2 = self.cl_conv8_2(conv8_2_feats)  # (N, 6 * n_classes, 10, 10)
        c_conv8_2 = c_conv8_2.permute(0, 2, 3, 1).contiguous()  # (N, 10, 10, 6 * n_classes)
        c_conv8_2 = c_conv8_2.view(batch_size, -1, self.n_classes)  # (N, 600, n_classes)
        c_conv9_2 = self.cl_conv9_2(conv9_2_feats)  # (N, 6 * n_classes, 5, 5)
        c_conv9_2 = c_conv9_2.permute(0, 2, 3, 1).contiguous()  # (N, 5, 5, 6 * n_classes)
        c_conv9_2 = c_conv9_2.view(batch_size, -1, self.n_classes)  # (N, 150, n_classes)
        c_conv10_2 = self.cl_conv10_2(conv10_2_feats)  # (N, 4 * n_classes, 3, 3)
        c_conv10_2 = c_conv10_2.permute(0, 2, 3, 1).contiguous()  # (N, 3, 3, 4 * n_classes)
        c_conv10_2 = c_conv10_2.view(batch_size, -1, self.n_classes)  # (N, 36, n_classes)
        c_conv11_2 = self.cl_conv11_2(conv11_2_feats)  # (N, 4 * n_classes, 1, 1)
        c_conv11_2 = c_conv11_2.permute(0, 2, 3, 1).contiguous()  # (N, 1, 1, 4 * n_classes)
        c_conv11_2 = c_conv11_2.view(batch_size, -1, self.n_classes)  # (N, 4, n_classes)
        # A total of 8732 boxes
        # Concatenate in this specific order
        locs = torch.cat([l_conv4_3, l_conv7, l_conv8_2, l_conv9_2, l_conv10_2, l_conv11_2], dim=1)  # (N, 8732, 4)
        classes_scores = torch.cat([c_conv4_3, c_conv7, c_conv8_2, c_conv9_2, c_conv10_2, c_conv11_2], dim=1)  # (N, 8732, n_classes)
        return locs, classes_scores

This may look complicated, but it essentially takes all of the feature maps we obtained from the base VGG-16 and the auxiliary convolutions, and applies convolutional layers to predict the classes and bounding boxes for each feature map.
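The permute-then-view pattern is the only subtle part, so here is an isolated sketch of it (my own illustration): each location's channels hold n_boxes * 4 offsets, which we unpack into one row per box.

x = torch.randn(2, 16, 38, 38)          # e.g. raw l_conv4_3 output: (N, 4 boxes * 4 offsets, 38, 38)
x = x.permute(0, 2, 3, 1).contiguous()  # (N, 38, 38, 16): move channels last
x = x.view(2, -1, 4)                    # (N, 5776, 4): 38 * 38 * 4 boxes, 4 offsets each
print(x.shape)                          # torch.Size([2, 5776, 4])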

Putting It All Together

Now let's put everything together and look at the final architecture, shown below.

class SSD300(nn.Module):
    """
    The SSD300 network - encapsulates the base VGG network, auxiliary, and prediction convolutions.
    """
    def __init__(self, n_classes, device):
        super(SSD300, self).__init__()
        self.n_classes = n_classes
        self.device = device
        self.base = VGGBase()
        self.aux_convs = AuxiliaryConvolutions()
        self.pred_convs = PredictionConvolutions(n_classes)
        # Since lower level features (conv4_3_feats) have considerably larger scales, we take the L2 norm and rescale
        # Rescale factor is initially set at 20, but is learned for each channel during back-prop
        self.rescale_factors = nn.Parameter(torch.FloatTensor(1, 512, 1, 1))  # there are 512 channels in conv4_3_feats
        nn.init.constant_(self.rescale_factors, 20)
        # Prior boxes
        self.priors_cxcy = self.create_prior_boxes()
        self.to(device)

    def forward(self, image):
        """
        Forward propagation.
        :param image: images, a tensor of dimensions (N, 3, 300, 300)
        :return: 8732 locations and class scores (i.e. w.r.t each prior box) for each image
        """
        # Run VGG base network convolutions
        conv4_3_feats, conv7_feats = self.base(image)  # (N, 512, 38, 38), (N, 1024, 19, 19)
        # Rescale conv4_3 after L2 norm
        norm = conv4_3_feats.pow(2).sum(dim=1, keepdim=True).sqrt()  # (N, 1, 38, 38)
        conv4_3_feats = conv4_3_feats / norm  # (N, 512, 38, 38)
        conv4_3_feats = conv4_3_feats * self.rescale_factors  # (N, 512, 38, 38)
        # Run auxiliary convolutions
        # (N, 512, 10, 10), (N, 256, 5, 5), (N, 256, 3, 3), (N, 256, 1, 1)
        conv8_2_feats, conv9_2_feats, conv10_2_feats, conv11_2_feats = self.aux_convs(conv7_feats)
        # Run prediction convolutions
        # (N, 8732, 4), (N, 8732, n_classes)
        locs, classes_scores = self.pred_convs(conv4_3_feats, conv7_feats, conv8_2_feats, conv9_2_feats, conv10_2_feats, conv11_2_feats)
        return locs, classes_scores

Note that the lower-level features (conv4_3_feats) have considerably larger activation scales, so we take the L2 norm and rescale them. The rescaling factor is initially set to 20, but it is learned for each channel during backpropagation.
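Finally, a hypothetical end-to-end check (again assuming load_pretrained_layers is available) confirms the output shapes for, say, 21 classes (20 PASCAL VOC classes plus background):

model = SSD300(n_classes=21, device=torch.device('cpu'))
locs, classes_scores = model(torch.randn(1, 3, 300, 300))
print(locs.shape)            # torch.Size([1, 8732, 4])
print(classes_scores.shape)  # torch.Size([1, 8732, 21])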

Loss Function

The overall objective is a weighted sum of the confidence and localization losses, L(x, c, l, g) = (1/N) * (L_conf(x, c) + alpha * L_loc(x, l, g)), where N is the number of matched default boxes. The localization loss is the Smooth L1 loss, while the classification (confidence) loss is the well-known cross-entropy loss.

Matching Strategy

During training, we need to determine which of the generated prior boxes should correspond to the ground-truth boxes we include in the loss computation. So we match each ground-truth box to the prior box with the highest Jaccard overlap (IoU). In addition, we also select any prior box whose overlap with a ground-truth box is at least 0.5, which allows the network to predict high scores for multiple overlapping boxes.

After the matching step, most of the prior/default boxes serve as negative examples. However, to avoid an imbalance between positive and negative samples, we keep a ratio of at most 3:1 via hard negative mining, since this leads to faster optimization and more stable learning. Once again, the localization loss is computed only over the positive (non-background) priors. A sketch of the resulting loss follows.
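To tie the loss and matching pieces together, here is a minimal sketch of the MultiBox loss (my own simplification, assuming the ground truth has already been matched and encoded per prior, with class 0 as background; the real implementation also handles edge cases such as batches with no positive priors):

class MultiBoxLoss(nn.Module):
    def __init__(self, neg_pos_ratio=3, alpha=1.):
        super(MultiBoxLoss, self).__init__()
        self.neg_pos_ratio = neg_pos_ratio
        self.alpha = alpha
        self.smooth_l1 = nn.SmoothL1Loss()
        self.cross_entropy = nn.CrossEntropyLoss(reduction='none')

    def forward(self, pred_locs, pred_scores, true_locs, true_classes):
        # pred_locs: (N, 8732, 4), pred_scores: (N, 8732, n_classes)
        # true_locs: (N, 8732, 4) encoded offsets, true_classes: (N, 8732)
        positives = true_classes != 0  # (N, 8732); background is class 0
        n_positives = positives.sum(dim=1, keepdim=True)  # (N, 1)
        # Localization loss: only over positive (non-background) priors
        loc_loss = self.smooth_l1(pred_locs[positives], true_locs[positives])
        # Confidence loss over all priors
        n_classes = pred_scores.size(2)
        conf_all = self.cross_entropy(pred_scores.view(-1, n_classes),
                                      true_classes.view(-1)).view_as(true_classes)  # (N, 8732)
        conf_pos = conf_all[positives].sum()
        # Hard negative mining: keep only the hardest negatives, at a 3:1 neg:pos ratio
        conf_neg = conf_all.clone()
        conf_neg[positives] = 0.  # exclude positives from the ranking
        conf_neg, _ = conf_neg.sort(dim=1, descending=True)  # hardest negatives first
        ranks = torch.arange(conf_neg.size(1), device=conf_neg.device).unsqueeze(0)  # (1, 8732)
        conf_hard_neg = conf_neg[ranks < self.neg_pos_ratio * n_positives].sum()
        conf_loss = (conf_pos + conf_hard_neg) / n_positives.sum().float()
        return conf_loss + self.alpha * loc_loss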

Summary

I hope I managed to make SSD easy to understand and grasp. I tried to use code so that you can visualize the process. Take your time to understand it, and it is even better if you try to use it yourself. Next time, I will write about the YOLO family of object detectors.

Author: Chingis Oinar

Original article: https://medium.com/mlearning-ai/object-detection-explained-single-shot-multibox-detector-c45e6a7af40
