

Training DETR on Your Own Dataset


Important Reference Links

  • Video tutorial, a walkthrough series on DETR: DETR源码讲解:训练自己的数据集 (the presenter explains things very clearly; she also has another video on Deformable DETR: Deformable Detr 论文思想讲解(一听就会)). She wrote her own prediction script but has not released it yet; I found a shared predict.py in this blog, which is well worth reading: windows10复现DEtection TRansformers(DETR)并实现自己的数据集 (that post is genuinely detailed; the presenter in the video was probably working from it, and the prediction code below most likely comes from there too)
  • Video tutorial, from 跟着李沐学AI's paper-reading series: DETR 论文精读【论文精读】 (there are three related Bilibili note write-ups worth checking out)
  • The attention mechanism here is well worth studying; it closely resembles the biological mechanism: 详解可变形注意力模块(Deformable Attention Module)

Key Points

  • The label format is a COCO-style json file; for now you can refer to this VOC-to-COCO conversion script: VOC格式数据集转为COCO格式数据集脚本. The two files must be named instances_train2017.json and instances_val2017.json (see the directory sketch after this list)
  • DETR is unfriendly to small objects, though it detects large objects well
  • DETR did not beat the SOTA of its day on accuracy; it is so well liked because the idea in the paper is elegant and it truly achieves end-to-end detection
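
For reference, a minimal sketch of the directory layout DETR's COCO loader expects (data/coco is simply whatever you pass as --coco_path; the train2017/val2017/annotations names follow the official repo's convention):

```
data/coco/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/    # training images
└── val2017/      # validation images
```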

References for training on your own data

  • Video tutorial, the DETR walkthrough series: DETR源码讲解:训练自己的数据集
  • 【DETR】训练自己的数据集-实践笔记
  • DETR训练自己的数据集
  • windows10复现DEtection TRansformers(DETR)并实现自己的数据集

Parked for Later

  • Object detection algorithm: Cascade RCNN | video walkthrough

Drawbacks

  1. DETR needs many epochs to converge
  2. Performance on small objects is poor
  3. Increasing the input scale or using multi-scale features increases the computational cost
  4. The attention module is rather sparse, which makes convergence slow

Step 1: Modify the weight file

  • First download the detr-r50-e632da11.pth weights, available here 👉 https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth
  • Then run the code below. About num_class: if the largest category id in your json file is 90, num_class should be set to 90 + 1. You can find the maximum by pressing Ctrl+F in the json file and jumping to the last supercategory to read its id (or compute it with the snippet after the code below). The image below, from video 1, shows that the largest category id in the COCO dataset is 90

[Image: locating the largest category id (90) in the COCO annotation json]


```python
import torch

# download: https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth
pretrained_weights = torch.load('./detr-r50-e632da11.pth')

num_classes = 5  # largest category id in your json + 1
pretrained_weights['model']['class_embed.weight'].resize_(num_classes + 1, 256)
pretrained_weights['model']['class_embed.bias'].resize_(num_classes + 1)
torch.save(pretrained_weights, 'detr-r50_%d.pth' % num_classes)
```
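
If you would rather not hunt through the json by hand, here is a small sketch (my addition; point the path at your own annotation file) that computes the largest category id directly:

```python
import json

with open('instances_train2017.json') as f:  # adjust the path
    coco = json.load(f)

max_id = max(c['id'] for c in coco['categories'])
print('largest category id:', max_id, '-> num_class =', max_id + 1)
```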

Step 2: Organize the dataset into COCO format

The code is adapted from: windows10复现DEtection TRansformers(DETR)并实现自己的数据集

Below is my slightly modified version of that code; it only handles converting the XML annotations to json and does not move the images:

```python
# coding: utf-8
# reference: https://blog.csdn.net/w1520039381/article/details/118905718
# pip install lxml
import os
import glob
import json
import numpy as np
import xml.etree.ElementTree as ET

path2 = "C:/Users/Desktop/VOC2007"  # only used by the (omitted) image-copying block
START_BOUNDING_BOX_ID = 1


def get(root, name):
    return root.findall(name)


def get_and_check(root, name, length):
    vars = root.findall(name)
    if len(vars) == 0:
        raise NotImplementedError('Can not find %s in %s.' % (name, root.tag))
    if length > 0 and len(vars) != length:
        raise NotImplementedError('The size of %s is supposed to be %d, but is %d.'
                                  % (name, length, len(vars)))
    if length == 1:
        vars = vars[0]
    return vars


def convert(xml_list, json_file):
    json_dict = {"images": [], "type": "instances", "annotations": [], "categories": []}
    categories = pre_define_categories.copy()
    bnd_id = START_BOUNDING_BOX_ID
    all_categories = {}
    for index, line in enumerate(xml_list):
        # print("Processing %s" % (line))
        xml_f = line
        tree = ET.parse(xml_f)
        root = tree.getroot()
        filename = os.path.basename(xml_f)[:-4] + ".jpg"
        image_id = 20190000001 + index
        size = get_and_check(root, 'size', 1)
        width = int(get_and_check(size, 'width', 1).text)
        height = int(get_and_check(size, 'height', 1).text)
        image = {'file_name': filename, 'height': height, 'width': width, 'id': image_id}
        json_dict['images'].append(image)
        # Currently we do not support segmentation
        # segmented = get_and_check(root, 'segmented', 1).text
        # assert segmented == '0'
        for obj in get(root, 'object'):
            category = get_and_check(obj, 'name', 1).text
            if category in all_categories:  # count instances per category
                all_categories[category] += 1
            else:
                all_categories[category] = 1
            if category not in categories:
                if only_care_pre_define_categories:
                    # only keep the pre-defined categories; skip anything else
                    continue
                new_id = len(categories) + 1
                print("[warning] category '{}' not in 'pre_define_categories'({}), "
                      "create new id: {} automatically".format(category, pre_define_categories, new_id))
                categories[category] = new_id
            category_id = categories[category]
            bndbox = get_and_check(obj, 'bndbox', 1)
            xmin = int(float(get_and_check(bndbox, 'xmin', 1).text))
            ymin = int(float(get_and_check(bndbox, 'ymin', 1).text))
            xmax = int(float(get_and_check(bndbox, 'xmax', 1).text))
            ymax = int(float(get_and_check(bndbox, 'ymax', 1).text))
            assert xmax > xmin, "xmax <= xmin, {}".format(line)
            assert ymax > ymin, "ymax <= ymin, {}".format(line)
            o_width = abs(xmax - xmin)
            o_height = abs(ymax - ymin)
            ann = {'area': o_width * o_height, 'iscrowd': 0, 'image_id': image_id,
                   'bbox': [xmin, ymin, o_width, o_height], 'category_id': category_id,
                   'id': bnd_id, 'ignore': 0, 'segmentation': []}
            json_dict['annotations'].append(ann)
            bnd_id = bnd_id + 1
    for cate, cid in categories.items():
        cat = {'supercategory': 'none', 'id': cid, 'name': cate}
        json_dict['categories'].append(cat)
    json_fp = open(json_file, 'w')
    json_fp.write(json.dumps(json_dict))
    json_fp.close()
    print("------------create {} done--------------".format(json_file))
    print("find {} categories: {} -->>> your pre_define_categories {}: {}".format(
        len(all_categories), all_categories.keys(),
        len(pre_define_categories), pre_define_categories.keys()))
    print("category: id --> {}".format(categories))
    print(categories.keys())
    print(categories.values())


if __name__ == '__main__':
    classes = ['D00', 'D10', 'D20', 'D40']
    pre_define_categories = {}
    for i, cls in enumerate(classes):
        pre_define_categories[cls] = i + 1
    # pre_define_categories = {'a1': 1, 'a3': 2, 'a6': 3, 'a9': 4, "a10": 5}
    only_care_pre_define_categories = True
    # only_care_pre_define_categories = False

    save_json_train = 'instances_train2017.json'
    save_json_val = 'instances_val2017.json'
    xml_dir = r"F:\A_Publicdatasets\RDD2022_released_through_CRDDC2022\RDD2022\A_unitedataset\annotations"
    xml_list_train = glob.glob(xml_dir + "/train/*.xml")
    xml_list_val = glob.glob(xml_dir + "/val/*.xml")
    # alternative: pool all xml files and split randomly by a ratio
    # train_ratio = 0.9
    # xml_list = np.sort(xml_list)
    # np.random.seed(100)
    # np.random.shuffle(xml_list)
    # train_num = int(len(xml_list) * train_ratio)
    # xml_list_train = xml_list[:train_num]
    # xml_list_val = xml_list[train_num:]
    convert(xml_list_train, os.path.join(xml_dir, save_json_train))
    convert(xml_list_val, os.path.join(xml_dir, save_json_val))
    # (the original script ends with a commented-out block that copies images into
    # train/val folders and writes train.txt/test.txt; omitted here since, as noted
    # above, this version does not move images)
```
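
As a quick sanity check on the generated files, a small sketch (my addition; assumes pycocotools is installed, and adjust the path):

```python
from pycocotools.coco import COCO

coco = COCO('instances_train2017.json')  # adjust the path
print('images:', len(coco.imgs), ' annotations:', len(coco.anns))
print('categories:', [(c['id'], c['name']) for c in coco.loadCats(coco.getCatIds())])
```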

Step 3: Modify detr.py

[Image: the change to make in detr.py]
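
The screenshot is not reproduced here, but the edit it shows is the usual one: inside the build() function of models/detr.py, the official repo hard-codes num_classes for COCO, and you replace it with your own value from Step 1. A sketch of the edit (the surrounding lines may differ slightly depending on the commit you checked out):

```python
# models/detr.py, inside build(args) -- the official repo has:
#   num_classes = 20 if args.dataset_file != 'coco' else 91
# change the COCO branch to your value from Step 1, e.g. for 4 classes
# with category ids 1..4:
num_classes = 20 if args.dataset_file != 'coco' else 5
```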

Step 4: Set training arguments in the terminal and train

Note: if you are running on Windows, num_workers should be set to 0.

```bash
python main.py --dataset_file "coco" --coco_path data/coco --epochs 100 --lr=1e-4 --batch_size=2 --num_workers=4 --output_dir="outputs" --resume="detr-r50_3.pth"
```
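
A side note (my addition, based on the official DETR README rather than this blog): if you want the COCO mAP numbers that the Step 5 script does not print, main.py can evaluate a trained checkpoint directly, along the lines of:

```bash
python main.py --batch_size 2 --no_aux_loss --eval --resume outputs/checkpoint.pth --coco_path data/coco
```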

Step 5: Check the detection results (note: this does not print the mAP metrics)

⭐ From the blog: windows10复现DEtection TRansformers(DETR)并实现自己的数据集

The places you need to change are:

  1. Around line 102, in model = detr_resnet50(False, 5), change the 5 to the num_class from Step 1 of this post; otherwise loading the weights fails with a channel-mismatch error
  2. Around line 103, point state_dict = torch.load at the path of your trained checkpoint.pth
  3. Around line 108, point im = Image.open at the image you want to detect (note: it currently handles only a single image and does not save the result; you need to modify the code yourself, see the sketch after the script)
  4. Around line 20, fill the CLASSES array with your own class names, in order
  5. Around line 93, the 0.7 in keep = probas.max(-1).values > 0.7 can be tuned up or down; it acts as a confidence threshold, so the higher the value, the fewer boxes are shown
```python
import math
from PIL import Image
import requests
import matplotlib.pyplot as plt
# import ipywidgets as widgets
# from IPython.display import display, clear_output
import torch
from torch import nn
from torchvision.models import resnet50
import torchvision.transforms as T
from hubconf import *
from util.misc import nested_tensor_from_tensor_list

torch.set_grad_enabled(False)

# COCO classes
CLASSES = ['D00', 'D10', 'D20', 'D40']

# colors for visualization
COLORS = [[0.000, 0.447, 0.741], [0.850, 0.325, 0.098]]

# standard PyTorch mean-std input image normalization
transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])


# for output bounding box post-processing
def box_cxcywh_to_xyxy(x):
    x_c, y_c, w, h = x.unbind(1)
    b = [(x_c - 0.5 * w), (y_c - 0.5 * h),
         (x_c + 0.5 * w), (y_c + 0.5 * h)]
    return torch.stack(b, dim=1)


def rescale_bboxes(out_bbox, size):
    img_w, img_h = size
    b = box_cxcywh_to_xyxy(out_bbox)
    b = b * torch.tensor([img_w, img_h, img_w, img_h], dtype=torch.float32)
    return b


def plot_results(pil_img, prob, boxes):
    plt.figure(figsize=(16, 10))
    plt.imshow(pil_img)
    ax = plt.gca()
    colors = COLORS * 100
    for p, (xmin, ymin, xmax, ymax), c in zip(prob, boxes.tolist(), colors):
        ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                   fill=False, color=c, linewidth=3))
        cl = p.argmax()
        text = f'{CLASSES[cl]}: {p[cl]:0.2f}'
        ax.text(xmin, ymin, text, fontsize=15,
                bbox=dict(facecolor='yellow', alpha=0.5))
    plt.axis('off')
    plt.show()


def detect(im, model, transform):
    # mean-std normalize the input image (batch-size: 1)
    img = transform(im).unsqueeze(0)
    # propagate through the model
    outputs = model(img)
    # keep essentially every prediction (the threshold is close to zero)
    probas = outputs['pred_logits'].softmax(-1)[0, :, :-1]
    keep = probas.max(-1).values > 0.00001
    # convert boxes from [0; 1] to image scales
    bboxes_scaled = rescale_bboxes(outputs['pred_boxes'][0, keep], im.size)
    return probas[keep], bboxes_scaled


def predict(im, model, transform):
    # mean-std normalize the input image (batch-size: 1)
    anImg = transform(im)
    data = nested_tensor_from_tensor_list([anImg])
    # propagate through the model
    outputs = model(data)
    # keep only predictions with 0.7+ confidence (0.7 acts as the confidence threshold)
    probas = outputs['pred_logits'].softmax(-1)[0, :, :-1]
    keep = probas.max(-1).values > 0.7
    # print(probas[keep])
    # convert boxes from [0; 1] to image scales
    bboxes_scaled = rescale_bboxes(outputs['pred_boxes'][0, keep], im.size)
    return probas[keep], bboxes_scaled


if __name__ == "__main__":
    model = detr_resnet50(False, 5)  # same value as num_classes in Step 1, i.e. largest category id + 1
    state_dict = torch.load(r"G:\pycharmprojects\detr-main\output\checkpoint.pth", map_location='cpu')
    model.load_state_dict(state_dict["model"])
    model.eval()

    # im = Image.open('data/coco/train2017/001554.jpg')
    im = Image.open(r'F:\A_Publicdatasets\RDD2022_released_through_CRDDC2022\RDD2022\A_unitedataset\images\val\China_Drone_000038.jpg')
    scores, boxes = predict(im, model, transform)
    plot_results(im, scores, boxes)
```
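
Since the script only handles a single image and does not save anything, here is a rough sketch (my addition, not from the blog) of looping over a folder and saving the visualizations instead; it assumes plt.show() has been removed from plot_results so the figure stays open for saving, and the folder paths are hypothetical:

```python
import os
from PIL import Image
import matplotlib.pyplot as plt

img_dir = r'path/to/val/images'  # hypothetical input folder
out_dir = 'pred_vis'             # hypothetical output folder
os.makedirs(out_dir, exist_ok=True)

for name in sorted(os.listdir(img_dir)):
    if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue
    im = Image.open(os.path.join(img_dir, name)).convert('RGB')
    scores, boxes = predict(im, model, transform)
    plot_results(im, scores, boxes)           # draws onto a new figure
    plt.savefig(os.path.join(out_dir, name))  # save instead of show
    plt.close()
```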

This article is reposted from: https://blog.csdn.net/LWD19981223/article/details/129784674
Copyright belongs to the original author, 孟孟单单. In case of infringement, please contact us for removal.
