

【Datawhale AI Summer Camp 2024 -- CV】Task1: Baseline Walkthrough and Experiments

I. Competition Task

    Given a set of surveillance videos, detect urban-management violations. The violations mainly include overflowing trash bins, illegally parked motor vehicles, illegally parked non-motor vehicles, and so on. Contestants must **analyze the video and flag the violations**, **providing** the **time** and **location** at which each violation occurs.

II. Dataset

1. Inspecting the dataset

The data comes in three parts: the test set, the unlabeled training set, and the labeled training set (first batch).

Both the test and training videos are MP4 files; the annotations are JSON files.

Each JSON file lists the violations detected in each frame of the corresponding video:

  • frame_id: the number of the frame in which the violation appears
  • event_id: the ID of the violation event
  • category: the violation category
  • bbox: coordinates of the detected violation's bounding box, in [xmin, ymin, xmax, ymax] form

[
    { "frame_id": 20, "event_id": 1, "category": "机动车违停", "bbox": [200, 300, 280, 400] },
    { "frame_id": 20, "event_id": 2, "category": "机动车违停", "bbox": [600, 500, 720, 560] },
    { "frame_id": 30, "event_id": 3, "category": "垃圾桶满溢", "bbox": [400, 500, 600, 660] }
]

2. Analyzing the dataset

import cv2

video_path = '训练集(有标注第一批)/视频/45.mp4'
cap = cv2.VideoCapture(video_path)  # open the video
while True:
    ret, frame = cap.read()  # ret: whether a frame was read; frame: the frame itself
    if not ret:
        break
    break  # keep only the first frame
frame.shape  # inspect the frame's dimensions

frame.shape == (1080, 1920, 3), i.e. (height, width, channels) in HWC layout

import json

train_anno = json.load(open('训练集(有标注第一批)/标注/45.json', encoding='utf-8'))
train_anno[0], len(train_anno)

({'frame_id': 0,
  'event_id': 1,
  'category': '机动车违停',
  'bbox': [680, 797, 1448, 1078]},
 9000)  # grab a bbox so we can draw it below

import matplotlib.pyplot as plt

bbox = [746, 494, 988, 786]

pt1 = (bbox[0], bbox[1])  # top-left corner
pt2 = (bbox[2], bbox[3])  # bottom-right corner

color = (0, 255, 0)  # green, in BGR
thickness = 2  # line thickness

cv2.rectangle(frame, pt1, pt2, color, thickness)

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR, matplotlib expects RGB
plt.imshow(frame)

# close any OpenCV windows
cv2.destroyAllWindows()

# release the video capture
cap.release()

Here OpenCV and matplotlib are used to draw an annotated bounding box on the frame and display it.
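
One more thing worth checking while analyzing the data: how len(train_anno) compares to the video's frame count. A minimal sketch (the two need not be equal, since one frame can contain several events):

cap = cv2.VideoCapture(video_path)
print(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))  # total frames in 45.mp4
print(len(train_anno))                         # total annotation entries (9000)
cap.release()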

3. Converting the dataset

# train_annos, train_videos, and category_labels are defined earlier in the
# baseline: sorted lists of annotation/video paths and the list of category names
import os
import pandas as pd

for anno_path, video_path in zip(train_annos[:5], train_videos[:5]):
    print(video_path)
    anno_df = pd.read_json(anno_path)
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break

        img_height, img_width = frame.shape[:2]

        frame_anno = anno_df[anno_df['frame_id'] == frame_idx]
        # os.path.basename is portable; the original used anno_path.split('\\'),
        # which only works with Windows-style paths
        name = os.path.basename(anno_path)[:-5]  # strip the '.json' suffix
        cv2.imwrite('./yolo-dataset/train/' + name + '_' + str(frame_idx) + '.jpg', frame)

        if len(frame_anno) != 0:
            with open('./yolo-dataset/train/' + name + '_' + str(frame_idx) + '.txt', 'w') as up:
                for category, bbox in zip(frame_anno['category'].values, frame_anno['bbox'].values):
                    category_idx = category_labels.index(category)  # class index

                    # convert [xmin, ymin, xmax, ymax] to YOLO's normalized
                    # (x_center, y_center, width, height)
                    x_min, y_min, x_max, y_max = bbox
                    x_center = (x_min + x_max) / 2 / img_width
                    y_center = (y_min + y_max) / 2 / img_height
                    width = (x_max - x_min) / img_width
                    height = (y_max - y_min) / img_height

                    if x_center > 1:  # sanity check: normalized values must stay in [0, 1]
                        print(bbox)
                    up.write(f'{category_idx} {x_center} {y_center} {width} {height}\n')

        frame_idx += 1

This writes the extracted frames and their YOLO-format labels into the **./yolo-dataset/train/** folder.
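
As a quick sanity check of the normalization, here is the first annotation of 45.json (shown earlier) worked through by hand:

# first annotation of 45.json: bbox [680, 797, 1448, 1078] on a 1920x1080 frame
x_min, y_min, x_max, y_max = 680, 797, 1448, 1078
img_width, img_height = 1920, 1080

print((x_min + x_max) / 2 / img_width)   # 0.5542 -> x_center
print((y_min + y_max) / 2 / img_height)  # 0.8681 -> y_center
print((x_max - x_min) / img_width)       # 0.4    -> width
print((y_max - y_min) / img_height)      # 0.2602 -> height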

# the last three labeled videos are held out as the validation split
for anno_path, video_path in zip(train_annos[-3:], train_videos[-3:]):
    print(video_path)
    anno_df = pd.read_json(anno_path)
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break

        img_height, img_width = frame.shape[:2]

        frame_anno = anno_df[anno_df['frame_id'] == frame_idx]
        name = os.path.basename(anno_path)[:-5]
        cv2.imwrite('./yolo-dataset/val/' + name + '_' + str(frame_idx) + '.jpg', frame)

        if len(frame_anno) != 0:
            with open('./yolo-dataset/val/' + name + '_' + str(frame_idx) + '.txt', 'w') as up:
                for category, bbox in zip(frame_anno['category'].values, frame_anno['bbox'].values):
                    category_idx = category_labels.index(category)

                    x_min, y_min, x_max, y_max = bbox
                    x_center = (x_min + x_max) / 2 / img_width
                    y_center = (y_min + y_max) / 2 / img_height
                    width = (x_max - x_min) / img_width
                    height = (y_max - y_min) / img_height

                    up.write(f'{category_idx} {x_center} {y_center} {width} {height}\n')

        frame_idx += 1

As in the previous snippet, this writes the extracted frames and labels into the **./yolo-dataset/val/** folder.

These two splits are what the YOLO model trains and validates on.
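
A quick way to confirm the conversion produced what we expect (a sketch; note that frames without annotations get a .jpg but no .txt, so the counts need not match):

import glob

# count extracted frames and label files in each split
print(len(glob.glob('./yolo-dataset/train/*.jpg')), len(glob.glob('./yolo-dataset/train/*.txt')))
print(len(glob.glob('./yolo-dataset/val/*.jpg')), len(glob.glob('./yolo-dataset/val/*.txt')))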

III. Model

import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # train on GPU 0

import warnings

warnings.filterwarnings('ignore')

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from the pretrained YOLOv8-nano checkpoint
results = model.train(data="yolo-dataset/yolo.yaml", epochs=2, imgsz=1080, batch=16)

The baseline uses a YOLOv8 model (the yolov8n checkpoint). The data paths are read from "yolo-dataset/yolo.yaml", which points at the two splits built above. epochs is the number of passes over the training data, imgsz=1080 scales the input images to size 1080 (we saw above that each surveillance frame is 1080 pixels tall), and batch is the number of samples per batch. This call trains on the labeled data and produces a set of weights.
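
For reference, yolo-dataset/yolo.yaml is a standard Ultralytics dataset config. A minimal sketch of what it might look like (the class list and its order are assumptions and must match the category_labels used during conversion; only the three categories named in the task are shown):

# yolo-dataset/yolo.yaml (a sketch, not the baseline's exact file)
path: ./yolo-dataset
train: train   # folder containing the training images and labels
val: val       # folder containing the validation images and labels

names:
  0: 非机动车违停
  1: 机动车违停
  2: 垃圾桶满溢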

from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # load the best weights from training
import glob

for path in glob.glob('测试集/*.mp4'):
    submit_json = []
    # note: without stream=True, results for every frame are held in memory at once
    results = model(path, conf=0.05, imgsz=1080, verbose=False)
    for idx, result in enumerate(results):  # idx is the frame index
        boxes = result.boxes  # Boxes object for bounding box outputs
        masks = result.masks  # Masks object for segmentation masks outputs
        keypoints = result.keypoints  # Keypoints object for pose outputs
        probs = result.probs  # Probs object for classification outputs
        obb = result.obb  # Oriented boxes object for OBB outputs

        if len(boxes.cls) == 0:  # no detections in this frame
            continue

        xyxy = boxes.xyxy.data.cpu().numpy().round()  # [xmin, ymin, xmax, ymax] boxes
        cls = boxes.cls.data.cpu().numpy().round()  # class indices
        conf = boxes.conf.data.cpu().numpy()  # confidence scores
        for i, (ci, xy, confi) in enumerate(zip(cls, xyxy, conf)):
            submit_json.append(
                {
                    'frame_id': idx,
                    'event_id': i + 1,  # simply the detection index within the frame
                    'category': category_labels[int(ci)],
                    'bbox': list([int(x) for x in xy]),
                    "confidence": float(confi)
                }
            )

    with open('./result/' + path.split('/')[-1][:-4] + '.json', 'w', encoding='utf-8') as up:
        json.dump(submit_json, up, indent=4, ensure_ascii=False)

The code above loads the weights trained earlier, runs inference on every test video, and writes the resulting detections into one JSON file per video under ./result/.
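
To double-check the submission format, it helps to peek at one of the generated files (a sketch; '1.json' is just an assumed example name):

# inspect one prediction file
preds = json.load(open('./result/1.json', encoding='utf-8'))
print(len(preds), preds[0] if preds else 'no detections')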

To be honest, I still don't know what the following lines are for:

        masks = result.masks  # Masks object for segmentation masks outputs
        keypoints = result.keypoints  # Keypoints object for pose outputs
        probs = result.probs  # Probs object for classification outputs
        obb = result.obb  # Oriented boxes object for OBB outputs
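
(As far as I can tell, for a plain detection checkpoint like yolov8n.pt these attributes are simply None; they are only populated by the segmentation, pose, classification, and oriented-box variants of YOLOv8, so the baseline never actually uses them.)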

IV. Experiments

1. Attempt 1

While reading through the code I noticed that it only trains for two epochs, so I raised the count to 30:

results = model.train(data="yolo-dataset/yolo.yaml", epochs=30, imgsz=1080, batch=16)  # epochs: 2 -> 30

The results were noticeably better.

2. Attempt 2

I then wondered whether I could push the epoch count even higher, but the results were disappointing; the model was probably overfitting.

3. Attempt 3

Then I realized the problem might be that I was using too few videos. Converting the full dataset simply froze my machine, and 30 videos didn't work either; in the end I used 14 videos, again with 30 epochs, and the score was still satisfactory.
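
Concretely, attempt 3 only changes the slices in the conversion loop from section II.3 (a sketch; train_annos and train_videos are the baseline's path lists):

# convert only the first 14 labeled videos; writing every frame of every
# video to disk is what made the full dataset infeasible on my machine
for anno_path, video_path in zip(train_annos[:14], train_videos[:14]):
    ...  # same frame-extraction and label-writing code as in section II.3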

V. Next Steps

    Since I'm still a beginner and can't find a more suitable model, I'll keep working on the data side: first convert the full dataset, then label the unlabeled training set, and train on that. After that I'll try the hints given in Task2 and Task3.
Tags: python yolov8

Reposted from: https://blog.csdn.net/2301_79866457/article/details/141502918
Copyright belongs to the original author 正气侠. If there is any infringement, please contact us for removal.
