

Build a Face Recognition System with 300 Lines of Python


Quite a few new readers have followed me recently, so first of all, thank you. Most of my followers are university students, mostly in the lower years from what I can see, and many are here to finish a course project. As someone who has been through it, I hope you can set aside time to study regularly rather than cramming at the last minute. Today we will build a face recognition system in Python, relying mainly on the dlib library. Since we call a ready-made library to do the recognition, we can skip the data collection and model training steps from earlier tutorials.

Bilibili video: 用300行代码实现人脸识别系统_哔哩哔哩_bilibili

CSDN blog: 用300行Python代码实现一个人脸识别系统_dejahu的博客-CSDN博客

Gitee repo: face_dlib_py37_42: 用300行代码开发一个人脸识别系统-42 (gitee.com)

Precompiled dlib download: 人脸识别系统+windows64位-dlib-19.17.0-cp37-cp37m-win_amd64.zip-深度学习文档类资源-CSDN文库


Note: installing dlib directly with pip may fail with compilation errors; you can get a precompiled dlib build in one of the following ways:

  • Option 1: download it from the paid resource 人脸识别系统+windows64位-dlib-19.17.0-cp37-cp37m-win_amd64.zip-深度学习文档类资源-CSDN文库
  • Option 2: like, coin, and favorite the Bilibili video and leave your email address in the comments: 用300行代码实现人脸识别系统_哔哩哔哩_bilibili
  • Option 3: like, favorite, and comment on the CSDN blog and leave your email address in the comments: 用300行Python代码实现一个人脸识别系统_dejahu的博客-CSDN博客
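Whichever way you obtain the archive, unzip it and install the wheel inside into your environment with pip (run this inside the face environment created below). A minimal sketch, assuming the archive contains a wheel file named after the zip:

pip install dlib-19.17.0-cp37-cp37m-win_amd64.whl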

Basic Principle

Face recognition is different from ordinary object detection. A traditionally trained detection model can only find targets it was trained on: a detector trained on plants obviously cannot detect animals. Face recognition works differently. Take your phone as an example: you enroll your face only once, with no training, and it still recognizes you accurately. The principle is that a face recognition model extracts a feature vector from your face, then compares the face detected in real time against the faces stored in the database; if the similarity exceeds a certain threshold, the comparison is considered a match. Note that this is only a simplified description: the systems in today's phones and door locks are far more complex and secure, and go well beyond simple 2D face recognition.

In summary, the process can be broken down into the following steps:

  1. Upload a face to the database
  2. Detect a face
  3. Compare it against the database and return the result

I have made a simple diagram to help you understand, and right after it you will find a minimal code sketch of the same idea.

[Figure: schematic of the face recognition workflow]
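To make the enroll-and-compare idea concrete, here is a minimal sketch using dlib. This is an illustration, not the project's actual code: the image paths are hypothetical, and the two .dat model files are the standard ones downloadable from dlib.net.

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()                          # face detection
sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")   # 68-point landmarks
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")  # 128-D embedding


def face_descriptor(image_path):
    """Return the 128-D feature vector of the first face in the image, or None."""
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)  # dlib expects RGB
    dets = detector(img, 1)          # upsample once so smaller faces are found
    if len(dets) == 0:
        return None
    shape = sp(img, dets[0])         # align the face using the landmarks
    return np.array(facerec.compute_face_descriptor(img, shape))


# Step 1: enroll a known face into the "database" (here, just a variable)
known = face_descriptor("database/me.jpg")   # hypothetical enrolled image
# Steps 2-3: detect a new face and compare it against the database
probe = face_descriptor("capture.jpg")       # hypothetical captured frame
if known is not None and probe is not None:
    dist = np.linalg.norm(known - probe)     # Euclidean distance between embeddings
    # 0.6 is the threshold conventionally used with dlib's recognition model
    print("match" if dist < 0.6 else "no match", f"(distance={dist:.3f})")

In the real system the enrolled vectors would of course be persisted to disk rather than held in a variable, but the comparison step is exactly this: one distance computation against each stored vector, thresholded.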

Code Implementation

Without further ado, here is our implementation. I have uploaded the code to Gitee; just download it from the link at the top of this post.

If you don't know how to set up a Python environment, see here: 如何在pycharm中配置anaconda的虚拟环境_dejahu的博客-CSDN博客_如何在pycharm中配置anaconda

Create a Virtual Environment

Before creating the virtual environment, please download the Gitee source code from the link at the top of this post.

This project needs a Python 3.7 virtual environment, created with the following commands:

conda create -n face python==3.7.3
conda activate face

Install the Required Libraries

pip install -r requirements.txt

Now enjoy your face recognition!

Just run the main file:

python UI.py

Or run it directly in PyCharm as shown below:

[Figure: running UI.py directly in PyCharm]

First, upload the faces you want to recognize to the database.

[Figure: uploading a face to the database]

Then use the second tab's video detection feature to recognize faces in real time.

[Figure: real-time face recognition from video]

The full code is as follows:

# -*- coding: utf-8 -*-
"""
-------------------------------------------------
Project Name: yolov5-jungong
File Name: window.py
Author: chenming
Create Date: 2021/11/8
Description: GUI that can run detection on camera, video and image files
-------------------------------------------------
"""
# Load the model once when the UI starts; use a tmp directory for intermediate results
import shutil
import PyQt5.QtCore
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
import threading
import argparse
import os
import sys
from pathlib import Path
import cv2
import torch
import torch.backends.cudnn as cudnn
import os.path as osp

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.common import DetectMultiBackend
from utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,
                           increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, time_sync


# Add an "about" page
# Main window class
class MainWindow(QTabWidget):
    # Keep the basic configuration; only the third tab needs changing
    def __init__(self):
        # Initialize the UI
        super().__init__()
        self.setWindowTitle('Target detection system')
        self.resize(1200, 800)
        self.setWindowIcon(QIcon("images/UI/lufei.png"))
        # Image reading state
        self.output_size = 480
        self.img2predict = ""
        self.device = 'cpu'
        # Initialize the video reading thread
        self.vid_source = '0'  # default source is the webcam
        self.stopEvent = threading.Event()
        self.webcam = True
        self.stopEvent.clear()
        self.model = self.model_load(weights="runs/train/exp_yolov5s/weights/best.pt",
                                     device="cpu")  # todo specify the weights location and device here
        self.initUI()
        self.reset_vid()

    '''
    *** Model initialization ***
    '''
    @torch.no_grad()
    def model_load(self,
                   weights="",  # model.pt path(s)
                   device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
                   half=False,  # use FP16 half-precision inference
                   dnn=False,  # use OpenCV DNN for ONNX inference
                   ):
        device = select_device(device)
        half &= device.type != 'cpu'  # half precision only supported on CUDA
        model = DetectMultiBackend(weights, device=device, dnn=dnn)
        stride, names, pt, jit, onnx = model.stride, model.names, model.pt, model.jit, model.onnx
        # Half
        half &= pt and device.type != 'cpu'  # half precision only supported by PyTorch on CUDA
        if pt:
            model.model.half() if half else model.model.float()
        print("模型加载完成!")
        return model

    '''
    *** UI initialization ***
    '''
    def initUI(self):
        # Image detection sub-page
        font_title = QFont('楷体', 16)
        font_main = QFont('楷体', 14)
        # Image recognition page: two buttons, upload image and show result
        img_detection_widget = QWidget()
        img_detection_layout = QVBoxLayout()
        img_detection_title = QLabel("图片识别功能")
        img_detection_title.setFont(font_title)
        mid_img_widget = QWidget()
        mid_img_layout = QHBoxLayout()
        self.left_img = QLabel()
        self.right_img = QLabel()
        self.left_img.setPixmap(QPixmap("images/UI/up.jpeg"))
        self.right_img.setPixmap(QPixmap("images/UI/right.jpeg"))
        self.left_img.setAlignment(Qt.AlignCenter)
        self.right_img.setAlignment(Qt.AlignCenter)
        mid_img_layout.addWidget(self.left_img)
        mid_img_layout.addStretch(0)
        mid_img_layout.addWidget(self.right_img)
        mid_img_widget.setLayout(mid_img_layout)
        up_img_button = QPushButton("上传图片")
        det_img_button = QPushButton("开始检测")
        up_img_button.clicked.connect(self.upload_img)
        det_img_button.clicked.connect(self.detect_img)
        up_img_button.setFont(font_main)
        det_img_button.setFont(font_main)
        up_img_button.setStyleSheet("QPushButton{color:white}"
                                    "QPushButton:hover{background-color: rgb(2,110,180);}"
                                    "QPushButton{background-color:rgb(48,124,208)}"
                                    "QPushButton{border:2px}"
                                    "QPushButton{border-radius:5px}"
                                    "QPushButton{padding:5px 5px}"
                                    "QPushButton{margin:5px 5px}")
        det_img_button.setStyleSheet("QPushButton{color:white}"
                                     "QPushButton:hover{background-color: rgb(2,110,180);}"
                                     "QPushButton{background-color:rgb(48,124,208)}"
                                     "QPushButton{border:2px}"
                                     "QPushButton{border-radius:5px}"
                                     "QPushButton{padding:5px 5px}"
                                     "QPushButton{margin:5px 5px}")
        img_detection_layout.addWidget(img_detection_title, alignment=Qt.AlignCenter)
        img_detection_layout.addWidget(mid_img_widget, alignment=Qt.AlignCenter)
        img_detection_layout.addWidget(up_img_button)
        img_detection_layout.addWidget(det_img_button)
        img_detection_widget.setLayout(img_detection_layout)

        # todo Video detection tab
        # The video page logic is simple: widgets are laid out top to bottom
        vid_detection_widget = QWidget()
        vid_detection_layout = QVBoxLayout()
        vid_title = QLabel("视频检测功能")
        vid_title.setFont(font_title)
        self.vid_img = QLabel()
        self.vid_img.setPixmap(QPixmap("images/UI/up.jpeg"))
        vid_title.setAlignment(Qt.AlignCenter)
        self.vid_img.setAlignment(Qt.AlignCenter)
        self.webcam_detection_btn = QPushButton("摄像头实时监测")
        self.mp4_detection_btn = QPushButton("视频文件检测")
        self.vid_stop_btn = QPushButton("停止检测")
        self.webcam_detection_btn.setFont(font_main)
        self.mp4_detection_btn.setFont(font_main)
        self.vid_stop_btn.setFont(font_main)
        self.webcam_detection_btn.setStyleSheet("QPushButton{color:white}"
                                                "QPushButton:hover{background-color: rgb(2,110,180);}"
                                                "QPushButton{background-color:rgb(48,124,208)}"
                                                "QPushButton{border:2px}"
                                                "QPushButton{border-radius:5px}"
                                                "QPushButton{padding:5px 5px}"
                                                "QPushButton{margin:5px 5px}")
        self.mp4_detection_btn.setStyleSheet("QPushButton{color:white}"
                                             "QPushButton:hover{background-color: rgb(2,110,180);}"
                                             "QPushButton{background-color:rgb(48,124,208)}"
                                             "QPushButton{border:2px}"
                                             "QPushButton{border-radius:5px}"
                                             "QPushButton{padding:5px 5px}"
                                             "QPushButton{margin:5px 5px}")
        self.vid_stop_btn.setStyleSheet("QPushButton{color:white}"
                                        "QPushButton:hover{background-color: rgb(2,110,180);}"
                                        "QPushButton{background-color:rgb(48,124,208)}"
                                        "QPushButton{border:2px}"
                                        "QPushButton{border-radius:5px}"
                                        "QPushButton{padding:5px 5px}"
                                        "QPushButton{margin:5px 5px}")
        self.webcam_detection_btn.clicked.connect(self.open_cam)
        self.mp4_detection_btn.clicked.connect(self.open_mp4)
        self.vid_stop_btn.clicked.connect(self.close_vid)
        # Add the widgets to the layout
        vid_detection_layout.addWidget(vid_title)
        vid_detection_layout.addWidget(self.vid_img)
        vid_detection_layout.addWidget(self.webcam_detection_btn)
        vid_detection_layout.addWidget(self.mp4_detection_btn)
        vid_detection_layout.addWidget(self.vid_stop_btn)
        vid_detection_widget.setLayout(vid_detection_layout)

        # todo About page
        about_widget = QWidget()
        about_layout = QVBoxLayout()
        about_title = QLabel('欢迎使用目标检测系统\n\n 提供付费指导:有需要的好兄弟加下面的QQ即可')  # todo change the welcome message
        about_title.setFont(QFont('楷体', 18))
        about_title.setAlignment(Qt.AlignCenter)
        about_img = QLabel()
        about_img.setPixmap(QPixmap('images/UI/qq.png'))
        about_img.setAlignment(Qt.AlignCenter)
        # label4.setText("<a href='https://oi.wiki/wiki/学习率的调整'>如何调整学习率</a>")
        label_super = QLabel()  # todo change the author info
        label_super.setText("<a href='https://blog.csdn.net/ECHOSON'>或者你可以在这里找到我-->肆十二</a>")
        label_super.setFont(QFont('楷体', 16))
        label_super.setOpenExternalLinks(True)
        label_super.setAlignment(Qt.AlignRight)
        about_layout.addWidget(about_title)
        about_layout.addStretch()
        about_layout.addWidget(about_img)
        about_layout.addStretch()
        about_layout.addWidget(label_super)
        about_widget.setLayout(about_layout)

        self.left_img.setAlignment(Qt.AlignCenter)
        self.addTab(img_detection_widget, '图片检测')
        self.addTab(vid_detection_widget, '视频检测')
        self.addTab(about_widget, '联系我')
        self.setTabIcon(0, QIcon('images/UI/lufei.png'))
        self.setTabIcon(1, QIcon('images/UI/lufei.png'))
        self.setTabIcon(2, QIcon('images/UI/lufei.png'))

    '''
    *** Upload an image ***
    '''
    def upload_img(self):
        # Choose an image file to read
        fileName, fileType = QFileDialog.getOpenFileName(self, 'Choose file', '', '*.jpg *.png *.tif *.jpeg')
        if fileName:
            suffix = fileName.split(".")[-1]
            save_path = osp.join("images/tmp", "tmp_upload." + suffix)
            shutil.copy(fileName, save_path)
            # Resize the image so previews have a uniform size
            im0 = cv2.imread(save_path)
            resize_scale = self.output_size / im0.shape[0]
            im0 = cv2.resize(im0, (0, 0), fx=resize_scale, fy=resize_scale)
            cv2.imwrite("images/tmp/upload_show_result.jpg", im0)
            # self.right_img.setPixmap(QPixmap("images/tmp/single_result.jpg"))
            self.img2predict = fileName
            self.left_img.setPixmap(QPixmap("images/tmp/upload_show_result.jpg"))
            # todo reset the right-hand image after a new upload
            self.right_img.setPixmap(QPixmap("images/UI/right.jpeg"))

    '''
    *** Detect an image ***
    '''
    def detect_img(self):
        model = self.model
        output_size = self.output_size
        source = self.img2predict  # file/dir/URL/glob, 0 for webcam
        imgsz = [640, 640]  # inference size (pixels)
        conf_thres = 0.25  # confidence threshold
        iou_thres = 0.45  # NMS IOU threshold
        max_det = 1000  # maximum detections per image
        device = self.device  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        view_img = False  # show results
        save_txt = False  # save results to *.txt
        save_conf = False  # save confidences in --save-txt labels
        save_crop = False  # save cropped prediction boxes
        nosave = False  # do not save images/videos
        classes = None  # filter by class: --class 0, or --class 0 2 3
        agnostic_nms = False  # class-agnostic NMS
        augment = False  # augmented inference
        visualize = False  # visualize features
        line_thickness = 3  # bounding box thickness (pixels)
        hide_labels = False  # hide labels
        hide_conf = False  # hide confidences
        half = False  # use FP16 half-precision inference
        dnn = False  # use OpenCV DNN for ONNX inference
        print(source)
        if source == "":
            QMessageBox.warning(self, "请上传", "请先上传图片再进行检测")
        else:
            source = str(source)
            device = select_device(self.device)
            webcam = False
            stride, names, pt, jit, onnx = model.stride, model.names, model.pt, model.jit, model.onnx
            imgsz = check_img_size(imgsz, s=stride)  # check image size
            save_img = not nosave and not source.endswith('.txt')  # save inference images
            # Dataloader
            if webcam:
                view_img = check_imshow()
                cudnn.benchmark = True  # set True to speed up constant image size inference
                dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt and not jit)
                bs = len(dataset)  # batch_size
            else:
                dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt and not jit)
                bs = 1  # batch_size
            vid_path, vid_writer = [None] * bs, [None] * bs
            # Run inference
            if pt and device.type != 'cpu':
                model(torch.zeros(1, 3, *imgsz).to(device).type_as(next(model.model.parameters())))  # warmup
            dt, seen = [0.0, 0.0, 0.0], 0
            for path, im, im0s, vid_cap, s in dataset:
                t1 = time_sync()
                im = torch.from_numpy(im).to(device)
                im = im.half() if half else im.float()  # uint8 to fp16/32
                im /= 255  # 0 - 255 to 0.0 - 1.0
                if len(im.shape) == 3:
                    im = im[None]  # expand for batch dim
                t2 = time_sync()
                dt[0] += t2 - t1
                # Inference
                # visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
                pred = model(im, augment=augment, visualize=visualize)
                t3 = time_sync()
                dt[1] += t3 - t2
                # NMS
                pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
                dt[2] += time_sync() - t3
                # Second-stage classifier (optional)
                # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
                # Process predictions
                for i, det in enumerate(pred):  # per image
                    seen += 1
                    if webcam:  # batch_size >= 1
                        p, im0, frame = path[i], im0s[i].copy(), dataset.count
                        s += f'{i}: '
                    else:
                        p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
                    p = Path(p)  # to Path
                    s += '%gx%g ' % im.shape[2:]  # print string
                    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
                    imc = im0.copy() if save_crop else im0  # for save_crop
                    annotator = Annotator(im0, line_width=line_thickness, example=str(names))
                    if len(det):
                        # Rescale boxes from img_size to im0 size
                        det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
                        # Print results
                        for c in det[:, -1].unique():
                            n = (det[:, -1] == c).sum()  # detections per class
                            s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string
                        # Write results
                        for *xyxy, conf, cls in reversed(det):
                            if save_txt:  # Write to file
                                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                                line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                                # with open(txt_path + '.txt', 'a') as f:
                                #     f.write(('%g ' * len(line)).rstrip() % line + '\n')
                            if save_img or save_crop or view_img:  # Add bbox to image
                                c = int(cls)  # integer class
                                label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                                annotator.box_label(xyxy, label, color=colors(c, True))
                                # if save_crop:
                                #     save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
                    # Print time (inference-only)
                    LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')
                    # Stream results
                    im0 = annotator.result()
                    # if view_img:
                    #     cv2.imshow(str(p), im0)
                    #     cv2.waitKey(1)  # 1 millisecond
                    # Save results (image with detections)
                    resize_scale = output_size / im0.shape[0]
                    im0 = cv2.resize(im0, (0, 0), fx=resize_scale, fy=resize_scale)
                    cv2.imwrite("images/tmp/single_result.jpg", im0)
                    # So far this only seems to misbehave on Ubuntu; on Windows it works fine, so carry on
                    self.right_img.setPixmap(QPixmap("images/tmp/single_result.jpg"))

    # Video detection: the logic is basically the same. There are two features,
    # webcam detection and video file detection; webcam detection comes first.
    '''
    ### Window close event ###
    '''
    def closeEvent(self, event):
        reply = QMessageBox.question(self,
                                     'quit',
                                     "Are you sure?",
                                     QMessageBox.Yes | QMessageBox.No,
                                     QMessageBox.No)
        if reply == QMessageBox.Yes:
            self.close()
            event.accept()
        else:
            event.ignore()

    '''
    ### Start webcam detection ###
    '''
    def open_cam(self):
        self.webcam_detection_btn.setEnabled(False)
        self.mp4_detection_btn.setEnabled(False)
        self.vid_stop_btn.setEnabled(True)
        self.vid_source = '0'
        self.webcam = True
        th = threading.Thread(target=self.detect_vid)
        th.start()

    '''
    ### Start video file detection ###
    '''
    def open_mp4(self):
        fileName, fileType = QFileDialog.getOpenFileName(self, 'Choose file', '', '*.mp4 *.avi')
        if fileName:
            self.webcam_detection_btn.setEnabled(False)
            self.mp4_detection_btn.setEnabled(False)
            # self.vid_stop_btn.setEnabled(True)
            self.vid_source = fileName
            self.webcam = False
            th = threading.Thread(target=self.detect_vid)
            th.start()

    '''
    ### Video detection main loop ###
    '''
    # The webcam and video-file cases share the same main function; only the source differs
    def detect_vid(self):
        model = self.model
        output_size = self.output_size
        # source = self.img2predict  # file/dir/URL/glob, 0 for webcam
        imgsz = [640, 640]  # inference size (pixels)
        conf_thres = 0.25  # confidence threshold
        iou_thres = 0.45  # NMS IOU threshold
        max_det = 1000  # maximum detections per image
        # device = self.device  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        view_img = False  # show results
        save_txt = False  # save results to *.txt
        save_conf = False  # save confidences in --save-txt labels
        save_crop = False  # save cropped prediction boxes
        nosave = False  # do not save images/videos
        classes = None  # filter by class: --class 0, or --class 0 2 3
        agnostic_nms = False  # class-agnostic NMS
        augment = False  # augmented inference
        visualize = False  # visualize features
        line_thickness = 3  # bounding box thickness (pixels)
        hide_labels = False  # hide labels
        hide_conf = False  # hide confidences
        half = False  # use FP16 half-precision inference
        dnn = False  # use OpenCV DNN for ONNX inference
        source = str(self.vid_source)
        webcam = self.webcam
        device = select_device(self.device)
        stride, names, pt, jit, onnx = model.stride, model.names, model.pt, model.jit, model.onnx
        imgsz = check_img_size(imgsz, s=stride)  # check image size
        save_img = not nosave and not source.endswith('.txt')  # save inference images
        # Dataloader
        if webcam:
            view_img = check_imshow()
            cudnn.benchmark = True  # set True to speed up constant image size inference
            dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt and not jit)
            bs = len(dataset)  # batch_size
        else:
            dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt and not jit)
            bs = 1  # batch_size
        vid_path, vid_writer = [None] * bs, [None] * bs
        # Run inference
        if pt and device.type != 'cpu':
            model(torch.zeros(1, 3, *imgsz).to(device).type_as(next(model.model.parameters())))  # warmup
        dt, seen = [0.0, 0.0, 0.0], 0
        for path, im, im0s, vid_cap, s in dataset:
            t1 = time_sync()
            im = torch.from_numpy(im).to(device)
            im = im.half() if half else im.float()  # uint8 to fp16/32
            im /= 255  # 0 - 255 to 0.0 - 1.0
            if len(im.shape) == 3:
                im = im[None]  # expand for batch dim
            t2 = time_sync()
            dt[0] += t2 - t1
            # Inference
            # visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
            pred = model(im, augment=augment, visualize=visualize)
            t3 = time_sync()
            dt[1] += t3 - t2
            # NMS
            pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
            dt[2] += time_sync() - t3
            # Second-stage classifier (optional)
            # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
            # Process predictions
            for i, det in enumerate(pred):  # per image
                seen += 1
                if webcam:  # batch_size >= 1
                    p, im0, frame = path[i], im0s[i].copy(), dataset.count
                    s += f'{i}: '
                else:
                    p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
                p = Path(p)  # to Path
                # save_path = str(save_dir / p.name)  # im.jpg
                # txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # im.txt
                s += '%gx%g ' % im.shape[2:]  # print string
                gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
                imc = im0.copy() if save_crop else im0  # for save_crop
                annotator = Annotator(im0, line_width=line_thickness, example=str(names))
                if len(det):
                    # Rescale boxes from img_size to im0 size
                    det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
                    # Print results
                    for c in det[:, -1].unique():
                        n = (det[:, -1] == c).sum()  # detections per class
                        s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string
                    # Write results
                    for *xyxy, conf, cls in reversed(det):
                        if save_txt:  # Write to file
                            xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                            line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                            # with open(txt_path + '.txt', 'a') as f:
                            #     f.write(('%g ' * len(line)).rstrip() % line + '\n')
                        if save_img or save_crop or view_img:  # Add bbox to image
                            c = int(cls)  # integer class
                            label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                            annotator.box_label(xyxy, label, color=colors(c, True))
                            # if save_crop:
                            #     save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
                # Print time (inference-only)
                LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')
                # Stream results
                # Save results (image with detections)
                im0 = annotator.result()
                frame = im0
                resize_scale = output_size / frame.shape[0]
                frame_resized = cv2.resize(frame, (0, 0), fx=resize_scale, fy=resize_scale)
                cv2.imwrite("images/tmp/single_result_vid.jpg", frame_resized)
                self.vid_img.setPixmap(QPixmap("images/tmp/single_result_vid.jpg"))
                # if view_img:
                #     cv2.imshow(str(p), im0)
                #     cv2.waitKey(1)  # 1 millisecond
                cv2.waitKey(25)
                # Stop when the user presses the stop button
                if self.stopEvent.is_set():
                    self.stopEvent.clear()
                    self.webcam_detection_btn.setEnabled(True)
                    self.mp4_detection_btn.setEnabled(True)
                    self.reset_vid()
                    break

    '''
    ### Reset the video page ###
    '''
    def reset_vid(self):
        self.webcam_detection_btn.setEnabled(True)
        self.mp4_detection_btn.setEnabled(True)
        self.vid_img.setPixmap(QPixmap("images/UI/up.jpeg"))
        self.vid_source = '0'
        self.webcam = True

    '''
    ### Stop video detection ###
    '''
    def close_vid(self):
        self.stopEvent.set()
        self.reset_vid()


if __name__ == "__main__":
    app = QApplication(sys.argv)
    mainWindow = MainWindow()
    mainWindow.show()
    sys.exit(app.exec_())

Find Me

You can reach me through any of these channels.

Bilibili: 肆十二-

CSDN: 肆十二

Zhihu: 肆十二

Weibo: 肆十二-

Follow me now and we'll be old friends!



Reposted from: https://blog.csdn.net/ECHOSON/article/details/122404926
Copyright belongs to the original author, 肆十二. If there is any infringement, please contact us for removal.
