YOLOv5 detection and tracking: how to draw a continuous line joining the previous point to the current point while the object stays in frame

Problem description

I am trying to detect a person and a ball in a video feed. I can identify both objects and draw a box around each detection, but how do I draw a continuous line along the path they move? I downloaded the detect.py file from the YOLOv5 GitHub repo and customized the objects to be detected.

I want to draw a continuous line connecting the previous point to the current point, for as long as the object remains in the video.

I need to draw a line along the ball's trajectory, like in this picture:

(image: example of a line drawn along the ball's trajectory)

# Apply Classifier
if classify:
    pred = apply_classifier(pred,modelc,img,im0s)

# Process detections
for i,det in enumerate(pred):  # detections per image
    if webcam:  # batch_size >= 1
        p,s,im0,frame = path[i],f'{i}: ',im0s[i].copy(),dataset.count
    else:
        p,s,im0,frame = path,'',im0s.copy(),getattr(dataset,'frame',0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # img.jpg
    txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
    s += '%gx%g ' % img.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1,1,0]]  # normalization gain whwh
    imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:,:4] = scale_coords(img.shape[2:],det[:,:4],im0.shape).round()

        # Print results
        for c in det[:,-1].unique():
            n = (det[:,-1] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)},"  # add to string

        # Write results
        for *xyxy,conf,cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1,4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls,*xywh,conf) if opt.save_conf else (cls,*xywh)  # label format
                with open(txt_path + '.txt','a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')

            if save_img or opt.save_crop or view_img:  # Add bbox to image
                c = int(cls)  # integer class
                label = None if opt.hide_labels else (names[c] if opt.hide_conf else f'{names[c]} {conf:.2f}')
                plot_one_box(xyxy,label=label,color=colors(c,True),line_thickness=opt.line_thickness)
                if opt.save_crop:
                    save_one_box(xyxy,imc,file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg',BGR=True)

    # Print time (inference + NMS)
    print(f'{s}Done. ({t2 - t1:.3f}s)')

    view_img=True
    # Stream results
    if view_img:
        cv2.imshow(str(p),im0)
        cv2.waitKey(1)  # 1 millisecond

Solution

Assume you only need to track one ball. Once the ball has been detected in each frame, all you have to do is draw a translucent yellow line from the ball's center in the first frame where it was detected to its center in the next frame, and so on. The line width could be, for example, 30% of the ball's width in that frame. Just keep a list of the object's centers and sizes.

Now, if you have several balls that never cross, all you need to do is match each ball to the closest one in the previous few frames.
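That matching step can be sketched as a greedy nearest-neighbour assignment; `match_to_previous` and the `max_dist` threshold are assumptions, so tune the threshold for your footage:

```python
import math

def match_to_previous(prev_centers, new_centers, max_dist=80.0):
    """Greedily assign new detections to existing tracks by distance.

    Returns {new_index: prev_index}; detections farther than max_dist
    from every track are left unmatched (treated as new balls).
    """
    pairs = []
    for i, c in enumerate(new_centers):
        for j, p in enumerate(prev_centers):
            d = math.dist(c, p)
            if d <= max_dist:
                pairs.append((d, i, j))
    pairs.sort()  # closest pairs get assigned first
    assigned, used_prev, used_new = {}, set(), set()
    for d, i, j in pairs:
        if i not in used_new and j not in used_prev:
            assigned[i] = j
            used_new.add(i)
            used_prev.add(j)
    return assigned
```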

Finally, if the balls do cross, estimate their motion vectors (run a regression over a few frames before and after the moment two "ball" objects merge into one, or stop being recognized as balls and then split again) and assign the trajectories based on their historical positions.
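One way to get such a motion vector is a constant-velocity fit over a track's last few positions; `predict_next` below is a hypothetical helper built on `numpy.polyfit`, and its predicted point can be compared against the ambiguous detections after a crossing:

```python
import numpy as np

def predict_next(history, k=5):
    """Extrapolate the next center of a track by fitting a line
    (constant velocity) to its last k positions.

    history: list of (x, y) centers, oldest first.
    """
    pts = np.asarray(history[-k:], dtype=float)
    t = np.arange(len(pts))
    fx = np.polyfit(t, pts[:, 0], 1)  # x(t) = vx * t + x0
    fy = np.polyfit(t, pts[:, 1], 1)  # y(t) = vy * t + y0
    t_next = len(pts)
    return (np.polyval(fx, t_next), np.polyval(fy, t_next))
```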

If the trajectory is too jittery, smooth the positions/widths with a moving median.
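A moving median over the stored values can be sketched like this (`smooth_median` is a hypothetical helper; apply it separately to the x coordinates, y coordinates, and widths):

```python
import statistics

def smooth_median(values, window=5):
    """Smooth a 1-D sequence with a centered moving median.

    Endpoints use a shrunken window so the output has the same length.
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        out.append(statistics.median(values[lo:hi]))
    return out
```

A median (rather than a mean) discards one-frame detection glitches instead of averaging them into the trajectory.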


The following structure may help:

This handles the case of exactly one detection per frame:


import cv2
import numpy

# A list to store the centroids of the detected object
cent_hist = []

def draw_trajectory(frame: numpy.ndarray,cent_hist: list = cent_hist,trajectory_length: int = 50) -> numpy.ndarray:
    # Keep only the most recent `trajectory_length` centroids
    if len(cent_hist)>trajectory_length:
        while len(cent_hist)!=trajectory_length:
            cent_hist.pop(0)
    # Connect consecutive centroids with line segments
    for i in range(len(cent_hist)-1):
        frame = cv2.line(frame,cent_hist[i],cent_hist[i+1],(0,255,255),2)
    return frame

for i,det in enumerate(pred):  # detections per image
    if webcam:  # batch_size >= 1
        p,s,im0,frame = path[i],f'{i}: ',im0s[i].copy(),dataset.count
    else:
        p,s,im0,frame = path,'',im0s.copy(),getattr(dataset,'frame',0)

    p = Path(p)  # to Path
    save_path = str(save_dir / p.name)  # img.jpg
    txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
    s += '%gx%g ' % img.shape[2:]  # print string
    gn = torch.tensor(im0.shape)[[1,1,0]]  # normalization gain whwh
    imc = im0.copy() if opt.save_crop else im0  # for opt.save_crop
    if len(det):
        # Rescale boxes from img_size to im0 size
        det[:,:4] = scale_coords(img.shape[2:],det[:,:4],im0.shape).round()

        # Print results
        for c in det[:,-1].unique():
            n = (det[:,-1] == c).sum()  # detections per class
            s += f"{n} {names[int(c)]}{'s' * (n > 1)},"  # add to string

        # Write results
        for *xyxy,conf,cls in reversed(det):
            if save_txt:  # Write to file
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1,4)) / gn).view(-1).tolist()  # normalized xywh
                line = (cls,*xywh,conf) if opt.save_conf else (cls,*xywh)  # label format
                with open(txt_path + '.txt','a') as f:
                    f.write(('%g ' * len(line)).rstrip() % line + '\n')

            if save_img or opt.save_crop or view_img:  # Add bbox to image
                c = int(cls)  # integer class
                label = None if opt.hide_labels else (names[c] if opt.hide_conf else f'{names[c]} {conf:.2f}')
                plot_one_box(xyxy,label=label,color=colors(c,True),line_thickness=opt.line_thickness)
                if opt.save_crop:
                    save_one_box(xyxy,imc,file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg',BGR=True)

    # Print time (inference + NMS)
    print(f'{s}Done. ({t2 - t1:.3f}s)')
    
    ### Calculate centroid here
    centroid = (50,50) # Change this

    cent_hist.append(centroid)
    im0 = draw_trajectory(im0,cent_hist,50)

    view_img=True
    # Stream results
    if view_img:
        cv2.imshow(str(p),im0)
        cv2.waitKey(1)  # 1 millisecond
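The `centroid = (50,50)` placeholder above would normally be computed from the detection box; a sketch, assuming the `xyxy` values from the "Write results" loop (`box_centroid` is a hypothetical helper):

```python
def box_centroid(xyxy):
    """Center of a detection box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = (int(v) for v in xyxy)
    return ((x1 + x2) // 2, (y1 + y2) // 2)
```

Inside the detection loop this would be used as `centroid = box_centroid(xyxy)`.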

If you want to use this with multiple detections, then I would suggest using an object-tracking algorithm such as: link, which will help you solve the assignment problem better (when you have multiple points).