Problem description
I am trying to compute the Manhattan distance between a specific cropped region of each frame of a video and the same region of the first frame. These are high-frame-rate videos with roughly 5000+ frames each, and the analysis currently takes about 120 seconds per video to produce the list of frames and their associated Manhattan distances. I have already tried several things to optimize for speed: the built-in scipy.spatial.distance.cdist function, np.linalg.norm with the Euclidean distance, and moving the grayscale conversion outside the loop as a preprocessing step. None of these changes had a significant effect on the computation time. Is there a way to substantially speed up this process (the frame loop in the function below)?
def compare_images_master():
    current_frame = 0
    numFrames = count_frames_automatic(resized_videocap)
    GrayOriginalFrame = cv2.cvtColor(DetectionPlate, cv2.COLOR_BGR2GRAY)
    originalFrameMap = GrayOriginalFrame.astype(float)
    m_norm_list = [0] * numFrames
    frame_num_list = list(range(numFrames))
    start_compare_images = time.time()
    print("calculating Manhattan distances...")
    for current_frame in frame_num_list:
        resized_videocap.set(1, current_frame)  # set current frame to the frame to be computed
        ret, currentFrameImage = resized_videocap.read()  # read video file and open current frame
        currentFrameImageCropped = currentFrameImage[ref_pts[0][1]:ref_pts[1][1], ref_pts[0][0]:ref_pts[1][0]]  # crop image to detection plate
        GrayCurrentFrame = cv2.cvtColor(currentFrameImageCropped, cv2.COLOR_BGR2GRAY)  # convert to grayscale
        currentFrameMap = GrayCurrentFrame.astype(float)  # convert from image to pixel frame map
        diff = originalFrameMap - currentFrameMap
        m_norm = np.sum(np.abs(diff))  # Manhattan norm
        m_norm_list[current_frame] = m_norm
    end_compare_images = time.time()
    print("Completed calculating Manhattan distances...")
    print('time taken to generate manhattan distances', end_compare_images - start_compare_images)
    # time taken - 120 seconds
    return m_norm_list, frame_num_list
Solution
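The Manhattan-distance arithmetic itself is already vectorized and is unlikely to dominate. A more probable bottleneck is `resized_videocap.set(1, current_frame)`: seeking in a compressed video typically forces the decoder to jump to the nearest keyframe and decode forward, so every iteration pays a seek-plus-decode cost. Since the loop visits frames 0..N-1 in order anyway, a plain sequential `read()` loop decodes each frame exactly once. Below is a minimal sketch under that assumption; the function and parameter names (`video_path`, `ref_pts`) are illustrative, not from the original code, and the absolute-difference sum is done in integer arithmetic rather than via `astype(float)`, which is an equivalent but cheaper computation for 8-bit images.

```python
import numpy as np

def manhattan(a, b):
    # Sum of absolute differences between two uint8 grayscale images.
    # int16 is wide enough for any uint8 difference (-255..255), and
    # np.sum accumulates in a wider integer type, so no overflow.
    return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

def compare_images_master(video_path, ref_pts):
    """Compare each frame's cropped region against the first frame's.

    Frames are read sequentially instead of seeking with cap.set(),
    so the decoder does one decode per frame rather than a seek each time.
    ref_pts is ((x0, y0), (x1, y1)), the crop rectangle.
    """
    import cv2  # OpenCV-Python, assumed installed as in the original code
    (x0, y0), (x1, y1) = ref_pts
    cap = cv2.VideoCapture(video_path)
    m_norm_list = []
    ref_gray = None
    while True:
        ret, frame = cap.read()  # decode the next frame in order
        if not ret:
            break
        crop = frame[y0:y1, x0:x1]  # crop to the detection plate
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        if ref_gray is None:
            ref_gray = gray  # first frame is the reference
        m_norm_list.append(manhattan(ref_gray, gray))
    cap.release()
    return m_norm_list, list(range(len(m_norm_list)))
```

If the 120 seconds persists even with sequential reads, the remaining cost is raw decode time, and the next lever would be decoding at a lower resolution or sampling every Nth frame rather than micro-optimizing the norm itself.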