Is there a way to improve our model for detecting road signs in images with the OpenCV EAST text detector?

Problem description

In our project we are trying to detect road signs in Python using only the EAST text detector. After a few small changes we managed to get some code working, but our model is not very accurate: in most cases it produces more false positives than true positives. It also detects other text in some images, and we still have to train the model to predict only road signs. Road signs here in the Netherlands are usually white text on a blue background.

One of the sample pictures:

In some pictures the model even predicts on parts of the photo that are not text at all, such as parts of trees, balconies, and so on.

What we would like to achieve:

My question: is there a way to improve this model so that it only predicts road signs?
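
Since the signs of interest are white text on a blue background, one possible direction (only a sketch, not part of the current code) is to post-filter the EAST detections with that colour prior: crop each detected box from the original frame and keep it only if a large enough share of its pixels is blue-ish in HSV space. The function name, the HSV bounds and the 0.4 fraction below are assumptions that would have to be tuned on real images.

import cv2
import numpy as np

def looks_like_blue_sign(bgr_crop, min_blue_fraction=0.4):
    # guard against empty crops
    if bgr_crop.size == 0:
        return False

    # convert the cropped detection to HSV and mask blue-ish pixels
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    lower_blue = np.array([100, 80, 50])     # assumed lower HSV bound for sign blue
    upper_blue = np.array([130, 255, 255])   # assumed upper HSV bound for sign blue
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # fraction of the crop that falls inside the blue range
    blue_fraction = cv2.countNonZero(mask) / float(mask.size)
    return blue_fraction >= min_blue_fraction

In the loop over the boxes near the end of the script, this could be used by cropping orig[startY:endY, startX:endX] (after the coordinates have been scaled back) and only drawing or keeping boxes for which the function returns True.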

Link to the EAST text detection GitHub: https://github.com/dilhelh/opencv-text-detection/blob/master/text_detection.py

# import the necessary packages
from imutils.object_detection import non_max_suppression
import numpy as np
import pandas as pd
import argparse
import time
import cv2

timing = []

# define the two output layer names for the EAST detector model that
# we are interested in -- the first is the output probabilities and the
# second can be used to derive the bounding box coordinates of text
layerNames = [
    "feature_fusion/Conv_7/Sigmoid",
    "feature_fusion/concat_3"]

# load the pre-trained EAST text detector once, before the loop
net = cv2.dnn.readNet('frozen_east_text_detection.pb')

for m in range(5704):
    # load the input image and grab the image dimensions
    image = cv2.imread('/ext/PM-track-data/PM_BAM-data/Breda-ring_22-04-2020_13-57-38/360/' + str(m) + '.jpeg')
    orig = image.copy()
    (H, W) = image.shape[:2]

    # set the new width and height and then determine the ratio of change
    # for both the width and height (EAST expects dimensions that are
    # multiples of 32)
    (newW, newH) = (4096, 2048)
    rW = W / float(newW)
    rH = H / float(newH)

    # resize the image and grab the new image dimensions
    image = cv2.resize(image, (newW, newH))
    (H, W) = image.shape[:2]

    # construct a blob from the image and then perform a forward pass of
    # the model to obtain the two output layer sets
    blob = cv2.dnn.blobFromImage(image, 1.0, (W, H),
        (123.68, 116.78, 103.94), swapRB=True, crop=False)
    start = time.time()
    net.setInput(blob)
    (scores, geometry) = net.forward(layerNames)
    end = time.time()

    # record timing information for the text prediction
    tijd = end - start
    timing.append(tijd)

    # grab the number of rows and columns from the scores volume, then
    # initialize our set of bounding box rectangles and corresponding
    # confidence scores
    (numRows, numCols) = scores.shape[2:4]
    rects = []
    confidences = []

    # loop over the number of rows
    for y in range(0, numRows):
        # extract the scores (probabilities), followed by the geometrical
        # data used to derive potential bounding box coordinates that
        # surround text
        scoresData = scores[0, 0, y]
        xData0 = geometry[0, 0, y]
        xData1 = geometry[0, 1, y]
        xData2 = geometry[0, 2, y]
        xData3 = geometry[0, 3, y]
        anglesData = geometry[0, 4, y]

        # loop over the number of columns
        for x in range(0, numCols):
            # if our score does not have sufficient probability, ignore it
            if scoresData[x] < 0.3:
                continue

            # compute the offset factor as our resulting feature maps will
            # be 4x smaller than the input image
            (offsetX, offsetY) = (x * 4.0, y * 4.0)

            # extract the rotation angle for the prediction and then
            # compute the sin and cosine
            angle = anglesData[x]
            cos = np.cos(angle)
            sin = np.sin(angle)

            # use the geometry volume to derive the width and height of
            # the bounding box
            h = xData0[x] + xData2[x]
            w = xData1[x] + xData3[x]

            # compute both the starting and ending (x, y)-coordinates for
            # the text prediction bounding box
            endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x]))
            endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x]))
            startX = int(endX - w)
            startY = int(endY - h)

            # add the bounding box coordinates and probability score to
            # our respective lists
            rects.append((startX, startY, endX, endY))
            confidences.append(scoresData[x])

    # apply non-maxima suppression to suppress weak, overlapping bounding
    # boxes
    boxes = non_max_suppression(np.array(rects), probs=confidences)

    # loop over the bounding boxes
    for (startX, startY, endX, endY) in boxes:
        # scale the bounding box coordinates back to the original image
        # based on the respective ratios
        startX = int(startX * rW)
        startY = int(startY * rH)
        endX = int(endX * rW)
        endY = int(endY * rH)

        # draw the bounding box on the original image
        cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 255, 0), 2)

    # write the output image to disk
    cv2.imwrite('file path' + str(m) + '.jpeg', orig)
    print('Foto: ' + str(m) + ' is klaar')

Solution

No effective solution to this problem has been found yet.
