Expanding the region of a convex hull

Problem description

Using dlib's facial landmark indexes, I located the landmark points of each eye on my face. After taking the convex hull of both eyes, I used cv2.fillConvexPoly to mask them out in the video frame. I would like to know whether the area of these convex hulls can be expanded so that more of the image is visible, rather than only the inner part of the eyes (as shown in image 2). Basically, I want to enlarge the masked eye contours. Any help would be greatly appreciated!
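
One way to enlarge each hull (a rough sketch of the idea rather than code from the original post; the 1.5 scale factor and the expandedHull name are only illustrative) is to push every hull point outward from the hull's centroid before filling it into the mask:

# expand a convex hull away from its centroid before filling the mask
hull = cv2.convexHull(leftEye)                        # (N, 1, 2) integer points
center = hull.mean(axis=0)                            # centroid of the hull points
expandedHull = (center + 1.5 * (hull - center)).astype(np.int32)
cv2.fillConvexPoly(mask, expandedHull, 255)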


# import the necessary packages
from scipy.spatial import distance as dist
from scipy.spatial import ConvexHull,convex_hull_plot_2d
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2
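
# NOTE: helper added here because the loop below calls eye_aspect_ratio()
# without defining it anywhere; this is the standard EAR formula (average of
# the two vertical eye-landmark distances over the horizontal distance)
def eye_aspect_ratio(eye):
    A = dist.euclidean(eye[1], eye[5])  # vertical distance (landmarks 2-6)
    B = dist.euclidean(eye[2], eye[4])  # vertical distance (landmarks 3-5)
    C = dist.euclidean(eye[0], eye[3])  # horizontal distance (landmarks 1-4)
    return (A + B) / (2.0 * C)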

 
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p","--shape-predictor",required=True,help="path to facial landmark predictor")
ap.add_argument("-v","--video",type=str,default="",help="path to input video file")
args = vars(ap.parse_args())
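
# example invocation (the script file name is illustrative; the model file is
# dlib's standard 68-point predictor):
#   python eye_mask.py --shape-predictor shape_predictor_68_face_landmarks.dat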

# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

# grab the indexes of the facial landmarks for the left and
# right eye,respectively
(lStart,lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart,rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]


# start the video stream thread
print("[INFO] starting video stream thread...")

#fileStream = True
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
fileStream = False
time.sleep(1.0)

# loop over frames from the video stream
while True:
    # grab the frame from the threaded video stream, resize it,
    # and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=800)  # imutils.resize keeps the aspect ratio
    gray = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
    mask = np.zeros_like(gray)

    # detect faces in the grayscale frame
    rects = detector(gray,0)

    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region,then
        # convert the facial landmark (x,y)-coordinates to a NumPy
        # array
        shape = predictor(gray,rect)
        shape = face_utils.shape_to_np(shape)

        # extract the left and right eye coordinates, then use the
        # coordinates to compute the eye aspect ratio for both eyes
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)  # EAR values are not used below

        # compute the convex hull for the left and right eye,then
        # visualize each of the eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        
        cv2.fillConvexPoly(mask, leftEyeHull, 255)
        cv2.fillConvexPoly(mask, rightEyeHull, 255)

    # apply the mask to the frame and show the result (done outside the face
    # loop so that `eyes` is defined even when no face is detected)
    eyes = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("eyes", eyes)
    cv2.imshow("mask", mask)
    key = cv2.waitKey(1) & 0xFF
 
    # if the `q` key was pressed,break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
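
If the goal is simply to reveal a little more of the face around each eye, another option (again only a sketch; the 25x25 elliptical kernel is an arbitrary size) is to dilate the filled mask before applying it, which grows every masked region outward by roughly half the kernel size:

# grow the filled eye regions outward before masking the frame
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
mask = cv2.dilate(mask, kernel, iterations=1)
eyes = cv2.bitwise_and(frame, frame, mask=mask)

Larger kernels (or more iterations) expose more of the surrounding eye area.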

[Image 1: mask of the inner eye part]

[Image 2: inner part of the eyes]

Solution

No working solution to this problem has been found yet.
