How to use FFmpeg with piped capture from Python on Windows 10?

Problem description

I want to record the screen with ffmpeg, since it seems to be the only tool that can capture a screen region together with the mouse cursor.

The following code is adapted from i want to display mouse pointer in my recording, but it does not work on my Windows 10 (x64) setup (using Python 3.6).

#!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w,h = 100,100

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""

    # ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv  # command that actually works from the FFmpeg CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe -pix_fmt bgr24 -vcodec rawvideo -an -sn'

    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE,stderr=subprocess.STDOUT,shell=True)
    #out,err = proc.communicate()
    while True:
        frame = proc.stdout.read(w*h*3)
        yield np.frombuffer(frame,dtype=np.uint8).reshape((h,w,3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot',frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()

Using 'cmd' as above, I receive the following error:

b"ffmpeg version git-2020-08-31-4a11a6f copyright (c) 2000-2020 the FFmpeg developers\r\n  built with gcc 10.2.1 (GCC) 20200805\r\n  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n  libavutil      56. 58.100 / 56. 58.100\r\n  libavcodec     58.101.101 / 58.101.101\r\n  libavformat    58. 51.101 / 58. 51.101\r\n  libavdevice    58. 11.101 / 58. 11.101\r\n  libavfilter     7. 87.100 /  7. 87.100\r\n  libswscale      5.  8.100 /  5.  8.100\r\n  libswresample   3.  8.100 /  3.  8.100\r\n  libpostproc    55.  8.100 / 55.  8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0,gdigrab,from 'desktop':\r\n  Duration: N/A,start: 1599021857.538752,bitrate: 9612 kb/s\r\n    Stream #0:0: Video: bmp,bgra,100x100,9612 kb/s,30 fps,30 tbr,1000k tbn,1000k tbc\r\n**At least one output file must be specified**\r\n"

That is the content of proc (and of proc.communicate). After trying to reshape this message into an image of size 100x100, the program crashes immediately.

I do not want an output file. I need to use a Python subprocess with a pipe, so that these screen frames are passed directly into my Python code without any file IO.

If I try the following instead:

cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'

proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, shell=True)

then 'frame' inside the 'while True' loop is filled with b''.

I tried the following libraries without success, since I could not find how to capture the mouse cursor, or how to capture the screen at all: https://github.com/abhiTronix/vidgear and https://github.com/kkroening/ffmpeg-python

What am I missing? Thank you.

Solution

You are missing the - (or pipe:, or pipe:1) that tells ffmpeg to write to the pipe, as in:

ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe -

See the FFmpeg pipe protocol documentation.
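Putting the fix together, here is a minimal sketch of the question's frame reader with the missing - appended (the ffmpeg path, capture region, and frame size are placeholders; adjust them for your machine). One further detail worth noting: stderr must not be merged into stdout (as the question's stderr=subprocess.STDOUT does), or ffmpeg's log text ends up interleaved with the raw pixel bytes.

```python
import subprocess
import numpy as np

W, H = 100, 100
FRAME_BYTES = W * H * 3  # bgr24 = 3 bytes per pixel

def build_cmd(ffmpeg="ffmpeg"):
    """Build the gdigrab capture command, ending in '-' so raw
    frames are written to stdout instead of a file."""
    return [
        ffmpeg, "-f", "gdigrab",
        "-framerate", "30",
        "-offset_x", "10", "-offset_y", "20",
        "-video_size", f"{W}x{H}",
        "-i", "desktop",
        "-pix_fmt", "bgr24",
        "-vcodec", "rawvideo",
        "-an", "-sn",
        "-f", "image2pipe",
        "-",  # the missing output: write to the pipe (stdout)
    ]

def frames(proc):
    """Yield H x W x 3 uint8 frames until the pipe closes."""
    while True:
        buf = proc.stdout.read(FRAME_BYTES)
        if len(buf) < FRAME_BYTES:  # ffmpeg exited or pipe closed
            return
        yield np.frombuffer(buf, np.uint8).reshape(H, W, 3)

if __name__ == "__main__":
    # Keep stderr separate from stdout: merging them would corrupt
    # the raw pixel stream with ffmpeg's log output.
    proc = subprocess.Popen(build_cmd(), stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    for frame in frames(proc):
        pass  # process the frame here (cv2.imshow, etc.)
```

Passing the command as a list also avoids shell=True and any quoting issues with the ffmpeg path.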


@Trmotta IDK, I'm surprised to hear that you couldn't use vidgear in the first place, since it is one of the simplest Python frameworks available for video processing. With the vidgear APIs, your code can be implemented more concisely and in fewer lines, as follows:

# import required libraries
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2


# define dimensions of the screen region to be captured, w.r.t. the given monitor
options = {'top': 10, 'left': 20, 'width': 100, 'height': 100}

# define suitable FFmpeg parameters (such as framerate) for the writer
output_params = {"-input_framerate": 30}

# open video stream with defined parameters
stream = ScreenGear(monitor=1, logging=True, **options).start()

# Define writer with defined parameters and a suitable output filename, e.g. `Output.mp4`
writer = WriteGear(output_filename='Output.mp4', logging=True, **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break


    # {do something with the frame here}

    # write frame to writer
    writer.write(frame)

    # Show output window
    cv2.imshow("Screenshot",frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

# safely close writer
writer.close()

The relevant documentation is here: https://abhitronix.github.io/vidgear/gears/screengear/overview/

VidGear documentation: https://abhitronix.github.io/vidgear/gears