FFMPEG: Specifying the output stream type when combining multiple filters

Problem description

I currently have 3 separate ffmpeg commands that do the following:

  1. Overlay a watermark on the video: ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -r 30 -filter_complex "overlay=(W-w)/2:H-h" -af "adelay=700" output.mp4
  2. Overlay the result of 1) onto a beach video: ffmpeg -i backgrounds/beachsunsetmp4.mp4 -i output.mp4 -filter_complex "[1:v]chromakey=0x005d0b:0.1485:0.03[ckout];[0:v][ckout]overlay[o]" -map [o] -map 1:a -shortest somefolder/sample_video.mp4
  3. Merge the audio of the result of 2) with another audio file: ffmpeg -i somefolder/sample_video.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex '[0:a][1:a]amerge=inputs=2[a]' -map 0:v -map '[a]' -c:v copy -ac 2 -shortest anotherfolder/sample_video.mp4

Now, this all works as expected, but I'm looking at combining everything into a single command that chains all of the filters, like so:

ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -r 30 -i backgrounds/beachsunsetmp4.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex \
    "[0]overlay=(W-w)/2:H-h[output_1]; \
     [output_1]chromakey=0x005d0b:0.1485:0.03[ckout]; \
     [2:v][ckout]overlay[output_2]; \
     [output_2][3:a] amerge=inputs=2 [output_3]" \
    -af "adelay=700" -map [output_3] shortest final.mp4

It fails with the following error (Media type mismatch between the 'Parsed_overlay_2' filter output pad 0 (video) and the 'Parsed_amerge_3' filter input pad 0 (audio)):

ffmpeg version 4.3.2 Copyright (c) 2000-2021 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.2_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Input #0, matroska,webm, from 'samplegreen.webm':
  Metadata:
    encoder         : Chrome
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #0:0(eng): Video: vp8, yuv420p(progressive), 1280x720, SAR 1:1 DAR 16:9, 1k tbr, 1k tbn, 1k tbc (default)
    Metadata:
      alpha_mode      : 1
    Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
Input #1, png_pipe, from 'foregrounds/myimage.png':
  Duration: N/A, bitrate: N/A
    Stream #1:0: Video: png, rgba(pc), 350x86, 25 tbr, 25 tbn, 25 tbc
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'backgrounds/beachsunsetmp4.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    creation_time   : 2021-02-16T18:24:40.000000Z
  Duration: 00:00:32.53, bitrate: 3032 kb/s
    Stream #2:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 3027 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
    Metadata:
      creation_time   : 2021-02-16T18:24:40.000000Z
      handler_name    : ?Mainconcept Video Media Handler
      encoder         : AVC Coding
[mp3 @ 0x7f86cf809000] Estimating duration from bitrate, this may be inaccurate
Input #3, mp3, from 'backgrounds/beachsunsetmp4.mp3':
  Metadata:
    date            : 2021-02-18 06:49
    id3v2_priv.XMP  : <?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>\x0a<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="Adobe XMP Core 6.0-c003 79.164527, 2020/10/15-17:48:32        ">\x0a <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">\x0a  <rdf
  Duration: 00:00:32.60, bitrate: 132 kb/s
    Stream #3:0: Audio: mp3, stereo, fltp, 128 kb/s
[Parsed_overlay_2 @ 0x7f86cd4039c0] Media type mismatch between the 'Parsed_overlay_2' filter output pad 0 (video) and the 'Parsed_amerge_3' filter input pad 0 (audio)
[AVFiltergraph @ 0x7f86cd402a40] Cannot create the link overlay:0 -> amerge:0
Error initializing complex filters.
Invalid argument

As far as I can tell, the problem is that the amerge filter requires 2 audio streams. Normally I could take an input stream specifier (which points at a video file) and make it use the audio by doing something like [0:a][1:a]amerge=inputs=2[results]. However, since my input here is the output of a previous filter, that doesn't seem to work (i.e. [output_2:a]). It blows up with:

[matroska,webm @ 0x7fecca000000] Invalid stream specifier: output_2:a.
    Last message repeated 1 times
Stream specifier 'output_2:a' in filtergraph description [0]overlay=(W-w)/2:H-h[output_1];      [output_1]chromakey=0x005d0b:0.1485:0.03[ckout];      [2:v][ckout]overlay[output_2];      [output_2:a][3:a] amerge=inputs=2 [output_3] matches no streams.

All of that said... is there a way to specify that I want to use the audio stream from a previous filter's output? Or is there some other way to combine all of these filters into a single command?

Thanks.

Any help would be greatly appreciated!

Solution

Apart from a handful of filters such as concat, a given filter accepts only video inputs or only audio inputs, never a mix.
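
In other words, a filtergraph label such as [ckout] or [output_2] names a single pad of one fixed type (video, in this case), so there is no audio "inside" it to pull out with :a. The audio has to run through its own chain, fed directly from the input files, and be mapped separately. A minimal sketch of that shape, using placeholder file names rather than the ones from the question:

# video goes through its own chain ending in [v]; audio through its own ending in [a]; both are mapped
ffmpeg -i video.mp4 -i music.mp3 -filter_complex \
 "[0:v]scale=1280:720[v]; \
  [0:a][1:a]amerge=inputs=2[a]" \
 -map '[v]' -map '[a]' -ac 2 out.mp4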


Here is the combined command.

ffmpeg \
 -i samplegreen.webm \
 -i foregrounds/myimage.png \
 -i backgrounds/beachsunsetmp4.mp4 \
 -i backgrounds/beachsunsetmp4.mp3 \
 -filter_complex \
 "[0][1]overlay=(W-w)/2:H-h,chromakey=0x005d0b:0.1485:0.03[ckout]; \
  [2][ckout]overlay=shortest=1[v]; \
  [0]adelay=700:all=1[0a]; \
  [0a][3]amerge=inputs=2[a]" \
 -map '[v]' -map '[a]' \
 -shortest -r 30 -ac 2 \
output.mp4
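
The key point is that nothing audio-related is ever asked of a video pad. The watermark overlay and the chromakey are chained with a comma into a single video chain ending in [ckout]; that keyed result is overlaid onto the beach video as [v]; and the audio runs through its own chain, where [0]adelay=700:all=1[0a] delays input 0's audio by 700 ms ([0] resolves to that file's audio stream here, since adelay only accepts audio) before amerge joins it with input 3 as [a]. -ac 2 then brings the merged channels back down to stereo. To check what ended up in the output, something like this (assuming ffprobe is installed alongside ffmpeg) should report one video stream and one 2-channel audio stream:

# list each stream's index, type, and (for audio) channel count
ffprobe -v error -show_entries stream=index,codec_type,channels -of compact output.mp4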