[FFmpeg-user] Questions about the concat video filter
knfrances at gmail.com
Tue Jan 23 12:09:39 EET 2018
I've been investigating the concat filter recently, and after the
documentation at https://ffmpeg.org/ffmpeg-filters.html#concat and the wiki
page at https://trac.ffmpeg.org/wiki/Concatenate#differentcodec, I have a
couple of further questions.
The documentation says,

> Related streams do not always have exactly the same duration, for various
> reasons including codec frame size or sloppy authoring. For that reason,
> related synchronized streams (e.g. a video and its audio track) should be
> concatenated at once.
It's the wording "at once" that is tripping me up. Does it mean 'together',
i.e. that all streams from a given input should be specified consecutively,
as shown in the examples, rather than grouping the video streams from all
inputs first, then the audio streams from all inputs?
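For the record, here is what I mean by "consecutively", in the same order as
the documentation's own examples (file names a.mp4 and b.mp4 are just
placeholders; each is assumed to have one video and one audio stream):

```shell
# Each input's streams are listed together, video then audio,
# before moving on to the next input.
ffmpeg -i a.mp4 -i b.mp4 -filter_complex \
  "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" out.mp4
```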
The documentation also says,

> The concat filter will use the duration of the longest stream in each
> segment (except the last one), and if necessary pad shorter audio streams
> with silence.
Can I then assume that, when the video stream is the shorter one, it will
be padded with black slug (black frames)?
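In case it helps anyone answer: the workaround I am considering, if the
default behaviour turns out not to be black padding, is to pad the video
explicitly with the tpad filter before concatenating. A sketch (file names
and the 0.5-second shortfall are assumptions for illustration):

```shell
# Explicitly append 0.5 s of black to the first input's video with tpad,
# so concat does not have to pad anything implicitly.
ffmpeg -i a.mp4 -i b.mp4 -filter_complex \
  "[0:v]tpad=stop_mode=add:stop_duration=0.5:color=black[v0]; \
   [v0][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" out.mp4
```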
Further, if the specs for the output streams are not explicitly set by the
user, how does FFmpeg 'decide' which codec, options, etc. to use? The docs
say,

> the filtering system will automatically select a common pixel format for
> video streams, and a common sample format, sample rate and channel layout
> for audio streams, but other settings, such as resolution, must be
> converted explicitly by the user.
but I find the wording isn't very clear. From my tests, it seems to default
to something similar to the specs of the lower-quality input. Can someone
confirm, or give some more information about the default output codec
settings?
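For now I am working around the uncertainty by taking the defaults out of
the equation entirely: normalizing resolution and pixel format before the
concat, and setting the output codec explicitly. A sketch (file names and
the 1280x720 / yuv420p / libx264 targets are my own choices, not anything
the docs prescribe):

```shell
# Scale both videos to a common resolution and pixel format first,
# then concatenate and encode with explicitly chosen codecs.
ffmpeg -i a.mp4 -i b.mp4 -filter_complex \
  "[0:v]scale=1280:720,format=yuv420p[v0]; \
   [1:v]scale=1280:720,format=yuv420p[v1]; \
   [v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" -c:v libx264 -crf 18 -c:a aac out.mp4
```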
Disclaimer: these might be noob questions to some of you, but for me it's
not obvious, so I would appreciate clarification. Thank you.