[FFmpeg-devel] [PATCH] Compute individual stream durations in matroska muxer. Write them as binary tags. Parse the binary tags in matroska demuxer, and write them to AVStream

Hendrik Leppkes h.leppkes at gmail.com
Wed Jul 29 23:18:17 CEST 2015


On Wed, Jul 29, 2015 at 11:14 PM, Jerome Martinez <jerome at mediaarea.net> wrote:
> On 29/07/2015 at 22:41, Hendrik Leppkes wrote:
>>
>> On Wed, Jul 29, 2015 at 8:15 PM, Sasi Inguva <isasi at google.com> wrote:
>>>
>>> @Reimar:
>>> True about the stream duration being wrong if the stream timestamps do
>>> not start at 0. I just duplicated the logic used to compute the total
>>> duration, in which case the total duration, as it is computed now, is
>>> also wrong. Printing the durations out in the logs and then parsing the
>>> logs to get the stream durations would require a big architectural
>>> change on my side. It would be far more convenient if I could get the
>>> stream durations from the AVStream object.
>>>
>>> FFmpeg does write one seek entry for every cluster at the end of the
>>> file. I could possibly seek to all the cluster seek entries and then
>>> try to find the last cluster for each track. But in the worst case even
>>> this would translate to demuxing the whole file: suppose the audio is
>>> small enough to fit entirely in one cluster, but the cluster starts
>>> with a video packet; to make sure that I have found the end of the
>>> audio stream, I would have to parse the whole cluster. It also seems
>>> like very complicated logic just to determine the durations.
>>>
>>>
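A minimal sketch of the muxer-side bookkeeping described above, assuming a
hypothetical PerStreamStats kept per track: record the smallest pts and the
largest pts + duration seen for each stream while packets are written, and
take the difference when writing the trailer, so the result stays correct
even when the stream does not start at timestamp 0. The names below are
illustrative, not the code from the patch.

#include <libavformat/avformat.h>  /* AVPacket, AV_NOPTS_VALUE */
#include <libavutil/common.h>      /* FFMAX */

/* Hypothetical per-stream bookkeeping, updated once per written packet.
 * first_pts must be initialized to AV_NOPTS_VALUE when allocated. */
typedef struct PerStreamStats {
    int64_t first_pts;  /* smallest pts seen so far */
    int64_t max_end;    /* largest pts + duration seen so far */
} PerStreamStats;

static void update_stats(PerStreamStats *s, const AVPacket *pkt)
{
    int64_t end;

    if (pkt->pts == AV_NOPTS_VALUE)
        return;                              /* nothing usable to track */

    end = pkt->pts + FFMAX(pkt->duration, 0);

    if (s->first_pts == AV_NOPTS_VALUE) {    /* first packet for this stream */
        s->first_pts = pkt->pts;
        s->max_end   = end;
        return;
    }
    if (pkt->pts < s->first_pts)
        s->first_pts = pkt->pts;
    if (end > s->max_end)
        s->max_end = end;
}

/* Duration in the stream's time base; 0 if no packet was ever written. */
static int64_t stream_duration(const PerStreamStats *s)
{
    return s->first_pts == AV_NOPTS_VALUE ? 0 : s->max_end - s->first_pts;
}
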
>> Still, writing metadata to every single mkv muxed with ffmpeg to fix
>> your specific use case seems rather terrible.
>> We shouldn't be writing extra metadata to serve one special use case,
>> just so you can avoid a bit of code shuffling on your end.
>>
>> If you can clarify how this could be useful generically, we might
>> consider it differently.
>
> Lots of people wish to have a duration per stream, in order to check that
> transcoding did not change the duration of each stream (each stream may
> have a different duration), or to get more precise information about the
> files they receive (for example, some people want to reject files which do
> not have the same video and audio duration).
>
> As the developer of a tool which reads this kind of metadata, I am often
> asked by different users to provide such information per stream for
> Matroska/WebM files. Unfortunately I cannot point to a specific use case;
> I can only say it is a common request.
>
> But such a request is usually tied to more metadata: data rate, byte count
> and frame count. FLV metadata (the OP talked about FLV) usually makes it
> possible to get the bit rate per stream, which is the most requested
> metadata on my side and which is impossible to get without additional
> metadata in Matroska when there are e.g. AVC and DTS-HD, two streams with
> variable bitrate.
> Additionally, mkvmerge has been adding such metadata for a couple of
> versions now; I guess that is also due to requests from different users.
>
> That said, as the developer of a tool which will read such added metadata,
> I don't like having different methods for embedding it.
> Why not use the same method and values as mkvmerge (the non-binary BPS,
> DURATION, NUMBER_OF_FRAMES, NUMBER_OF_BYTES, _STATISTICS_WRITING_APP and
> _STATISTICS_WRITING_DATE_UTC tags), in order to have something which could
> be useful more generically?
>
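
For reference, a rough sketch of how the mkvmerge-style values suggested
above could be derived from per-track totals (byte count and duration in
nanoseconds), assuming mkvmerge's conventions of DURATION as an
HH:MM:SS.nnnnnnnnn string and BPS as an integer number of bits per second.
These helpers are illustrative and not part of FFmpeg.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Format a duration given in nanoseconds as HH:MM:SS.nnnnnnnnn,
 * the string form mkvmerge uses for the DURATION track tag. */
static void format_duration_tag(char *buf, size_t size, int64_t duration_ns)
{
    int64_t sec = duration_ns / 1000000000;
    int64_t ns  = duration_ns % 1000000000;

    snprintf(buf, size, "%02" PRId64 ":%02" PRId64 ":%02" PRId64 ".%09" PRId64,
             sec / 3600, (sec / 60) % 60, sec % 60, ns);
}

/* BPS = total bits / duration in seconds, rounded to the nearest integer.
 * Computed in double to avoid 64-bit overflow on large byte counts. */
static int64_t bits_per_second(int64_t num_bytes, int64_t duration_ns)
{
    if (duration_ns <= 0)
        return 0;
    return (int64_t)(num_bytes * 8.0 * 1e9 / duration_ns + 0.5);
}

NUMBER_OF_BYTES and NUMBER_OF_FRAMES would then simply be the raw per-track
counters printed as decimal strings.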

I agree; if anything, it should write values compatible with mkvmerge,
given that mkvmerge already writes similar things.

- Hendrik
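
On the demuxer side, the other half of the patch subject (exposing the
parsed per-track duration on the AVStream) could look roughly like the
following, assuming the tag carries the HH:MM:SS.nnnnnnnnn string form
discussed above. parse_duration_tag is a hypothetical helper, not an
existing FFmpeg function.

#include <inttypes.h>
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>   /* av_rescale_q */

/* Convert an "HH:MM:SS.nnnnnnnnn" DURATION tag value into the stream's
 * own time base and store it on the AVStream. */
static int parse_duration_tag(AVStream *st, const char *value)
{
    int h, m, s;
    int64_t ns;

    if (sscanf(value, "%d:%d:%d.%9" SCNd64, &h, &m, &s, &ns) != 4)
        return AVERROR_INVALIDDATA;

    st->duration = av_rescale_q((h * 3600LL + m * 60 + s) * 1000000000LL + ns,
                                (AVRational){ 1, 1000000000 },
                                st->time_base);
    return 0;
}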

