[MPlayer-dev-eng] [PATCH] libass: fix parsing of tracks extracted from containers

Uoti Urpala uoti.urpala at pp1.inet.fi
Wed Sep 10 15:21:25 CEST 2008


On Wed, 2008-09-10 at 13:55 +0200, Michael Niedermayer wrote:
> On Sun, Sep 07, 2008 at 06:23:11PM +0300, Uoti Urpala wrote:
> > And even if you thought lavf users wouldn't need it, dropping the
> > information when you remux mkv->mkv is bad unless you can be sure nobody
> > anywhere has a use for it (which I think they do).
> 
> If readorder contains information that is usefull for a realistic case
> Then we can try to find a way to export it.
> Though that requires some realistic use case to be found first.

It makes it easy to compare a .ass file that has been muxed into a
multimedia container and extracted again against one that hasn't. I've
done exactly that, even though I haven't done much subtitle editing work.
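A minimal sketch of that use case (the event tuples here are made-up data): ReadOrder lets a tool restore the original line order of a script after a Matroska roundtrip, so the extracted file can be diffed against the source.

```python
# Events as (ReadOrder, start_ms, end_ms, text). Matroska stores blocks in
# timestamp order, which can differ from the script's original line order.
demuxed = [
    (1, 0, 2000, "Second line in the script, but first on screen"),
    (0, 500, 1500, "First line in the script"),
]

# Sorting by ReadOrder recovers the original script order for comparison;
# without the field, that order is lost in the remux.
restored = sorted(demuxed, key=lambda ev: ev[0])
assert restored[0][3] == "First line in the script"
```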

Your attitude seems to be "we're not too familiar with the area where
this information could be useful, but we'll assume it is useless and
delete it from files muxed through FFmpeg unless someone who knows better
actively comes to tell us why we shouldn't". I think you should at
least do some checking before deciding to delete information that was
deliberately added to the files.

> > think your proposal to move all timing information to the codec level to
> > avoid the container limitations is problematic. It is the opposite of
> > what is done with audio and video. Audio has an implicit duration based
> 
> mpeg2 video does have a duration field in its bitstream, quite well known
> for things like telecine.

Ivan already answered this, and it's not at all similar anyway.

> vorbis packets also contain their duration.

They contain their number of samples. But that duration isn't needed to
time any display changes; nothing special happens when it ends. I think
this post I wrote to ffmpeg-devel earlier adequately explains why
duration matters for subtitles (in the formats that express a subtitle
as a single packet) in a way that differs from audio and video:
http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2008-September/053198.html
So subtitle packets can carry more timing data, and it belongs at the
demuxer level.
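To make the difference concrete (a sketch with invented event data): when an audio packet's samples run out, nothing happens, but a subtitle's end time triggers a visible change, so a player, or a remuxer that retimes the stream, needs the duration without having to parse the codec bitstream.

```python
# Subtitle events as (start_ms, end_ms, text).
events = [(0, 2000, "Hello"), (2500, 4000, "World")]

def visible_at(t_ms):
    # The renderer must know end times to clear the display -- timing that
    # demuxer-level fields can supply without touching packet payloads.
    return [text for start, end, text in events if start <= t_ms < end]

assert visible_at(1000) == ["Hello"]
assert visible_at(2200) == []  # the screen is cleared when the duration ends
```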

> > IMO a better solution would be to keep timing information in
> > demuxer-level fields and out of codec data. Conversion functions can be
> > provided for the case where you need to push the timing information to
> > the codec level because the container has no better place for it.
> 
> There are 2 big problems here
> 1. its all based on your false assumtation that this information is not
>    in codec packets.

It's not an "assumption". It's what I propose to do: use an internal
format for subtitle packets that keeps the timing information out of the
bitstream. If you use the Matroska format internally for SSA/ASS
subtitle packets (converting to it in case some new storage format
appears that uses a different representation), then it _will_ be true
that the timing information is not inside the bitstream.
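The Matroska representation referred to here strips the Dialogue line's Start/End fields out of the codec packet and carries them as the block timestamp and duration instead, with ReadOrder prepended to the remaining fields. A sketch of that conversion (the helper names are made up; the field layout follows the Matroska S_TEXT/ASS mapping):

```python
def ass_time_to_ms(t):
    # "H:MM:SS.CC" -> milliseconds
    h, m, s = t.split(":")
    return int(round((int(h) * 3600 + int(m) * 60 + float(s)) * 1000))

def dialogue_to_mkv_packet(line, read_order):
    # .ass event field order:
    # Layer,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text
    body = line.split("Dialogue: ", 1)[1]
    fields = body.split(",", 9)
    layer, start, end = fields[0], fields[1], fields[2]
    # Timing leaves the bitstream; ReadOrder preserves the original order.
    payload = ",".join([str(read_order), layer] + fields[3:])
    duration = ass_time_to_ms(end) - ass_time_to_ms(start)
    return ass_time_to_ms(start), duration, payload

ts, dur, pkt = dialogue_to_mkv_packet(
    "Dialogue: 0,0:00:00.50,0:00:01.50,Default,,0,0,0,,Hello", 0)
assert (ts, dur) == (500, 1000)
assert pkt == "0,0,Default,,0,0,0,,Hello"  # no timestamps left in the packet
```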

> 2. its based on the assumtion that there are just start and end time.
>    Iam not sure if there really is no subtitle format that cannot apply
>    some effects at intermediat times, and these times would be in the
>    codec bitstream, it would be rater messy to attempt to move this to
>    demuxer packets.

Yes, if there is a lot of other information then in practice it has to
be left inside the bitstream. SSA/ASS can actually have millisecond
offsets from the start/end times for move effects, but AFAIK they're
rarely used. Instead of adding fields to demuxer packets, you could
represent those offsets as a percentage of the duration, so that any
linear transformation of the timeline would work without editing the
packets. Anyway, I think that's a rather minor issue (nothing like the
"big problem" you labeled it as), and nothing that would cause practical
problems with current formats.
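A sketch of the percentage idea (the helper names are hypothetical): if the \move(...,t1,t2) offsets were stored as fractions of the event's duration rather than absolute milliseconds, stretching or shifting the timeline would never require rewriting the packet payload.

```python
def ms_to_fraction(t1_ms, t2_ms, duration_ms):
    # Absolute effect offsets -> fractions of the event's duration.
    return t1_ms / duration_ms, t2_ms / duration_ms

def fraction_to_ms(f1, f2, new_duration_ms):
    # Fractions survive any linear retiming of the event unchanged.
    return round(f1 * new_duration_ms), round(f2 * new_duration_ms)

# A 2000 ms event with a move effect running from 500 ms to 1500 ms...
f1, f2 = ms_to_fraction(500, 1500, 2000)
# ...stretched to 3000 ms: the effect offsets follow automatically.
assert fraction_to_ms(f1, f2, 3000) == (750, 2250)
```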



