[MPlayer-G2-dev] bypassing PTS in video filter layer
D Richard Felker III
dalias at aerifal.cx
Fri Aug 1 20:41:49 CEST 2003
On Fri, Aug 01, 2003 at 08:11:25PM +0200, Arpi wrote:
> I have a problem. (who has no problems? :))
> Currently the PTS info (absolute and relative timestamp, duration time)
> is stored in the mpi (mp_image_t) structure while going through the
> video filter layer. It's nice, and makes it easy to handle frame delaying,
> reordering. But what happens at skipped/dropped frames? PTS info also gets
> dropped, so we end up guessing skipped time at the a-v sync core.
> The issue is already visible when playing that 405*.avi sample from mphq ftp.
> (it begins with around 4 seconds of zero (black, dropped) frames)
> I have 2 ideas to solve this, none of them is nice ;(
> method 1:
> always return an mpi, never return NULL (except for errors, maybe).
> also introduce a new mpi type, called NULL or ZERO.
> it means it only holds 'the place of picture', ie. timestamp and frame
> counter, but no actual image data.
> method 2:
> create a pts_info structure, and use it to hold PTS info through filters.
> it needs filter API changes, and i have no nice idea how to store/pass
> this data. (the function return value is mpi or NULL, so it could be
> an offset-type parameter like process_image(mpi,&ptsinfo) maybe, but
> it makes frame delaying/reordering (including attaching ptsinfo to
> mpi frames) tricky)
> any better ideas?
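[A rough sketch of what method 1's placeholder frame might look like; the field and enum names here are invented for illustration, not actual G2 API:]

```c
/* Hypothetical sketch of method 1: a "NULL/ZERO" image type that holds
 * 'the place of picture' (timestamps, frame position) but no pixel data.
 * All names are invented, not real mp_image_t fields. */
enum mpi_kind { MPI_NORMAL, MPI_NULL };

typedef struct mp_image {
    enum mpi_kind kind;       /* MPI_NULL: timing only, no image data  */
    double pts;               /* absolute timestamp                    */
    double relative_pts;      /* time since the previous frame         */
    unsigned char *planes[4]; /* all NULL for an MPI_NULL frame        */
} mp_image_t;
```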
IMO it should be much simpler than this. If the frames are being
dropped at the codec/demuxer end (because of 0-byte skipped frames in
.avi container and such, or nonrecoverable decoding errors or
whatever) then the codec/demuxer layer or the wrapper between codec
and vf layer should just adjust the relative_pts of the next frame it
sends out to account for any dropped frames in between.
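Something like this (a minimal sketch with invented names, just to show the accounting -- the real wrapper would of course deal in mp_image_t):

```c
/* Sketch of the codec/demuxer-side wrapper idea: accumulate the
 * relative_pts of dropped/skipped frames and fold it into the next
 * real frame sent into the filter chain, so the vf layer never sees
 * a timing gap. Names are hypothetical. */
typedef struct frame {
    int has_image;       /* 0 for a 0-byte skipped or undecodable frame */
    double relative_pts; /* time since the previous frame               */
} frame_t;

static double pending_pts = 0.0;

/* Returns 1 if the frame should be passed on to the filter chain. */
static int wrap_frame(frame_t *f)
{
    if (!f->has_image) {
        pending_pts += f->relative_pts; /* remember the dropped time   */
        return 0;                       /* nothing to send downstream  */
    }
    f->relative_pts += pending_pts;     /* account for drops in between */
    pending_pts = 0.0;
    return 1;
}
```

So three 40ms frames where the first two are dropped come out as one frame with relative_pts = 120ms.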
[Hmm, this is another reason why relative_pts is much better than
duration, since you can't lengthen the duration of the previous frame
before knowing there'll be dropped frames after it. On the other hand,
if you have both duration and relative_pts, which may be inconsistent
when frames are dropped, that could be helpful, since it would let
filters know both the original *intended* duration of a frame, and how
long it actually has to stay on the screen before the next frame is
shown, due to drops.]
On the other hand, if the frames are being dropped by a filter, it
seems to me that the filter should be responsible for pts accounting.
This is already the case for filters like inverse telecine that don't
actually 'drop' frames, but which have a non-1-to-1 correspondence
between input and output frames, so it seems like it'd be ok for other
filters (dint, decimate, etc.) which drop frames to do the same.
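For a decimate-type filter, the accounting would look something like this (hypothetical names; real filters get mp_image_t, not this toy struct):

```c
/* Sketch of a filter doing its own pts accounting: a decimate-style
 * filter that drops every second input frame and charges the dropped
 * frame's relative_pts to the next frame it passes on, so downstream
 * timing stays correct. All names are invented. */
typedef struct vframe {
    double relative_pts; /* time since the previous frame */
} vframe_t;

static int frame_no = 0;
static double carried = 0.0;

/* Returns 1 if the frame is passed down the chain, 0 if dropped. */
static int decimate_put(vframe_t *f)
{
    if (frame_no++ % 2 == 1) {      /* drop every second frame      */
        carried += f->relative_pts; /* but keep its timing          */
        return 0;
    }
    f->relative_pts += carried;     /* charge dropped time to the   */
    carried = 0.0;                  /* surviving frame              */
    return 1;
}
```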
BTW, I was thinking that (non-hard) framedrop for slow cpu and for
output to fixed-fps container (for encoding) or vo should be an actual
filter, rather than just part of the player/encoder loop. Then the
user could insert it at an arbitrary position in the chain --
certainly after inverse telecine or temporal noise filters, but before
expensive swscaler, etc.
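The core decision such a framedrop filter makes could be as simple as this (a sketch under invented names, assuming absolute pts in seconds and a fixed output interval):

```c
/* Sketch of a fixed-fps framedrop filter: keep one input frame per
 * output slot and drop inputs that arrive before the next slot is
 * due. Hypothetical names, not real vf API. */
static double next_out = 0.0;

/* Returns 1 to keep the frame, 0 to drop it.
 * out_interval = 1.0 / target_fps. */
static int keep_frame(double pts, double out_interval)
{
    if (pts + 1e-9 < next_out)
        return 0;                  /* before the next slot: drop    */
    next_out = pts + out_interval; /* schedule the next output slot */
    return 1;
}
```

E.g. 100fps input against a 50fps target keeps every second frame.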