[FFmpeg-devel] [PATCH] libavu: add pkt_timebase to AVFrame.
Michael Niedermayer
michaelni at gmx.at
Sun Jul 20 17:58:08 CEST 2014
On Sun, Jul 20, 2014 at 05:27:16PM +0200, wm4 wrote:
> On Sun, 20 Jul 2014 17:01:42 +0200
> Michael Niedermayer <michaelni at gmx.at> wrote:
[...]
> > also some codecs allocate multiple AVFrames with different dimensions;
> > hevc is one.
> > and hypothetical future support of things like spatial scalability
> > would also need internal buffers of different dimensions,
> > and temporal scalability could maybe slightly benefit from
> > AVFrames with a different timebase. well, maybe I am drifting too much
> > into "what if" here ... I am not sure
> >
> >
> > >
> > > Adding the timebase to AVFrame and AVPacket would be reasonable if it's
> > > guaranteed that other parts of the libraries interpret them properly.
> > > But they don't (because the timebase is a fixed constant over a stream
> > > in the first place), and adding this field would lead to a confusing
> > > situation where some parts of the code set the pkt_timebase, some maybe
> > > even interpret it, but most code silently ignores it. The API is much
> > > cleaner without this field.
> >
> > I don't disagree at all, but as it is we have these packet-related
> > fields and in half our code we cannot use / interpret them because we
> > don't know the timebase
>
> It's simple: the timebase of the AVStream of the demuxer is the
> timebase. Maybe you should remove less useful timebase fields instead.
> For example, libavcodec absolutely has no business to interpret
> timestamps (the only reason why lavc needs to deal with timestamps at
> all is frame reordering). Maybe that would be better than adding even
> more of these confusing timebases.
Motion estimation needs accurate timestamps to interpolate and
extrapolate motion vectors between frames, independent of the codec.
And video codecs starting with h263+, and in practice mpeg4, need
timestamps for direct mode MBs in B frames to interpolate MVs at the
encoder AND decoder. For the decoder they are available from the codec
bitstream.
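To make the dependence on temporal distances concrete, here is a rough
sketch of the MV scaling that temporal direct mode does (zero delta-MV
case); trb/trd and the struct are illustrative names, not lavc internals:

    #include <stdint.h>

    typedef struct MV { int x, y; } MV;

    /* trb = distance(B frame, past reference)
     * trd = distance(future reference, past reference)
     * both must be in the same time units, which is why the encoder
     * (and decoder) needs real timestamps or durations here */
    static void direct_mode_mvs(MV col, int trb, int trd, MV *fwd, MV *bwd)
    {
        fwd->x = ( trb        * col.x) / trd;
        fwd->y = ( trb        * col.y) / trd;
        bwd->x = ((trb - trd) * col.x) / trd;
        bwd->y = ((trb - trd) * col.y) / trd;
    }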
Also, mpeg1/2 has frame durations at codec level and full timestamps
at muxer level; later video codecs similarly have field repeat flags
at decoder level and no requirement for timestamps on every frame
at muxer level, so in some cases you have to combine the two if you
want an accurate timestamp for each frame.
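A minimal sketch of that combination, assuming the usual
AVFrame.repeat_pict semantics (a frame lasts 2 + repeat_pict field
periods) and a caller-supplied nominal fps; the helper name is made up:

    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    static int64_t frame_duration_in_stream_tb(AVRational fps, int repeat_pict,
                                               AVRational stream_tb)
    {
        /* one field period is fps.den / (2 * fps.num) seconds */
        AVRational field_dur = { fps.den, 2 * fps.num };
        /* rescale (2 + repeat_pict) field periods into the stream timebase */
        return av_rescale_q(2 + repeat_pict, field_dur, stream_tb);
    }

Adding that duration to a frame's pts then gives a timestamp for the
next frame where the container does not provide one.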
So if, for example, you wanted to encode some variable frame rate /
mixed telecine material to mpeg2 and not violate the spec, then setting
the field repeat flags correctly at codec level needs accurate
timestamps.
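A hand-wavy sketch of what such an encoder could do, purely
illustrative (function name and the tolerance-free comparison are
mine): for each frame, pick whichever legal duration (2 or 3 field
periods) is closer to what the container timestamps say, and set
repeat_first_field accordingly.

    #include <libavutil/common.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    static int wants_repeat_first_field(int64_t pts, int64_t next_pts,
                                        AVRational stream_tb, AVRational fps)
    {
        AVRational field_dur = { fps.den, 2 * fps.num };
        int64_t dur   = next_pts - pts;                         /* what the timestamps ask for */
        int64_t two   = av_rescale_q(2, field_dur, stream_tb);  /* plain frame */
        int64_t three = av_rescale_q(3, field_dur, stream_tb);  /* frame + repeated field */
        return FFABS(dur - three) < FFABS(dur - two);
    }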
There are also probably a few issues with subtitle decoders if
timestamps were considered opaque objects.
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
Everything should be made as simple as possible, but not simpler.
-- Albert Einstein