[FFmpeg-devel] [PATCH] libvpx: alt reference frame / lag
Reimar Döffinger
Reimar.Doeffinger
Tue Jun 15 23:14:53 CEST 2010
On Tue, Jun 15, 2010 at 04:23:27PM -0400, James Zern wrote:
> On Tue, Jun 15, 2010 at 14:51, Reimar Döffinger
> <Reimar.Doeffinger at gmx.de> wrote:
> > On Tue, Jun 15, 2010 at 12:18:29PM -0400, John Koleszar wrote:
> >> The encoder doesn't and shouldn't know what the user is going to do
> >> with the data. People can and have defined their own transports.
> >>
> >> Framing aside, I don't think it makes sense to combine the two packets
> >> because they can be semantically different. One of the two could be
> >> droppable, for example. Many applications want to treat these packets
> >> independently (consider streaming, video conferencing). One example
> >> might be to apply different error correction to the two packets.
> >
> > Could maybe someone explain what exactly this is about?
> You seem to be talking about two packets with the same time stamp
> that a decoder might want to handle differently?
> That makes no sense to me: if they have the same time stamp, they are
> supposed to be displayed at the same time, which is impossible, so the
> decoder really should output only one of them...
> >
> Essentially with this option the encoder will inject another reference
> frame at the same timestamp as the dependent frame (ignoring time_base
> changes) [1]. The question is how best to mux this frame, as it has
> to be decoded prior to the one following it but won't produce a frame
> on its own.
Well, so does the frame header. Or a lot of other data.
It may be convenient on the decoder side to treat it like a frame,
but this is just data that is part of one frame, not a separate frame.
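For illustration, a rough (untested) sketch of how a mux layer could
glue libvpx's output back together, using the usual
vpx_codec_get_cx_data() loop; mux_write_frame(), MAX_FRAME_SIZE and the
already-initialized encoder context "codec" are placeholders:

#include <stdint.h>
#include <string.h>
#include <vpx/vpx_encoder.h>

const vpx_codec_cx_pkt_t *pkt;
vpx_codec_iter_t iter = NULL;
static uint8_t buf[MAX_FRAME_SIZE];
size_t buf_len = 0;

while ((pkt = vpx_codec_get_cx_data(&codec, &iter))) {
    if (pkt->kind != VPX_CODEC_CX_FRAME_PKT)
        continue;
    memcpy(buf + buf_len, pkt->data.frame.buf, pkt->data.frame.sz);
    buf_len += pkt->data.frame.sz;
    if (pkt->data.frame.flags & VPX_FRAME_IS_INVISIBLE)
        continue; /* alt-ref data: hold it for the next visible frame */
    /* visible frame: everything accumulated so far is one unit */
    mux_write_frame(buf, buf_len, pkt->data.frame.pts);
    buf_len = 0;
}

The invisible alt-ref packet is simply held back and written out as a
prefix of the next visible frame, i.e. as part of that frame's data.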
> Other options seem to be framing the two packets or perhaps
> putting more dependence on the container, e.g., the invisible bit in
> the SimpleBlock header.
Why do you need anything at all? You have one single frame, consisting
of one part that instructs the decoder how to build a special reference
and another part that contains the "real" frame.
I can't see why you would need framing, unless there's an encoder that
likes to put random data in between those two parts, data whose length
is impossible to figure out...
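And on the decode side, assuming the decoder accepts both parts in one
buffer, a rough sketch (untested; "decoder" is an initialized
vpx_codec_ctx_t, die() and show_frame() are placeholders):

#include <vpx/vpx_decoder.h>

vpx_codec_iter_t iter = NULL;
vpx_image_t *img;

if (vpx_codec_decode(&decoder, buf, buf_len, NULL, 0))
    die("decode failed");
while ((img = vpx_codec_get_frame(&decoder, &iter)))
    show_frame(img);

The alt-ref part only updates a reference buffer, so
vpx_codec_get_frame() yields exactly one displayable image for the
merged buffer.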