[FFmpeg-devel] [PATCH] libvpx: alt reference frame / lag
John Koleszar
jkoleszar
Tue Jun 15 18:18:29 CEST 2010
On Mon, Jun 14, 2010 at 9:25 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
> On Thu, Jun 10, 2010 at 12:03:49PM -0400, John Koleszar wrote:
>> On Thu, Jun 10, 2010 at 10:59 AM, Michael Niedermayer <michaelni at gmx.at> wrote:
>> > On Thu, Jun 10, 2010 at 09:49:21AM -0400, John Koleszar wrote:
>> >> On Wed, Jun 9, 2010 at 8:27 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
>> >> > On Wed, Jun 09, 2010 at 12:08:59PM -0400, James Zern wrote:
>> >> >> On Tue, Jun 8, 2010 at 22:19, Michael Niedermayer <michaelni at gmx.at> wrote:
>> >> >> > On Tue, Jun 08, 2010 at 06:44:42PM -0400, James Zern wrote:
>> >> >> >> Index: libavcodec/libvpxenc.c
>> >> >> >> ===================================================================
>> >> >> >> --- libavcodec/libvpxenc.c    (revision 23540)
>> >> >> >> +++ libavcodec/libvpxenc.c    (working copy)
>> >> >> >> @@ -218,11 +218,21 @@ static av_cold int vp8_init(AVCodecConte
>> >> >> >>      }
>> >> >> >>      dump_enc_cfg(avctx, &enccfg);
>> >> >> >>
>> >> >> >> +    /* With altref set an additional frame at the same pts may be produced.
>> >> >> >> +       Increasing the time_base gives the library a window to place these frames
>> >> >> >> +       ensuring strictly increasing timestamps. */
>> >> >> >> +    if (avctx->flags2 & CODEC_FLAG2_ALT_REF) {
>> >> >> >> +        avctx->ticks_per_frame = 2;
>> >> >> >> +        avctx->time_base       = av_mul_q(avctx->time_base,
>> >> >> >> +                                          (AVRational){1, avctx->ticks_per_frame});
>> >> >> >> +    }
>> >> >> >
>> >> >> > this looks and sounds wrong
>> >> >> > the way divx/xvid packed b frames are handled is much saner
>> >> >> >
>> >> >> I'll have to have a closer look at that. From what I remember, these
>> >> >> were flagged within the container with each frame having a new header
>> >> >> to allow the frames to be broken up by the decoder, correct?
>> >> >
>> >> > the primarily relevant part is that it does not have more frames at container
>> >> > level than there are output by the decoder.
>> >> > having the number of frames input to the decoder differ from the output could
>> >> > cause some problems. Similarly for the encoder. It's likely not unsolvable
>> >> > if it's necessary, but as this is not common it likely would break a few
>> >> > applications.
>> >> > besides, strictly speaking, frames that are not presented to the user have no
>> >> > presentation time
>> >> >
>> >>
>> >> There needs to be some framing applied to be able to separate the
>> >> frames, so I don't see anything wrong with using the container for
>> >> that, especially since it has a notion of invisible frames. Using a
>> >> different packing would be a topic for webm-discuss. In any case, this
>> >> time twiddling belongs in the muxer, IMO. I don't think there's
>> >> anything wrong with the encoder producing monotonically increasing
>> >> timestamps, but if the container needs them to be strictly increasing,
>> >> that's the muxer's problem IMO. I've yet to hear any real explanation
>> >> for why we can't just leave them as monotonically increasing in the
>> >> container, but I don't know anything about mkv/webm.
>> >
>> > the problem isn't mkv, the problem is many other containers. If you
>> > don't want to support them, it might be a matter of disabling the
>> > strict monotonicity check and making triple sure that every single
>> > frame has a timestamp, that is, that the player doesn't end up producing
>> > timestamps based on last+1/fps or other formulas that don't expect
>> > 2 frames with the same timestamp.
>> >
>>
>> I'm not sure what you're referring to by "disabling the strict
>> monotonicity check." The libvpx encoder should produce monotonically
>> increasing timestamps in any case by nature of processing the frames
>> only in order; if it doesn't, that's a bug. The timestamps will be
>> strictly increasing if the timebase is of higher resolution than the
>> frame rate.
>
> the timebase is set by the user of libavcodec
> I don't think the current API allows an encoder to adjust it
>
>
I don't think the encoder should adjust the timebase specified by the
user either. I'm just describing what the behavior will be at
different resolutions.
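To make that concrete (my numbers, purely illustrative): at 30 fps with
time_base = 1/30 there is exactly one pts slot per input frame, so an
altref packet could only repeat the visible frame's timestamp; with
ticks_per_frame = 2 and time_base = 1/60 there are two slots per input
frame, so the pair can get distinct, strictly increasing timestamps.
One possible placement, as a sketch:

#include <stdint.h>

/* Purely illustrative, not from the patch: with two pts ticks per
 * input frame, the invisible altref frame and the visible frame it
 * belongs to can be given distinct timestamps. One possible placement
 * for input frame n: */
static int64_t altref_pts(int64_t n)  { return 2 * n; }     /* invisible, never shown */
static int64_t visible_pts(int64_t n) { return 2 * n + 1; } /* the displayed frame    */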
>> The only requirement for the libvpx decoder is that the
>> frames be provided in the same order as they were produced, with the
>> same packetization. The timestamps are ignored. If you want to define
>> an encoding for combining multiple frames into a single packet, I guess
>> it'd be useful to have a common way of doing this, but there's not a
>> lot in the bitstream that can really help here. I'm not clear -- are
>> you suggesting that these frames be combined into one packet in all
>> cases, or only if the container doesn't support it?
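(For context, a minimal decode loop against the libvpx API looks
roughly like the sketch below; error handling is trimmed, and
get_next_packet() is a made-up stand-in for whatever feeds it packets.
The point is that nothing timestamp-shaped ever reaches the decoder,
only packets in encode order, and an invisible altref packet simply
produces no output image.)

#include <stdint.h>
#include <vpx/vpx_decoder.h>
#include <vpx/vp8dx.h>

/* Sketch only: packets must arrive in the order the encoder produced
 * them, one vpx_codec_decode() call per packet; no timestamps. */
static int decode_all(int (*get_next_packet)(const uint8_t **buf,
                                             unsigned int *size))
{
    vpx_codec_ctx_t codec;
    const uint8_t *buf;
    unsigned int size;

    if (vpx_codec_dec_init(&codec, vpx_codec_vp8_dx(), NULL, 0))
        return -1;

    while (get_next_packet(&buf, &size) == 0) {
        vpx_codec_iter_t iter = NULL;
        const vpx_image_t *img;

        if (vpx_codec_decode(&codec, buf, size, NULL, 0))
            break;
        /* An invisible (altref) packet yields no image here. */
        while ((img = vpx_codec_get_frame(&codec, &iter)) != NULL)
            ;  /* hand img to the caller/display */
    }
    vpx_codec_destroy(&codec);
    return 0;
}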
>
> I am suggesting to avoid doing something that no other codec does in
> practice, AFAIK.
> I also would suggest you don't define multiple different bitstream formats
> where the choice depends on the container. This tends to end messy:
> first, it's messy because the encoder might not know what it will be stored in,
> and second, one might want to stream copy without reencoding from one
> container to another.
>
The encoder doesn't and shouldn't know what the user is going to do
with the data. People can and have defined their own transports.
Framing aside, I don't think it makes sense to combine the two packets
because they can be semantically different. One of the two could be
droppable, for example. Many applications want to treat these packets
independently (consider streaming, video conferencing). One example
might be to apply different error correction to the two packets. On
the decode side, it's useful for the application to be able to do work
between decoding the two packets. Stream copy shouldn't be a problem
as long as neither transport makes an assumption about the frame being
shown. You could support other containers with a bitstream filter
arrangement that knows how to split/combine packets. I'm not sure how
much value there is in trying to support these more restrictive
containers/transports. It's interesting in the academic sense, but how
practical is it?
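For what it's worth, such a filter wouldn't need the container's help
to tell the two packet kinds apart: whether a VP8 packet is invisible
can be read straight from the 3-byte frame tag at the start of each
packet. A sketch, based on my reading of the bitstream format (treat
the exact bit positions as my assumption):

#include <stddef.h>
#include <stdint.h>

/* Sketch: inspect the VP8 uncompressed frame tag. Returns 1 if the
 * frame is meant to be displayed, 0 for an invisible (altref) frame,
 * -1 if the buffer is too short to contain the 3-byte tag. */
static int vp8_frame_is_visible(const uint8_t *buf, size_t size)
{
    uint32_t tag;

    if (size < 3)
        return -1;
    tag = buf[0] | (buf[1] << 8) | ((uint32_t)buf[2] << 16);
    return (tag >> 4) & 1;  /* bit 4 of the tag: show_frame flag */
}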