[FFmpeg-devel] [PATCH] libvpx: alt reference frame / lag
John Koleszar
jkoleszar
Wed Jun 16 20:52:17 CEST 2010
On Wed, Jun 16, 2010 at 12:56 PM, Reimar Döffinger
<Reimar.Doeffinger at gmx.de> wrote:
> On Wed, Jun 16, 2010 at 11:06:27AM -0400, John Koleszar wrote:
>> Consider a packing that required you to put a B-frame into the same
>> packet as a P-frame. Every disadvantage I can think of for such a
>> system applies to putting ARFs into the same packet as another frame.
>> You'd need non-trivial queuing on the encoder, which introduces
>> latency. You'd need asynchronous or partial decoding on the decoder,
>> which may mean an unusual API or accepting additional
>> latency/jitter. Applications that want to get at the frames
>> individually would need to duplicate the sort of parsing that's
>> usually done by a demuxer.
>
> What are you talking about? A P-frame is always displayed,
> an ARF is never displayed (there is no reason to even call it a frame),
> thus _not a single one_ of the problems of packed B-frames applies.
> Yes, having the ARF separate means you have a chance of cannibalizing
> decoding time from the previous frame; however, in the worst
> case the existence of ARFs means your decoder must be able to handle
> twice the frame rate and twice the bitrate of a stream without them.
> No amount of regrouping the data is going to change that (as said,
> for the worst case).
>
The bitrate doesn't change because of the ARFs. Obviously you're going
to spend the same number of cycles decoding the data; it's just a
question of how you break it up. Allowing the application to regain
control more frequently is always a good thing, especially for
single-threaded applications on embedded platforms.
These frames are frames in every sense except that they're not
displayed on their own. They're not just a fancy header. Here's an
example: you can have an ARF that's taken from some future frame and
not displayed. Then later, when that source frame's PTS is reached,
the encoder codes a non-ARF frame that has no residual data at all,
effectively a header saying "present the ARF buffer now." Which packet
do you call the "frame" and which the "header" in that case?
>> A packet stream is a clean abstraction that everybody
>> understands, the only twist here is that not all packets are
>> displayed.
>
> That argument works just as well for claiming that e.g. for JPEG
> the SOI, EOI etc. markers should each be in a separate packet,
> or that for H.264 each slice should go into its own packet; after
> all, someone might want to decode only the middle slice for some
> reason.
That data is all related to the same frame. An ARF is not necessarily
related to the frame preceding or following it. There are existing
applications that very much care about the contents of each reference
buffer and what's in each packet; this isn't a hypothetical like
decoding a single slice. If you want to take the argument ad
absurdum, you could say every bit should go into its own packet.
But all those bits, and thus packets, are related, so splitting them
up would be silly. ARFs are independent entities, so it's not
abusing the abstraction to argue they belong in their own packets.
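For what it's worth, each packet even identifies itself: the VP8
frame tag puts the show_frame flag in its first byte, so a
packet-level tool can separate ARFs from displayed frames without
running a decoder at all. A sketch, going from my reading of the
frame tag layout (frame type in bit 0, version in bits 1-3,
show_frame in bit 4):

#include <stdint.h>
#include <stddef.h>

/* Peek at a VP8 packet's frame tag. Returns 1 if the packet is a
 * displayed frame, 0 for an invisible (ARF) packet. */
static int vp8_packet_is_displayed(const uint8_t *buf, size_t size)
{
    if (size < 1)
        return 0;
    return (buf[0] >> 4) & 1;
}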