[FFmpeg-devel] Size of bufferqueue (was: [PATCH 2/2] lavfi: make filter_frame non-recursive)

Nicolas George george at nsup.org
Sat Dec 24 01:42:29 EET 2016


On tridi, 3 Nivôse, year CCXXV, Marton Balint wrote:
> I guess we could buffer the undecoded packets instead of the decoded frames
> if there is a higher-than-usual delay between the streams. Is this also your
> plan?

Well, at some point in the far future I would have libavfilter capable
of handling AVMEDIA_TYPE_DATA frames and codecs as filters. If that
happens, then tweaking the scheduling to prioritize consuming large
frames first would be useful.

But that is not the issue at all.

The problem with bufferqueue is this:

For normal cases, cases that are supposed to work, the required depth of
bufferqueue is bounded. But with some codecs (those with huge frames
that get split, for example) or formats (depending on the A-V muxing
delay), it can sometimes be quite large. You observed that 64 is not
enough for your use case but 256 is. Probably, someone somewhere needs
1024 or more.

Increasing the size of the queue wastes a little memory. Not much, but
still not satisfactory. The obvious solution is to make it dynamic.
Inserting the fifo filter as input achieves this, because fifo has an
unlimited buffer, not implemented using bufferqueue. But fifo breaks the
scheduling because the buffer is hidden.

The new data structure I added to AVFilterLink a few days ago is exactly
what we need: it gives us a dynamic FIFO that works fine with the
scheduling. It still requires some code to permit filters to use it
directly (done, in my work tree), and then the adaptation of amerge and
the other filters to use it (not yet done).

But there is a catch: sometimes people do something crazy, often
inadvertently, and that causes unlimited amounts of frames to accumulate
in the filter's input before any processing can be done.

Currently, when that happens, they see "buffer queue overflow" and the
encoding stops, and they come and ask for help on the mailing-list.

If the queue is not bounded, it will grow, eat all memory, then start
filling the swap, until the OOM-killer intervenes. Not good, really not
good.

My solution: make the queue unbounded in principle, but keep stats on
its use and add a configurable limit. Quite obvious, actually.
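The idea could look roughly like this; a minimal sketch only, with
illustrative names, not the actual lavfi API:

```c
/* Sketch: an unbounded frame queue that tracks its own size and fails
 * cleanly once a configurable limit is reached, instead of growing
 * until the OOM-killer intervenes. All names are hypothetical. */
#include <stdlib.h>

typedef struct FrameQueue {
    void   **frames;     /* dynamically grown array of frame pointers */
    size_t   count;      /* current number of queued frames (the stat) */
    size_t   alloc;      /* allocated slots */
    size_t   max_frames; /* configurable limit; 0 = unlimited */
} FrameQueue;

static int fq_push(FrameQueue *q, void *frame)
{
    if (q->max_frames && q->count >= q->max_frames)
        return -1; /* report "buffer queue overflow" cleanly */
    if (q->count == q->alloc) {
        size_t na = q->alloc ? q->alloc * 2 : 16;
        void **nf = realloc(q->frames, na * sizeof(*nf));
        if (!nf)
            return -1;
        q->frames = nf;
        q->alloc  = na;
    }
    q->frames[q->count++] = frame;
    return 0;
}
```

The queue grows on demand like any dynamic array; the only addition
over a plain FIFO is the count check against the configured ceiling.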

I have two bonuses in my plans.

First, make the stats global to the filter graph. Instead of limiting to
(numbers completely arbitrary) 1000 frames on each of the 2 inputs of
each of the 4 instances of amerge (limits that would never all be
reached, since amerge can process as soon as all its inputs have
frames), limit to 8000 frames on the whole filter graph.

Second, keep stats on the approximate memory use of the frames, and set
a limit on that. That way, we can allow thousands of 16-sample frames,
but bail after only a dozen 4K video frames.
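A sketch of that second bonus, accounting for approximate frame memory
rather than frame count; again, every name here is hypothetical:

```c
/* Sketch: a graph-wide memory budget. Many tiny audio frames fit
 * easily, while a handful of 4K video frames trips the limit.
 * Illustrative only, not the actual lavfi structures. */
#include <stddef.h>

typedef struct GraphStats {
    size_t mem_used;  /* approximate bytes queued in the whole graph */
    size_t mem_limit; /* configurable; 0 = unlimited */
} GraphStats;

/* Rough per-frame footprint; video assumes 8-bit 4:2:0 here. */
static size_t approx_frame_size(int is_video, int w, int h,
                                int nb_samples, int channels,
                                int bytes_per_sample)
{
    if (is_video)
        return (size_t)w * h * 3 / 2;
    return (size_t)nb_samples * channels * bytes_per_sample;
}

static int graph_account(GraphStats *g, size_t frame_size)
{
    if (g->mem_limit && g->mem_used + frame_size > g->mem_limit)
        return -1; /* would exceed the graph-wide memory budget */
    g->mem_used += frame_size;
    return 0;
}
```

With a budget of, say, 16 MB, a 16-sample stereo float frame costs 128
bytes and thousands of them fit, while a single 3840x2160 frame already
costs about 12 MB, so the second one is refused.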

Regards,

-- 
  Nicolas George

