[FFmpeg-devel] [RFC] libavfilter-soc: direct rendering
Stefano Sabatini
stefano.sabatini-lala
Sun Jun 7 16:10:38 CEST 2009
On date Monday 2009-06-01 16:59:55 +0200, Michael Niedermayer encoded:
> On Sun, May 31, 2009 at 05:09:51PM +0200, Vitor Sessak wrote:
> > Michael Niedermayer wrote:
> >> On Sat, May 30, 2009 at 09:31:06PM +0200, Vitor Sessak wrote:
> >>> Michael Niedermayer wrote:
> >>>> On Sat, May 30, 2009 at 06:37:55PM +0200, Vitor Sessak wrote:
> >>>>> Michael Niedermayer wrote:
> >>>>>> On Sat, May 30, 2009 at 06:25:37PM +0200, Vitor Sessak wrote:
> >>>>>>> Michael Niedermayer wrote:
> >>>>>>>> On Sat, May 30, 2009 at 03:44:00PM +0200, Vitor Sessak wrote:
> >>>>>>>>> Michael Niedermayer wrote:
> >>>>>>>>>> On Fri, May 22, 2009 at 02:31:57PM +0200, Vitor Sessak wrote:
> >>>>>>>>>>> Stefano Sabatini wrote:
> >>>>>>>>>>>> On date Thursday 2009-05-21 23:20:51 +0200, Stefano Sabatini
> >>>>>>>>>>>> encoded:
> >>>>>>>>>>>>> On date Wednesday 2009-05-20 20:42:21 +0200, Vitor Sessak
> >>>>>>>>>>>>> encoded:
> >>>>>>>>>>>> [...]
> >>>>>>>>>>>>>> I suppose you didn't test the changes to ffmpeg.c, unless you
> >>>>>>>>>>>>>> forgot to attach the patch for vsrc_buffer.c. I imagine that
> >>>>>>>>>>>>>> here handling avfilter_request_frame() without memcpy'ing the
> >>>>>>>>>>>>>> whole frame (as is done in ffplay.c) would be non-trivial.
> >>>>>>>>>>>> Attached is an updated patch with the missing changes to
> >>>>>>>>>>>> vsrc_buffer.c.
> >>>>>>>>>>>> Can someone suggest how it would be possible to avoid the
> >>>>>>>>>>>> initial frame -> picref memcpy?
> >>>>>>>>>>> What non-lavfi-patched ffmpeg.c does now is:
> >>>>>>>>>>>
> >>>>>>>>>>> 1- allocs a frame with the padding specified by command-line opts
> >>>>>>>>>>> -padXXXX
> >>>>>>>>>>> 2- decodes the frame to this buffer. Note that this buffer might
> >>>>>>>>>>> need to be reused for ME.
> >>>>>>>>>>>
> >>>>>>>>>>> what I suggest:
> >>>>>>>>>>>
> >>>>>>>>>>> a) For the first frame
> >>>>>>>>>>> 1- ffmpeg.c allocs a frame with no padding.
> >>>>>>>>>>> 2- libavfilter requests a frame with padding px, py.
> >>>>>>>>>>> 3- ffmpeg.c allocs a frame with padding px, py, copies the frame
> >>>>>>>>>>> to it and replaces (freeing) the old frame with the new one
> >>>>>>>>>>> 4- ffmpeg.c passes the new frame to the filter framework
> >>>>>>>>>>>
> >>>>>>>>>>> b) For the next frame
> >>>>>>>>>>> 5- ffmpeg.c decodes the frame with padding px, py
> >>>>>>>>>>> 6- libavfilter requests a frame with padding px2, py2
> >>>>>>>>>>> 7- if (px2 > px || py2 > py) alloc another frame and memcpy the
> >>>>>>>>>>> pic to it (and set px = px2; py = py2;). If not, just send the
> >>>>>>>>>>> frame pointer to libavfilter
> >>>>>>>>>> 1 - the decoder, which is pretty much a filter with no input,
> >>>>>>>>>> requests a buffer from the next filter.
> >>>>>>>>>> 1b- the next filter can, in principle, pass this request up to
> >>>>>>>>>> the video output device, or return a buffer. If this request
> >>>>>>>>>> passes through a "pad" filter it is modified accordingly.
> >>>>>>>>>> 2 - the decoder decodes into this frame.
> >>>>>>>>>> Which part of that are you not understanding?
> >>>>>>>>> I was probably missing that there is no decoder that needs not
> >>>>>>>>> only to preserve, but also to output to, the same data pointers
> >>>>>>>>> as the last frame.
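Before going on, here is a minimal sketch of the reallocation check in
Vitor's step 7 above, just to fix the idea; the struct, the helper name
and the flat copy are illustrative assumptions, not actual ffmpeg.c code:

#include <stdlib.h>
#include <string.h>

typedef struct PaddedFrame {
    unsigned char *data;
    int w, h;           /* active picture size */
    int px, py;         /* current horizontal/vertical padding */
} PaddedFrame;

/* grow the decode buffer only when the requested padding (px2, py2)
   exceeds what the frame already has */
static int grow_frame(PaddedFrame *f, int px2, int py2)
{
    unsigned char *buf;

    if (px2 <= f->px && py2 <= f->py)
        return 0;                          /* padding fits, reuse frame */

    buf = malloc((size_t)(f->w + 2*px2) * (f->h + 2*py2));
    if (!buf)
        return -1;
    /* a real implementation would copy plane by plane, respecting
       linesizes; a flat copy is only a placeholder here */
    memcpy(buf, f->data, (size_t)(f->w + 2*f->px) * (f->h + 2*f->py));
    free(f->data);
    f->data = buf;
    f->px   = px2;
    f->py   = py2;
    return 0;
}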
I'll try to sketch the system as I currently understand it from the
discussion.
An avfilter_get_video_buffer(link, dst->min_perms) request should be
passed up to the sink of the chain. If the destination pad of the link
defines a get_video_buffer() callback, then that is used; otherwise
avfilter_get_video_buffer() is called recursively on the first output
link of the filter. If the filter has no output links, then
avfilter_default_get_video_buffer() is called, which allocates a video
buffer using av_malloc() and friends.
The avfilter_get_video_buffer() request may pass through some filter
which changes the requested size of the video buffer, for example a
pad filter.
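To make the recursion concrete, here is a minimal self-contained sketch
of the request walk; the types, fields and helper names below are
hypothetical stand-ins for the lavfi-soc ones, not the actual API:

#include <stdlib.h>

typedef struct PicBuf { int w, h; } PicBuf;

typedef struct Filter Filter;

typedef struct Link {
    Filter *dst;        /* filter on the destination side of the link */
    int w, h;           /* picture size negotiated on this link */
} Link;

struct Filter {
    /* optional override, like a get_video_buffer() on an input pad */
    PicBuf *(*get_video_buffer)(Link *link, int perms);
    Link *out;          /* first output link, NULL for a sink */
};

/* like avfilter_default_get_video_buffer(): plain allocation */
static PicBuf *default_get_video_buffer(Link *link, int perms)
{
    PicBuf *buf = malloc(sizeof(*buf));
    if (buf) {
        buf->w = link->w;
        buf->h = link->h;
    }
    return buf;
}

/* like avfilter_get_video_buffer(): forward the request to the sink */
PicBuf *get_video_buffer(Link *link, int perms)
{
    Filter *dst = link->dst;
    if (dst->get_video_buffer)      /* destination pad supplies a buffer */
        return dst->get_video_buffer(link, perms);
    if (dst->out)                   /* recurse towards the sink */
        return get_video_buffer(dst->out, perms);
    return default_get_video_buffer(link, perms);
}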
Let's consider this example:
+----------+            +----------+            +----------+
|          |   link1    |          |   link2    |          |
|    in    +------------+   pad    +------------+   out    |
| (source) |  (h1, w1)  | (filter) |  (h2, w2)  |  (sink)  |
+----------+            +----------+            +----------+
The "in" source invokes avfilter_get_video_buffer() on link1, which
calls avfilter_get_video_buffer() on link2, since out doesn't define a
get_video_buffer() in its input pad and it has no output links, then
avfilter_default_get_video_buffer() is called.
Now avfilter_default_get_video_buffer() allocates a buffer with size
(h2, w2), which also takes the padding area into account.
This can be achieved by defining in the pad filter a config_links
callback in the output pad, which redefines the h2 and w2 sizes using
the relations:
h2 = h1 + padtop + padbottom;
w2 = w1 + padleft + padright;
So in the end the in source will get an AVFilterPicRef with size (h2,
w2). Since in is supposed to write only to the non-padded area, it has
to know about the padding, so we need some mechanism to propagate this
information.
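For instance, the output pad configuration might look roughly like this
(PadContext, the field names and the callback signature are assumptions,
not the actual lavfi-soc interface):

typedef struct FilterLink { int w, h; } FilterLink;

typedef struct PadContext {
    int padtop, padbottom, padleft, padright;
} PadContext;

/* hypothetical config_links callback on the pad filter's output pad:
   derive (h2, w2) on the output link from (h1, w1) on the input link */
static int pad_config_output(const FilterLink *inlink, FilterLink *outlink,
                             const PadContext *pad)
{
    outlink->h = inlink->h + pad->padtop  + pad->padbottom; /* h2 */
    outlink->w = inlink->w + pad->padleft + pad->padright;  /* w2 */
    return 0;
}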
Now let's consider a more complicated example:
+--------+          +--------+          +--------+          +--------+          +--------+
|        |  link1   |        |  link2   |        |  link3   |        |  link4   |        |
|   in   +----------+  pad1  +----------+ scale  +----------+  pad2  +----------+  out   |
|(source)| (h1, w1) |(filter)| (h2, w2) |(filter)| (h3, w3) |(filter)| (h4, w4) | (sink) |
+--------+          +--------+          +--------+          +--------+          +--------+
In this case the scale input pad will define a get_video_buffer()
callback which allocates a buffer (using
avfilter_default_get_video_buffer()), and its start_frame() function
will request a frame from the rest of the chain.
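In other words, scale stops the buffer-request recursion at its input
and issues a fresh request downstream; a purely illustrative sketch,
reusing the hypothetical PicBuf/Link types and helpers from the first
sketch above:

/* the scaler cannot render into the downstream buffer (the output
   geometry differs), so its input pad allocates a buffer of its own
   instead of forwarding the request */
static PicBuf *scale_get_video_buffer(Link *inlink, int perms)
{
    return default_get_video_buffer(inlink, perms);
}

static void scale_start_frame(Link *inlink, PicBuf *in)
{
    /* request the output buffer from the rest of the chain, so that
       pad2's padding is applied downstream of the scaler */
    PicBuf *out = get_video_buffer(inlink->dst->out, 0);
    /* ... scale from in into out, then pass out along the chain ... */
    (void)in;
    (void)out;
}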
When a frame is requested from the in source through
avfilter_request_frame(), it will request a buffer from the filter
chain using avfilter_get_video_buffer(), then update it using the
padding information (adjusting the picref->data offsets), and finally
call avfilter_start_frame(), passing a picture reference to the
following filter.
Padding information (padleft, padright, padtop, padbottom) may be
stored in the link itself (for example, set when configuring the
filters).
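As a rough illustration of the offset update (the field names are
assumed, and this naive version supposes one byte per pixel and no
chroma subsampling; a real picref would need per-plane handling):

#include <stdint.h>

/* shift the plane pointers of the padded buffer so that the source
   writes into the active (non-padded) area */
static void apply_padding_offsets(uint8_t *data[4], const int linesize[4],
                                  int padtop, int padleft)
{
    int i;
    for (i = 0; i < 4; i++)
        if (data[i])
            data[i] += padtop * linesize[i] + padleft;
}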
This scheme should allow for direct rendering, since the out sink may
directly provide a buffer for the source filter to write into,
avoiding the initial memcpy.
Would such a scheme fit well?
Regards.