[FFmpeg-devel] [RFC] libavfilter-soc: direct rendering
Stefano Sabatini
stefano.sabatini-lala
Thu Jun 11 01:20:35 CEST 2009
On date Sunday 2009-06-07 16:10:38 +0200, Stefano Sabatini encoded:
> I'll try to sketch the system as I currently understand it from the
> discussion.
>
> avfilter_get_video_buffer(link, dst->min_perms)
>
> The request should be passed along the chain towards the sink. If the
> destination pad of the link defines a get_video_buffer() callback,
> then that one is used, otherwise avfilter_get_video_buffer() is
> called recursively on the first output link of the filter.
>
> If the filter has no output links, then
> avfilter_default_get_video_buffer() is called, which allocates a
> video buffer using av_malloc() and friends.
>
> The avfilter_get_video_buffer() request may pass through a filter
> which changes the size of the video buffer, for example a pad
> filter.
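The recursion described above can be sketched like this. The structs are hypothetical, much-simplified stand-ins for AVFilterLink/AVFilterPad, only to illustrate the dispatch order; the names and fields are not the real lavfi API:

```c
#include <stdlib.h>

typedef struct Link Link;
typedef struct Filter Filter;

typedef struct PicRef { int w, h; } PicRef;

struct Link {
    Filter *dst;   /* filter on the destination side of the link */
    int w, h;
    /* get_video_buffer callback of the destination input pad, if any */
    PicRef *(*get_video_buffer)(Link *link, int perms);
};

struct Filter {
    Link *out;     /* first output link; NULL for a sink */
};

/* Fallback: allocate a buffer with malloc() and friends
 * (av_malloc() in the real code). */
static PicRef *default_get_video_buffer(Link *link, int perms)
{
    PicRef *ref = malloc(sizeof(*ref));
    ref->w = link->w;
    ref->h = link->h;
    return ref;
}

/* Use the destination pad's callback if it defines one, otherwise
 * recurse on the first output link of the destination filter; a filter
 * with no output links falls back to the default allocator, so the
 * buffer is sized according to the last link of the chain. */
static PicRef *get_video_buffer(Link *link, int perms)
{
    if (link->get_video_buffer)
        return link->get_video_buffer(link, perms);
    if (link->dst->out)
        return get_video_buffer(link->dst->out, perms);
    return default_get_video_buffer(link, perms);
}
```

With a chain in -> pad -> out where no pad defines the callback, a request on the first link walks down to the sink and the buffer gets the dimensions of the last link, which is exactly the behavior discussed in the example below.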
>
> Let's consider this example:
>
> +----------+ +------------+ +-----------+
> | | link1 | | link2 | |
> | in +-------------+ pad +------------+ out |
> | (source) | (h1, w1) | (filter) | (h2, w2) | (sink) |
> +----------+ +------------+ +-----------+
>
> The "in" source invokes avfilter_get_video_buffer() on link1, which
> calls avfilter_get_video_buffer() on link2. Since out doesn't define
> a get_video_buffer() in its input pad and has no output links,
> avfilter_default_get_video_buffer() is called.
>
> Now avfilter_get_video_buffer() allocates a buffer of size (h2, w2),
> which also takes the padding area into account.
>
> This can be achieved by defining in the pad filter a config_links
> callback on the output pad, which redefines the h2 and w2 sizes
> using the relations:
>
> h2 = h1 + padtop + padbottom;
> w2 = w1 + padleft + padright;
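The relations above amount to a small computation in the output-pad configuration callback. The context struct and function name below are illustrative only, not the actual pad filter code:

```c
/* Sketch of what the pad filter's config_links callback on the output
 * pad computes; PadContext and its fields are hypothetical. */
typedef struct PadContext {
    int padtop, padbottom, padleft, padright;
} PadContext;

static void pad_config_output(const PadContext *pad,
                              int w1, int h1, int *w2, int *h2)
{
    *w2 = w1 + pad->padleft + pad->padright;
    *h2 = h1 + pad->padtop  + pad->padbottom;
}
```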
>
> So at the end the in source will have an AVFilterPicRef of size (h2,
> w2). Since in is supposed to write into the non-padded area, it has
> to know the information related to the padding, so we need some
> mechanism to propagate this information.
>
> Now let's consider a more complicated example:
>
> +--------+ +---------+ +----------+ +------+ +------+
> | | link1 | | link2 | |link3 | | link4 | |
> | in +-----------+ pad1 +----------+ scale +-------+ pad2 +-------+ out |
> |(source)| (h1, w1) |(filter) | (h2, w2) | (filter)|(h3,w3)| |(h4,w4)| |
> +--------+ +---------+ +----------+ +------+ +------+
>
> In this case the scale input pad will define a get_video_buffer
> callback which will allocate a buffer (using
> avfilter_default_get_video_buffer), and its start_frame function
> will request a frame from the rest of the chain.
>
> When the in source is requested a frame through
> avfilter_request_frame(), it will request a buffer from the filter
> chain using avfilter_get_video_buffer(), then update it using the
> padding information (to change the picref->data offsets), and
> finally call avfilter_start_frame(), passing a picture reference to
> the following filter.
>
> Padding information (padleft, padright, padtop, padbottom) may be
> stored in the link itself (for example when configuring the filters).
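Updating the picref->data offsets from the padding information boils down to pointer arithmetic on the plane pointers. This sketch handles a single 8-bit plane; for a planar YUV format the chroma planes would need the offsets scaled by the chroma subsampling factors (the struct is a simplified stand-in for AVFilterPicRef):

```c
#include <stdint.h>

/* Simplified stand-in for AVFilterPicRef, illustration only. */
typedef struct PicRef {
    uint8_t *data[4];
    int      linesize[4];
} PicRef;

/* Shift the data pointer so the source writes into the non-padded
 * area of the (larger) buffer allocated by the chain. */
static void apply_padding_offsets(PicRef *ref, int padtop, int padleft)
{
    ref->data[0] += padtop * ref->linesize[0] + padleft;
}
```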
>
> This scheme should allow for direct rendering, since the out sink
> may directly provide the source filter with a buffer to write into,
> avoiding the initial memcpy.
>
> Would such a scheme fit well?
Attached patches implement this new system; try them with:

    ffplay in.avi -vfilters "pad = exp_w=in_w+30 : exp_h=in_h + 30 : color = red,
                             pad = exp_w=in_w+30 : exp_h=in_h + 30 : color = blue,
                             pad = exp_w=in_w+30 : exp_h=in_h + 30 : color = yellow"

(No, I haven't tested them extensively.)
The attached patches *still do not implement direct rendering*; direct
rendering should be made possible using this new get_video_buffer() API.
Let me consider the specific example of ffplay.c, which currently does:

    while (!(ret = get_video_frame(priv->is, priv->frame, &pts, &pkt)))
        av_free_packet(&pkt);
    if (ret < 0)
        return -1;

    /* FIXME: until I figure out how to hook everything up to the codec
     * right, we're just copying the entire frame. */
    picref = avfilter_get_video_buffer(link, AV_PERM_WRITE, 0, 0, link->w, link->h);
    av_picture_copy((AVPicture *)&picref->data, (AVPicture *)priv->frame,
                    picref->pic->format, picref->w, picref->h);
    av_free_packet(&pkt);
We want to avoid the initial memcpy; furthermore, if the last filter is
an output device it may directly return a buffer from the display
buffer area, thus allowing for direct rendering.
Ideally we should be able to pass to get_video_frame() /
avcodec_decode_video2() the buffer where we want the decoded frame to
be written.
Hints regarding the correct approach to follow are much appreciated; I
suppose I need to use AVCodecContext::{get|release}_buffer, right?
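For reference, a rough self-contained sketch of that direction: override the decoder's get_buffer() callback so that it hands out a buffer obtained from the filter chain, letting the decoder write straight into it. All structs and names below are stubs mimicking the shape of the AVCodecContext/AVFrame API, not the real lavc types:

```c
#include <stdint.h>

typedef struct Frame {
    uint8_t *data[4];
    int      linesize[4];
} Frame;

typedef struct CodecContext {
    void *opaque;  /* here: whatever identifies the filter link */
    int (*get_buffer)(struct CodecContext *c, Frame *pic);
} CodecContext;

/* Stand-in for a buffer coming from the filter chain or the display;
 * the real code would call avfilter_get_video_buffer() here. */
static uint8_t display_area[240 * 320];

static int filter_get_buffer(CodecContext *c, Frame *pic)
{
    (void)c;
    pic->data[0]     = display_area;
    pic->linesize[0] = 320;
    for (int i = 1; i < 4; i++) {
        pic->data[i]     = 0;
        pic->linesize[i] = 0;
    }
    return 0;
}
```

With such a callback installed (avctx->get_buffer = filter_get_buffer;), the decoder would write directly into the filter-provided buffer, skipping the av_picture_copy() above.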
Regards
--
FFmpeg = Free and Fierce Mastering Puritan Experimenting Guru
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lavfi-recursive-get-video-buffer.patch
Type: text/x-diff
Size: 11879 bytes
Desc: not available
URL: <http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/attachments/20090611/749ba60b/attachment.patch>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pad-implement.patch
Type: text/x-diff
Size: 14073 bytes
Desc: not available
URL: <http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/attachments/20090611/749ba60b/attachment-0001.patch>