[FFmpeg-devel] [PATCH] examples/vaapi_enc: Add a VAAPI encoding example.

Mark Thompson sw at jkqxz.net
Mon Jul 31 15:09:18 EEST 2017


On 31/07/17 04:10, Jun Zhao wrote:
> On 2017/7/30 8:07, Mark Thompson wrote:
>> On 28/07/17 07:01, Jun Zhao wrote:
>>> From d5414b451012b3a0169740a26f452785eb44cce5 Mon Sep 17 00:00:00 2001
>>> From: Jun Zhao <jun.zhao at intel.com>
>>> Date: Fri, 28 Jul 2017 01:39:27 -0400
>>> Subject: [PATCH] examples/vaapi_enc: Add a VAAPI encoding example.
>>>
>>> Add a VAAPI encoding example.
>>>
>>> Use hwupload to load the raw data into a HW surface; usage is
>>> like this: ./vaapi_enc 1920 1080 input.yuv test.h264
>>>
>>> Signed-off-by: Liu, Kaixuan <kaixuan.liu at intel.com>
>>> Signed-off-by: Jun Zhao <jun.zhao at intel.com>
>>> ---
>>>  doc/examples/vaapi_enc.c | 291 +++++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 291 insertions(+)
>>>  create mode 100644 doc/examples/vaapi_enc.c
>>
>> A general thought: do you actually want to use lavfi here?  All it's really doing is the hw frame creation and upload, which would be shorter to implement directly (av_hwframe_ctx_alloc(), av_hwframe_ctx_init(), av_hwframe_transfer_data()).  If the example might be extended with more stuff going on in filters then obviously the lavfi stuff is needed, but it seems overcomplicated if the intent is just to demonstrate encode.
> 
> From the API point of view, I don't want to use lavfi for the VAAPI encoding example; I'd prefer
> a simple API or a simple step to load YUV from CPU to a GPU surface rather than using lavfi.
> 
> Can we provide a simple API or step to load YUV into a HW surface in this case? Even using the
> av_hwframe_xxx interface, it's not an easy task for the caller.

Well, what sort of API would you prefer?

Currently the actions to take here are:
* Allocate a new hardware frames context to contain the surfaces [av_hwframe_ctx_alloc()].
* Set the parameters for your surfaces - format and dimensions, pool size if needed, anything API-specific.
* Initialise the context with those parameters [av_hwframe_ctx_init()].
* Set the new context on the encoder for it to use [AVCodecContext.hw_frames_ctx].
* Then, for each frame:
** Allocate a new surface from the frames context [av_hwframe_get_buffer()].
** Copy the software frame data to the surface [av_hwframe_transfer_data()].
** Send the hardware frame to the encoder [avcodec_send_frame()].

It's not clear to me that any of those parts are sensibly mergeable for the user without obscuring what is actually happening.
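
To illustrate how those steps fit together, here is a rough, untested sketch in C. The function names (set_hwframe_ctx, upload_and_send) and the NV12 sw_format / pool size choices are mine for illustration, not taken from the patch; error handling is stripped down to early returns.

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/pixfmt.h>

    /* Set up the frames context on the device and attach it to the
     * encoder; this has to happen before avcodec_open2(). */
    static int set_hwframe_ctx(AVCodecContext *enc_ctx,
                               AVBufferRef *hw_device_ctx)
    {
        AVBufferRef *hw_frames_ref;
        AVHWFramesContext *frames_ctx;
        int err = 0;

        /* Allocate a new hardware frames context to contain the surfaces. */
        if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx)))
            return AVERROR(ENOMEM);

        /* Set the parameters for the surfaces. */
        frames_ctx = (AVHWFramesContext *)hw_frames_ref->data;
        frames_ctx->format    = AV_PIX_FMT_VAAPI;
        frames_ctx->sw_format = AV_PIX_FMT_NV12;  /* assumption */
        frames_ctx->width     = enc_ctx->width;
        frames_ctx->height    = enc_ctx->height;
        frames_ctx->initial_pool_size = 20;       /* assumption */

        /* Initialise the context with those parameters. */
        if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
            av_buffer_unref(&hw_frames_ref);
            return err;
        }

        /* Give the encoder its own reference to the frames context. */
        enc_ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
        if (!enc_ctx->hw_frames_ctx)
            err = AVERROR(ENOMEM);

        av_buffer_unref(&hw_frames_ref);
        return err;
    }

    /* Per frame: allocate a surface, upload the software frame into it,
     * and send the hardware frame to the encoder. */
    static int upload_and_send(AVCodecContext *enc_ctx, const AVFrame *sw_frame)
    {
        AVFrame *hw_frame = av_frame_alloc();
        int err;

        if (!hw_frame)
            return AVERROR(ENOMEM);
        if ((err = av_hwframe_get_buffer(enc_ctx->hw_frames_ctx, hw_frame, 0)) < 0)
            goto fail;
        if ((err = av_hwframe_transfer_data(hw_frame, sw_frame, 0)) < 0)
            goto fail;
        err = avcodec_send_frame(enc_ctx, hw_frame);
    fail:
        av_frame_free(&hw_frame);
        return err;
    }

That is already most of what the example needs, and each call maps one-to-one onto a step in the list above, which is why I'm not sure collapsing them into a single helper would make things clearer.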
