[FFmpeg-user] VAAPI Decoding/Encoding with C code

Mark Thompson sw at jkqxz.net
Fri Dec 2 17:03:51 EET 2016


On 02/12/16 14:05, Victor dMdB wrote:
> Thanks for the response Mark!
> 
> I might be misreading the source code, but for decoding, doesn't
> vaapi_decode_init
> do all the heavy lifting? 
> It seems to already call the functions and set the variables you
> mentioned.
> 
> So might the code be just:
> 
> avcodec_open2(codecCtx, codec, NULL);
> vaapi_decode_init(codecCtx); 

vaapi_decode_init() is a function inside the ffmpeg utility, not any of the libraries.  You need to implement what it does in your own application.

You can copy/adapt the relevant code directly from the ffmpeg utility into your application if you want (assuming you comply with the licence terms).

- Mark


> On Fri, 2 Dec 2016, at 09:03 PM, Mark Thompson wrote:
>> On 02/12/16 10:57, Victor dMdB wrote:
>>> I was wondering if there were any examples of implementations with
>>> avformatcontext?
>>>
>>> I've looked at the source of ffmpeg vaapi implementation:
>>> https://www.ffmpeg.org/doxygen/trunk/ffmpeg__vaapi_8c_source.html
>>>
>>> and there is a reference to the cli values here
>>> https://ffmpeg.org/pipermail/ffmpeg-user/2016-May/032153.html
>>>
>>> But I'm not really sure how one actually implements it within either the
>>> decoding or encoding pipeline?
>>
>> Start by making an hwdevice.  This can be done with
>> av_hwdevice_ctx_create(), or if you already have a VADisplay (for
>> example, to do stuff in X via DRI[23]) you can use
>> av_hwdevice_ctx_alloc() followed by av_hwdevice_ctx_init().
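>>
>> A rough, untested sketch of that device setup (error handling mostly
>> omitted; pass a DRM render node path instead of NULL if you want to pick
>> a specific device):
>>
>>     #include <libavutil/hwcontext.h>
>>
>>     AVBufferRef *hw_device_ref = NULL;
>>     int err = av_hwdevice_ctx_create(&hw_device_ref, AV_HWDEVICE_TYPE_VAAPI,
>>                                      NULL, NULL, 0);
>>     if (err < 0)
>>         fprintf(stderr, "Failed to create a VAAPI device: %d\n", err);
>>
>>     /* If you already have a VADisplay instead:
>>      *   AVBufferRef *ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_VAAPI);
>>      *   AVHWDeviceContext *dev = (AVHWDeviceContext*)ref->data;
>>      *   ((AVVAAPIDeviceContext*)dev->hwctx)->display = your_display;
>>      *   av_hwdevice_ctx_init(ref);
>>      * (AVVAAPIDeviceContext is in libavutil/hwcontext_vaapi.h.) */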
>>
>> For a decoder:
>>
>> Make the decoder as you normally would for software.  You must set an
>> AVCodecContext.get_format callback.
>> Start feeding data to the decoder.
>> Once there is enough data to determine the output format, the
>> get_format callback will be called (this will always happen before any
>> output is generated).
>> The callback is given a set of possible formats to use; this will contain
>> AV_PIX_FMT_VAAPI if your stream is supported (note that not all streams
>> are supported for a given decoder - for H.264 the hwaccel only supports
>> YUV 4:2:0 in 8-bit depth).
>> Make an hwframe context for the output frames and a struct vaapi_context
>> containing a decode context*.  See ffmpeg_vaapi.c:vaapi_decode_init() and
>> its callees for this part.
>> Attach your new hwframe context (AVCodecContext.hw_frames_ctx) and decode
>> context (AVCodecContext.hwaccel_context) to the decoder.
>> Once you return from the callback, decoding continues and will give you
>> AV_PIX_FMT_VAAPI frames.
>> If you need the output frames in normal memory rather than GPU memory,
>> you can copy them back with av_hwframe_transfer_data().
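>>
>> To make that concrete, here is a rough, untested sketch of a get_format
>> callback using the device reference created earlier.  The surface pool
>> size is a guess, and the struct vaapi_context / hwaccel_context part is
>> left out (see ffmpeg_vaapi.c for that):
>>
>>     #include <libavcodec/avcodec.h>
>>     #include <libavutil/hwcontext.h>
>>
>>     static AVBufferRef *hw_device_ref;  /* from the device setup above */
>>
>>     static enum AVPixelFormat get_vaapi_format(AVCodecContext *avctx,
>>                                                const enum AVPixelFormat *fmts)
>>     {
>>         const enum AVPixelFormat *p;
>>         for (p = fmts; *p != AV_PIX_FMT_NONE; p++) {
>>             if (*p != AV_PIX_FMT_VAAPI)
>>                 continue;
>>
>>             /* Frame pool holding the decoder's output surfaces. */
>>             AVBufferRef *frames_ref = av_hwframe_ctx_alloc(hw_device_ref);
>>             AVHWFramesContext *frames = (AVHWFramesContext*)frames_ref->data;
>>             frames->format    = AV_PIX_FMT_VAAPI;
>>             frames->sw_format = AV_PIX_FMT_NV12;     /* 8-bit 4:2:0 */
>>             frames->width     = avctx->coded_width;
>>             frames->height    = avctx->coded_height;
>>             frames->initial_pool_size = 20;  /* reference frames + output */
>>             if (av_hwframe_ctx_init(frames_ref) < 0) {
>>                 av_buffer_unref(&frames_ref);
>>                 break;
>>             }
>>             avctx->hw_frames_ctx = frames_ref;  /* decoder takes this pool */
>>
>>             /* Also build the struct vaapi_context (VAConfigID/VAContextID)
>>              * and set avctx->hwaccel_context here - see vaapi_decode_init()
>>              * in ffmpeg_vaapi.c; omitted for brevity. */
>>             return AV_PIX_FMT_VAAPI;
>>         }
>>         return avcodec_default_get_format(avctx, fmts);
>>     }
>>
>>     /* Set this as AVCodecContext.get_format before avcodec_open2().
>>      * To copy a decoded frame back to system memory:
>>      *   av_hwframe_transfer_data(sw_frame, hw_frame, 0); */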
>>
>> For an encoder:
>>
>> Find an hwframe context to use as the encoder input.  For a transcode
>> case this can be the one from the decoder above, or it could be output
>> from a filter like scale_vaapi.  If you only have frames in normal memory,
>> you will need to make a new one here.
>> Make the encoder as you normally would (you'll need to look the codec up
>> by name, e.g. "h264_vaapi", because it will not be chosen by default from
>> just the codec ID), and set AVCodecContext.hw_frames_ctx to your hwframe
>> context.
>> Now feed the encoder with the AV_PIX_FMT_VAAPI frames from your hwframe
>> context.
>> If you only have input frames in normal memory, you will need to upload
>> them to GPU memory in the hwframe context with av_hwframe_transfer_data()
>> before giving them to the encoder.
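>>
>> And a similarly rough, untested sketch of the encoder side; the
>> width/height/time_base values are placeholders, and "frames_ref" stands
>> for the hwframe context described above (e.g. taken from the decoder's
>> hw_frames_ctx, or allocated with av_hwframe_ctx_alloc() for the upload
>> case):
>>
>>     AVCodec *codec = avcodec_find_encoder_by_name("h264_vaapi");
>>     AVCodecContext *enc = avcodec_alloc_context3(codec);
>>     AVHWFramesContext *frames = (AVHWFramesContext*)frames_ref->data;
>>
>>     enc->width         = frames->width;
>>     enc->height        = frames->height;
>>     enc->pix_fmt       = AV_PIX_FMT_VAAPI;
>>     enc->time_base     = (AVRational){ 1, 25 };        /* example only */
>>     enc->hw_frames_ctx = av_buffer_ref(frames_ref);
>>     avcodec_open2(enc, codec, NULL);
>>
>>     /* Upload path, if your input is a software frame "sw_frame": */
>>     AVFrame *hw_frame = av_frame_alloc();
>>     av_hwframe_get_buffer(enc->hw_frames_ctx, hw_frame, 0);
>>     av_hwframe_transfer_data(hw_frame, sw_frame, 0);
>>     hw_frame->pts = sw_frame->pts;
>>     /* ...then feed hw_frame to the encoder as usual. */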
>>
>>
>> - Mark
>>
>>
>> * It is intended that struct vaapi_context will be deprecated completely
>> soon, and this part will not be required (lavc will handle that context
>> creation).

