[FFmpeg-devel] [PATCH v4 03/11] libavutil/hwcontext_d3d11va: adding more texture information to the D3D11 hwcontext API

Mark Thompson sw at jkqxz.net
Sat May 9 17:37:57 EEST 2020


On 08/05/2020 21:26, Hendrik Leppkes wrote:
> On Fri, May 8, 2020 at 5:51 PM <artem.galin at gmail.com> wrote:
>>
>> From: Artem Galin <artem.galin at intel.com>
>>
>> Added an AVD3D11FrameDescriptor array to store an array of single textures for the case
>> where an array texture cannot be allocated with BindFlags = D3D11_BIND_RENDER_TARGET.
>>
>> Signed-off-by: Artem Galin <artem.galin at intel.com>
>> ---
>>  libavutil/hwcontext_d3d11va.c | 26 ++++++++++++++++++++------
>>  libavutil/hwcontext_d3d11va.h |  9 +++++++++
>>  2 files changed, 29 insertions(+), 6 deletions(-)
>>
>> ...
>> diff --git a/libavutil/hwcontext_d3d11va.h b/libavutil/hwcontext_d3d11va.h
>> index 9f91e9b1b6..295bdcd90d 100644
>> --- a/libavutil/hwcontext_d3d11va.h
>> +++ b/libavutil/hwcontext_d3d11va.h
>> @@ -164,6 +164,15 @@ typedef struct AVD3D11VAFramesContext {
>>       * This field is ignored/invalid if a user-allocated texture is provided.
>>       */
>>      UINT MiscFlags;
>> +
>> +    /**
>> +     * If the texture structure member above is not NULL, all elements contain
>> +     * the same texture pointer and distinct indexes into the array texture.
>> +     * If the texture structure member above is NULL, all elements contain
>> +     * pointers to separate non-array textures, with an index of 0 in each.
>> +     * This field is ignored/invalid if a user-allocated texture is provided.
>> +     */
>> +    AVD3D11FrameDescriptor *texture_infos;
>>  } AVD3D11VAFramesContext;
>>
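(Aside: my reading of the semantics documented above, as an untested
sketch - frames_ctx and nb_surfaces are assumed to come from the caller,
only the struct fields are from the patch:)

#include <stdint.h>
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_d3d11va.h>

static void walk_texture_infos(AVHWFramesContext *frames_ctx,
                               int nb_surfaces)
{
    AVD3D11VAFramesContext *hwctx = frames_ctx->hwctx;

    for (int i = 0; i < nb_surfaces; i++) {
        ID3D11Texture2D *tex   = hwctx->texture_infos[i].texture;
        intptr_t         index = hwctx->texture_infos[i].index;

        if (hwctx->texture) {
            /* Array-texture case: tex == hwctx->texture for every
             * element, and index selects a slice of the array texture. */
        } else {
            /* Separate-texture case: each element carries its own
             * non-array texture, and index is always 0. */
        }
        (void)tex; (void)index; /* placeholder for real use */
    }
}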
> 
> 
> I'm not really a fan of this. Only supporting array textures was an
> intentional design decision back when D3D11VA was defined, because it
> greatly simplified the entire design - and as far as I know the
> d3d11va decoder, for example, doesn't even support decoding into
> anything else.

For a decoder, yes, because the set of things to render to can easily be constrained.

For an encoder, you want to support more cases than just textures generated by a decoder, and ideally that would allow arbitrary textures with the right properties so that the encoder is not weirdly gimped (compare NVENC, which does accept any texture).  The barrier to making that work is this horrible texture preregistration requirement - we need to be able to find all of the textures which might be used up front - not the single/array texture difference.  While changing the API here is not fun, following the method used for the same problem with D3D9 surfaces seems like the simplest way to make it all work nicely.
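(For reference, the D3D9 method I mean is that AVDXVA2FramesContext just
exposes the whole surface pool up front.  Paraphrasing
libavutil/hwcontext_dxva2.h from memory, so check the header:)

typedef struct AVDXVA2FramesContext {
    DWORD                 surface_type;
    /* The complete pool of surfaces, visible to any consumer at init
     * time - which is exactly what encoder preregistration needs. */
    IDirect3DSurface9   **surfaces;
    int                   nb_surfaces;
    IDirectXVideoDecoder *decoder_to_release;
} AVDXVA2FramesContext;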

Possibly I am not understanding something here, though - I don't see what this has to do with the setting of D3D11_BIND_RENDER_TARGET (and in particular why the code discards the array index if this flag is set).

- Mark

