[FFmpeg-devel] [PATCH 1/3] lavu: add a Vulkan hwcontext
Rostislav Pehlivanov
atomnuker at gmail.com
Tue Apr 3 06:17:09 EEST 2018
On 2 April 2018 at 21:24, Mark Thompson <sw at jkqxz.net> wrote:
> On 30/03/18 04:14, Rostislav Pehlivanov wrote:
> > This commit adds a Vulkan hwcontext, currently capable of mapping DRM and
> > VAAPI frames; additional functionality can be added later to support
> > importing D3D11 surfaces as well as exporting to various other APIs.
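For reference, intended usage through the public hwcontext API would look
roughly like this (a minimal sketch following the hwmap filter's
derive-and-map pattern; AV_HWDEVICE_TYPE_VULKAN and AV_PIX_FMT_VULKAN are
the names this patch adds, vaapi_dev/vaapi_frames/vaapi_frame stand for an
existing VAAPI device, frames context and frame, and error handling is
omitted):

    #include <libavutil/hwcontext.h>

    AVBufferRef *vk_dev = NULL, *vk_frames = NULL;
    AVFrame *vk_frame = av_frame_alloc();

    /* Derive a Vulkan device from an existing VAAPI device, then derive
     * a matching frames context so its frames can be mapped. */
    av_hwdevice_ctx_create_derived(&vk_dev, AV_HWDEVICE_TYPE_VULKAN,
                                   vaapi_dev, 0);
    av_hwframe_ctx_create_derived(&vk_frames, AV_PIX_FMT_VULKAN, vk_dev,
                                  vaapi_frames, AV_HWFRAME_MAP_READ);

    /* Map a VAAPI-backed frame to a Vulkan image without a copy. */
    vk_frame->format        = AV_PIX_FMT_VULKAN;
    vk_frame->hw_frames_ctx = av_buffer_ref(vk_frames);
    av_hwframe_map(vk_frame, vaapi_frame, AV_HWFRAME_MAP_READ);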
>
> Assuming you haven't done this already, I think it would be a good idea to
> at least see it interoperating with one of the Windows APIs (D3D11 is
> probably most interesting) in case there is some unforeseen problem there
> (doesn't have to be in committable form if you don't want to check
> everything fully).
>
> > This context requires the newest stable version of the Vulkan API, and
> > once the new extension for DRM surfaces makes it into the spec it will
> > require that as well (in order to properly and fully import such frames).
>
> What is the dependency for this extension? Presumably you need something
> in the headers - will that be present in any official headers after some
> version (including Windows, say), or does it need some platform dependency
> as well?
>
> > It makes use of every part of the Vulkan spec in order to ensure the
> > fastest possible uploading, downloading and mapping of frames. On AMD, it
> > will also map host memory frames directly in order to upload very
> > efficiently and with minimal CPU-to-hardware copy overhead.
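The AMD host-mapping path presumably goes through
VK_EXT_external_memory_host; importing an existing frame buffer would look
roughly like this (a sketch; frame_data, frame_size and mem_type are
placeholders, the pointer and size must satisfy
minImportedHostPointerAlignment, and error checking is omitted):

    /* Import a suitably aligned host buffer as Vulkan memory, so uploads
     * need no extra staging copy. */
    VkImportMemoryHostPointerInfoEXT import = {
        .sType        = VK_STRUCTURE_TYPE_IMPORT_MEMORY_HOST_POINTER_INFO_EXT,
        .handleType   = VK_EXTERNAL_MEMORY_HANDLE_TYPE_HOST_ALLOCATION_BIT_EXT,
        .pHostPointer = frame_data,
    };
    VkMemoryAllocateInfo alloc = {
        .sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .pNext           = &import,
        .allocationSize  = frame_size, /* multiple of the alignment above */
        .memoryTypeIndex = mem_type,   /* from vkGetMemoryHostPointerPropertiesEXT */
    };
    VkDeviceMemory mem;
    vkAllocateMemory(device, &alloc, NULL, &mem);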
> >
> > To be useful for non-RGB images, an implementation with the YUV images
> > extension is needed. All current implementations support it with the
> > exception of AMD, though support is coming soon in Mesa.
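The extension in question is presumably VK_KHR_sampler_ycbcr_conversion
(core in Vulkan 1.1); a fixed-function YUV-to-RGB conversion object for a
multiplane image is created roughly like this (a sketch with illustrative
BT.709 parameters):

    VkSamplerYcbcrConversionCreateInfo conv_info = {
        .sType      = VK_STRUCTURE_TYPE_SAMPLER_YCBCR_CONVERSION_CREATE_INFO,
        .format     = VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM, /* e.g. yuv420p */
        .ycbcrModel = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709,
        .ycbcrRange = VK_SAMPLER_YCBCR_RANGE_ITU_NARROW,
        .components = { VK_COMPONENT_SWIZZLE_IDENTITY,
                        VK_COMPONENT_SWIZZLE_IDENTITY,
                        VK_COMPONENT_SWIZZLE_IDENTITY,
                        VK_COMPONENT_SWIZZLE_IDENTITY },
        .xChromaOffset = VK_CHROMA_LOCATION_COSITED_EVEN,
        .yChromaOffset = VK_CHROMA_LOCATION_COSITED_EVEN,
        .chromaFilter  = VK_FILTER_LINEAR,
    };
    VkSamplerYcbcrConversion conv;
    vkCreateSamplerYcbcrConversion(device, &conv_info, NULL, &conv);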
>
> I may have asked this before, but can you explain what is actually gained
> by using the combined images rather than separate planes? It seems
> unfortunate to be requiring it when it isn't present on all implementations.
It isn't present on all implementations because this is literally the very
first program to use them in any non-testing capacity. I had to solve many
issues, some driver-related, some testing-related, and they're still
ongoing, but the end result is that we save a lot of code (this hwcontext is
a third smaller than the OpenCL one, despite the very verbose API), we save
allocations (which are very expensive), we avoid memory fragmentation, we
can map back and forth both to APIs that handle images as a single object
(VAAPI) and to APIs that handle planes separately (VAAPI on AMD), and we can
use fixed-function conversion to convert to RGB. Honestly, given how good
the API is in this respect, handling planes separately would be a terrible
idea and an awful hack. This isn't 1999: API designers know what hacks
people had to do with OpenGL and OpenCL to get functionality like this, and
they've done a good job fixing them.
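To make the last point concrete: a single multiplane image created with
VK_IMAGE_CREATE_DISJOINT_BIT can still have each plane bound to separately
imported memory, which is what makes per-plane exporters such as VAAPI on
AMD mappable. A rough sketch (device, width, height and plane0_memory are
placeholders):

    VkImageCreateInfo image_info = {
        .sType     = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .flags     = VK_IMAGE_CREATE_DISJOINT_BIT, /* per-plane binding */
        .imageType = VK_IMAGE_TYPE_2D,
        .format    = VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM, /* one image, 3 planes */
        .extent    = { width, height, 1 },
        .mipLevels = 1,
        .arrayLayers   = 1,
        .samples       = VK_SAMPLE_COUNT_1_BIT,
        .tiling        = VK_IMAGE_TILING_OPTIMAL,
        .usage         = VK_IMAGE_USAGE_SAMPLED_BIT |
                         VK_IMAGE_USAGE_TRANSFER_SRC_BIT,
        .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    };
    VkImage image;
    vkCreateImage(device, &image_info, NULL, &image);

    /* Bind plane 0 to its own memory, e.g. imported from a per-plane
     * DRM object; planes 1 and 2 are bound the same way. */
    VkBindImagePlaneMemoryInfo plane = {
        .sType       = VK_STRUCTURE_TYPE_BIND_IMAGE_PLANE_MEMORY_INFO,
        .planeAspect = VK_IMAGE_ASPECT_PLANE_0_BIT,
    };
    VkBindImageMemoryInfo bind = {
        .sType        = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO,
        .pNext        = &plane,
        .image        = image,
        .memory       = plane0_memory,
        .memoryOffset = 0,
    };
    vkBindImageMemory2(device, 1, &bind);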
As far as I know, the only implementation lacking them is Mesa on AMD; Intel
and NVIDIA support them on all platforms. Mesa support is being worked on,
since I let the developers know they can skip the hard work of doing the
conversion themselves and just implement the multiplane part of the API. Of
course they haven't implemented it yet, seeing as no one has been using it
until now.