[FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug for mapping qsv frame to opencl

Soft Works softworkz at hotmail.com
Tue Dec 28 21:04:02 EET 2021



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Tuesday, December 28, 2021 1:54 PM
> To: ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
> 
> On 28/12/2021 01:17, Soft Works wrote:
> >> -----Original Message-----
> >> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> >> Thompson
> >> Sent: Tuesday, December 28, 2021 12:46 AM
> >> To: ffmpeg-devel at ffmpeg.org
> >> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> >> for mapping qsv frame to opencl
> >>
> >> On 27/12/2021 20:31, Soft Works wrote:
> >>>> -----Original Message-----
> >>>> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> >>>> Thompson
> >>>> Sent: Monday, December 27, 2021 7:51 PM
> >>>> To: ffmpeg-devel at ffmpeg.org
> >>>> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> >>>> for mapping qsv frame to opencl
> >>>>
> >>>> On 16/11/2021 08:16, Wenbin Chen wrote:
> >>>>> From: nyanmisaka <nst799610810 at gmail.com>
> >>>>>
> >>>>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> >>>>> Now the following commandline works:
> >>>>>
> >>>>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> >>>>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
> >>>>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> >>>>> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> >>>>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> >>>>> -c:v h264_qsv output.264
> >>>>>
> >>>>> Signed-off-by: nyanmisaka <nst799610810 at gmail.com>
> >>>>> Signed-off-by: Wenbin Chen <wenbin.chen at intel.com>
> >>>>> ---
> >>>>>     libavutil/hwcontext_opencl.c | 3 ++-
> >>>>>     1 file changed, 2 insertions(+), 1 deletion(-)
> >>>>>
> >>>>> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> >>>>> index 26a3a24593..4b6e74ff6f 100644
> >>>>> --- a/libavutil/hwcontext_opencl.c
> >>>>> +++ b/libavutil/hwcontext_opencl.c
> >>>>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
> >>>>>     #if CONFIG_LIBMFX
> >>>>>         if (src->format == AV_PIX_FMT_QSV) {
> >>>>>         mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> >>>>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> >>>>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> >>>>> +        va_surface = *(VASurfaceID*)pair->first;
> >>>>>         } else
> >>>>>     #endif
> >>>>>             if (src->format == AV_PIX_FMT_VAAPI) {
> >>>>
> >>>> Since these frames can be user-supplied, this implies that the
> >>>> user-facing API/ABI for AV_PIX_FMT_QSV has changed.
> >>>>
> >>>> It looks like this was broken by using HDLPairs when D3D11 was introduced,
> >>>> which silently changed the existing API for DXVA2 and VAAPI as well.
> >>>>
> >>>> Could someone related to that please document it properly (clearly not all
> >>>> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
> >>>> the API change happened?
> >>>
> >>> Hi Mark,
> >>>
> >>> QSV contexts always need to be backed by a child context, which can be
> >>> DXVA2, D3D11VA or VAAPI. You can create a QSV context either by deriving
> >>> it from one of those contexts, or by creating a new QSV context, in which
> >>> case an appropriate child context is created automatically - either
> >>> implicitly (auto mode) or explicitly, like the ffmpeg implementation does
> >>> in most cases.
> >>
> >> ... or by using the one the user supplies when they create it.
> >>
> >>> When working with "user-supplied" frames on Linux, you need to create a
> >>> VAAPI context with those frames and derive a QSV context from that
> >>> context.
> >>>
> >>> There is no way to create or supply QSV frames directly.
> >>
> >> ???  The ability for the user to set up their own version of these things
> >> is literally the whole point of the split alloc/init API.
> >>
> >>
> >> // Some user stuff involving libmfx - has a D3D or VAAPI backing, but
> >> // this code doesn't need to care about it.
> >>
> >> // It has a session and creates some surfaces to use, with MemId filled
> >> // in an ffmpeg-compatible way.
> >> user_session = ...;
> >> user_surfaces = ...;
> >>
> >> // No ffmpeg involved before this; now we want to pass these surfaces
> >> // we've got into ffmpeg.
> >>
> >> // Create a device context using the existing session.
> >>
> >> mfx_ctx = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_QSV);
> >>
> >> dc = (AVHWDeviceContext*)mfx_ctx->data;
> >> mfx_dc = dc->hwctx;
> >> mfx_dc->session = user_session;
> >>
> >> av_hwdevice_ctx_init(mfx_ctx);
> >>
> >> // Create a frames context out of the surfaces we've got.
> >>
> >> mfx_frames = av_hwframe_ctx_alloc(mfx_ctx);
> >>
> >> fc = (AVHWFramesContext*)mfx_frames->data;
> >> fc->pool = user_surfaces.allocator;
> >> fc->width = user_surfaces.width;
> >> // etc.
> >>
> >> mfx_fc = fc->hwctx;
> >> mfx_fc->surfaces = user_surfaces.array;
> >> mfx_fc->nb_surfaces = user_surfaces.count;
> >> mfx_fc->frame_type = user_surfaces.memtype;
> >>
> >> av_hwframe_ctx_init(mfx_frames);
> >>
> >> // Do stuff with frames.
> >
> > I wouldn't consider an mfxSession as an entity that could or should be
> > shared between implementations. IMO, this is not a valid use case.
> > A consumer of the mfx API needs to make certain choices regarding
> > the usage of the API, one of which is how frames are allocated
> > and managed.
> > This is not something that is meant to be shared between implementations.
> > Even inside ffmpeg, we don't use a single mfx session. We use separate
> > sessions for decoding, encoding and filtering that are joined together
> > via MFXJoinSession.
> > When an (ffmpeg-)API consumer is creating its own MFX session and its
> > own frame allocator implementation, it shouldn't be an allowed
> > scenario to create an ffmpeg hw context using this session.
> 
> The user is also aware of the thread safety rules.  I was giving the example
> above entirely serially to avoid that, but if they were using libmfx
> elsewhere in parallel with the above then indeed they would need a bit more
> care (create another session, some sort of locking to ensure serialisation)
> when passing it to ffmpeg.
> 
> > This shouldn't be considered a public API of ffmpeg because it doesn't
> > make sense to share an mfx session like this.
> 
> So, to clarify, your opinion is that none of hwcontext_qsv.h should be public
> API?  If it were actually removed (not visible in installed headers) then I
> agree that would fix the problem.

I think that exposing the mfx session handle is OK in order to allow an
API user to interoperate by creating (and managing) their own session and
their own surface allocation, and using the exposed mfx session handle to
join the ffmpeg-created (and managed) session.
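
To illustrate, here's a rough, untested sketch of what I mean (the
join_user_session helper is made up for the example; it assumes the
classic mfx dispatcher headers, and error handling is trimmed):

#include <mfx/mfxvideo.h>
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_qsv.h>

/* Create a user-owned session and join it to the session owned by an
 * already-initialized ffmpeg QSV device context. */
static int join_user_session(AVBufferRef *qsv_device_ref,
                             mfxSession *user_session)
{
    AVHWDeviceContext  *device_ctx = (AVHWDeviceContext *)qsv_device_ref->data;
    AVQSVDeviceContext *qsv_hwctx  = device_ctx->hwctx;
    mfxVersion ver = { { 1, 1 } };   /* minimum API version we need */

    /* Create our own session with our own implementation choice. */
    if (MFXInit(MFX_IMPL_AUTO_ANY, &ver, user_session) != MFX_ERR_NONE)
        return -1;

    /* Join it to the ffmpeg-managed session, so both share one scheduler
     * and surfaces can be passed between them. */
    if (MFXJoinSession(qsv_hwctx->session, *user_session) != MFX_ERR_NONE)
        return -1;

    return 0;
}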

> > Besides that, is there any ffmpeg documentation indicating that
> > the MemId field of mfxFrameSurface1 could be cast to VASurfaceID?
> 
> It is user-visible API, and it matched the behaviour of the libmfx examples
> for D3D9 (identical pointer to IDirect3DSurface9) and VAAPI (a compatible
> pointer to a structure with the VASurfaceID as the first member) so doing the
> obvious thing worked.

Also I think it's OK to expose the array of mfxFrameSurface1 entries,
allowing an API user to use them in an opaque way (without making use of,
or any assumptions about, the mfxMemId field).

This would allow an API user to interoperate via a joined session, e.g.
taking frames from a joined ffmpeg session as input to a user-created
session doing custom VPP operations, where the output frames come from
(and are managed by) the user-created session.
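
As a sketch of such opaque use (run_user_vpp is hypothetical; it assumes
the sessions are already joined, that VPP has been initialized on the user
session, and it glosses over allocator/device-handle setup and busy/retry
handling):

#include <mfx/mfxvideo.h>
#include <libavutil/frame.h>

/* An AV_PIX_FMT_QSV frame carries its mfxFrameSurface1* in data[3];
 * here it is handed to libmfx as an opaque handle, without looking
 * at Data.MemId at all. */
static int run_user_vpp(mfxSession user_session, const AVFrame *frame,
                        mfxFrameSurface1 *vpp_out, mfxSyncPoint *sync)
{
    mfxFrameSurface1 *in = (mfxFrameSurface1 *)frame->data[3];

    return MFXVideoVPP_RunFrameVPPAsync(user_session, in, vpp_out,
                                        NULL, sync) == MFX_ERR_NONE ? 0 : -1;
}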

On the other hand, when an API user wants to access and interoperate with
frames directly, then the proper way would be to use derive-from or
derive-to (VAAPI, D3D9, D3D11) to get an appropriate frames context to
work with.
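
For the VAAPI case that could look roughly like this (again just a sketch;
derive_vaapi_frames is made up, and error paths are simplified):

#include <libavutil/hwcontext.h>

/* Get a VAAPI view of an existing QSV frames context via the public
 * derivation API, instead of interpreting MemId directly. */
static int derive_vaapi_frames(AVBufferRef *qsv_frames_ref,
                               AVBufferRef **va_frames_ref)
{
    AVHWFramesContext *qsv_fc = (AVHWFramesContext *)qsv_frames_ref->data;
    AVBufferRef *va_device_ref = NULL;
    int ret;

    /* A QSV device is always backed by a child device; derive it. */
    ret = av_hwdevice_ctx_create_derived(&va_device_ref,
                                         AV_HWDEVICE_TYPE_VAAPI,
                                         qsv_fc->device_ref, 0);
    if (ret < 0)
        return ret;

    /* Frames in the derived context alias the same underlying surfaces;
     * individual frames can then be mapped with av_hwframe_map(). */
    ret = av_hwframe_ctx_create_derived(va_frames_ref, AV_PIX_FMT_VAAPI,
                                        va_device_ref, qsv_frames_ref,
                                        AV_HWFRAME_MAP_READ |
                                        AV_HWFRAME_MAP_WRITE);
    av_buffer_unref(&va_device_ref);
    return ret;
}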

That's how I would see it. What do you think?

Thanks,
softworkz