[FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When deriving a hwdevice, search for existing device in both directions

Soft Works softworkz at hotmail.com
Mon Jan 10 03:40:25 EET 2022



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> Thompson
> Sent: Monday, January 10, 2022 1:57 AM
> To: ffmpeg-devel at ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When deriving a
> hwdevice, search for existing device in both directions
> 
> On 09/01/2022 23:36, Soft Works wrote:
> >> -----Original Message-----
> >> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> >> Thompson
> >> Sent: Monday, January 10, 2022 12:13 AM
> >> To: ffmpeg-devel at ffmpeg.org
> >> Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When
> >> deriving a
> >> hwdevice, search for existing device in both directions
> >>
> >> On 09/01/2022 21:15, Soft Works wrote:
> >>>> -----Original Message-----
> >>>> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Mark
> >>>> Thompson
> >>>> Sent: Sunday, January 9, 2022 7:39 PM
> >>>> To: ffmpeg-devel at ffmpeg.org
> >>>> Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When
> >>>> deriving a
> >>>> hwdevice, search for existing device in both directions
> >>>>
> >>>> On 05/01/2022 03:38, Xiang, Haihao wrote:
> >>>>> ... this patch really fixed some issues for me and others.
> >>>>
> >>>> Can you explain this in more detail?
> >>>>
> >>>> I'd like to understand whether the issues you refer to are something
> >>>> which
> >>>> would be fixed by the ffmpeg utility allowing selection of devices for
> >>>> libavfilter, or whether they are something unrelated.
> >>>>
> >>>> (For library users the currently-supported way of getting the same
> >>>> device
> >>>> again is to keep a reference to the device and reuse it.  If there is
> >>>> some
> >>>> case where you can't do that then it would be useful to hear about it.)
> >>>
> >>> Hi Mark,
> >>>
> >>> they have 3 workaround patches on their staging repo, but I'll let Haihao
> >>> answer in detail.
> >>>
> >>> I have another question. I've been searching high and low, yet I can't
> >>> find the message. Do you remember that patch discussion from (quite a
> >>> few) months ago, where it was about another QSV change (something about
> >>> device creation from the command line, IIRC). There was a command line
> >>> example with QSV and you correctly remarked something like:
> >>> "Do you even know that just for this command line, there are 5 device
> >>> creations happening in the background, implicit and explicit, and in
> >>> one case (or 2), it's not even creating the specified device but
> >>> a session for the default device instead"
> >>> (just roughly from memory)
> >>>
> >>> Do you remember - or was it Philip?
> >>
> >> <https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2021-March/277731.html>
> >>
> >>> Anyway, this is something that the patch will improve. There has been one
> >>> other commit since that time regarding explicit device creation from
> >>> Haihao (iirc), which already reduced the device creation and fixed the
> >>> incorrect default session creation.
> >>
> >> Yes, the special ffmpeg utility code to work around the lack of
> >> AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX in the libmfx decoders caused
> >> confusion by working differently to everything else - implementing that
> >> and
> >> getting rid of the workarounds was definitely a good thing.
> >>
> >>> My patch tackles this from another side: at that time, you (or Philip)
> >>> explained that the secondary context that QSV requires (VAAPI, D3Dx)
> >>> and that is initially created when setting up the QSV device, does not
> >>> even get used when subsequently deriving to a context of that kind.
> >>> Instead, a new device is being created in this case.
> >>>
> >>> That's another scenario which is fixed by this patch.
> >>
> >> It does sound like you just always want a libmfx device to be derived from
> >> the thing which is really there sitting underneath it.
> >
> > "That's another scenario which is fixed by this patch"
> >
> > Things stop working as expected as soon as you are working with 3 or more
> > derived hw devices and neither hwmap nor hwmap with reverse can get you to the
> > context you want.
> 
> And your situation is sufficiently complex that specifying devices explicitly
> is probably what you want, rather than trying to trick some implicit route
> into returning the answer you already know.
> 
> >> If you are a library user then you get the original hw context by reusing
> >> the
> >> reference to it that you made when you created it.  This includes
> libavfilter
> >> users, who can provide a hw device to each hwmap separately.
> >>
> >> If you are an ffmpeg utility user then I agree there isn't currently a way
> >> to
> >> do this for filter graphs, hence the solution of providing a way in the
> >> ffmpeg utility to set hw devices per-filter.
> >
> > just setting the context on a filter doesn't make any sense, because you
> > need
> > the mapping. It only makes sense for the hwmap and hwupload filters.
> 
> Yes?  The filters you need to give it to are the hwmap and hwupload filters,
> the others indeed don't matter (though they are blindly set at the moment
> because there is no way to know they don't need it).
> 
> >>> Anyway I'm wondering whether it can even be logically valid to derive
> >>> from one device to another and then to another instance of the previous
> >>> device type.
> >>>   From my understanding, "deriving" or "hw mapping" from one device to
> >>> another means to establish a different way or accessor to a common
> >>> resource/data, which means that you can access the data in one or the
> >>> other way.
> >>>
> >>> Now let's assume a forward device-derivation chain like this:
> >>>
> >>> D3D_1  >>  OpenCL_1  >>  D3D_2
> >>
> >> You can't do this because device derivation is unidirectional (and
> >> acyclic) -
> >> you can only derive an OpenCL device from D3D (9 or 11), not the other way
> >> around.
> >>
> >> Similarly, you can only map frames from D3D to OpenCL.  That's why the
> >> hwmap
> >> reverse option exists, because of cases where you actually want the other
> >> direction which doesn't really exist.
> >
> > Yes, all true, but my point is something else: you can't have several
> > contexts
> > of the same type in a derivation chain.
> 
> That's a consequence of unidirectionality + acyclicity, yes.
> 
> > And that's exactly what this patch addresses: it makes sure that you'll get
> > an existing context instead of ffmpeg trying to derive to a new hw device
> > which doesn't work anyway.
> 
> I'm still only seeing one case where this bizarre operation is wanted: the
> ffmpeg utility user trying to get devices into the right place in their
> filter graphs, who I still think would be better served by being able to set
> the right device directly on hwmap rather than implicitly through searching
> derivation chains.
> 
> >>> We have D3D surfaces, then we share them with OpenCL. Both *_1
> >>> contexts provide access to the same data.
> >>> Then we derive again "forward only" and we get a new D3D_2
> >>> context. It is derived from OpenCL_1, so it must provide
> >>> access to the same data as OpenCL_1 AND D3D_1.
> >>>
> >>> Now we have two different D3D contexts which are supposed to
> >>> provide access to the same data!
> >>>
> >>>
> >>> 1. This doesn't even work technically
> >>>      - neither from D3D (iirc)
> >>>      - nor from ffmpeg (not cleanly)
> >>>
> >>> 2. This doesn't make sense at all. There should always be
> >>>      only a single device context of a device type for the same
> >>>      resource
> >>>
> >>> 3. Why would somebody even want this - what kind of use case?
> >>
> >> The multiple derivation case is for when a single device doesn't work.
> >> Typically that involves multiple separate components which don't want to
> >> interact with the others, for example:
> >>
> >> * When something thread-unsafe might happen, so different threads need
> >> separate instances to work with.
> >
> > Derivation means accessing shared resources (computing and memory), and
> > you can't solve a concurrency problem by having two devices accessing
> > the same resources - this makes it even worse (assuming a device would
> > even allow this).
> 
> Device derivation means making a compatible context of a different type on
> the same physical device.
> 
> Now that's probably intended because you are going to want to share some
> particular resources, but exactly what can be shared and what is possible is
> dependent on the setup.
> 
> Similarly, any rules for concurrency are dependent on the setup - maybe you
> can't do two specific things simultaneously in the same device context and
> need two separate ones to solve it, but they still both want to share in some
> way with the different device context they were derived from.
> 
> >> * When global options have to be set on a device, so a component which
> >> does
> >> that needs its own instance to avoid interfering with anyone else.
> >
> > This is NOT derivation. This case is not affected.
> 
> Suppose I have some DRM frames which come from somewhere (some hardware
> decoder, say - doesn't matter).
> 
> I want to do Vulkan things with some of the frames, so I call
> av_hwdevice_ctx_create_derived_opts() to get a compatible Vulkan context.
> Then I can map and do Vulkan things, yay!
> 
> Later on, I want to do some independent Vulkan thing with my DRM frames.  I
> do the same operation again with different options (because my new thing
> wants some new extensions, say).  This returns a new Vulkan context and I can
> work with it completely independently, yay!

You are describing the creation of a Vulkan context with different parameters,
one that you can then work with independently.

That's not my understanding of deriving a context. I don't know the
implementation in the Vulkan case, but for the others, deriving is about
sharing resources. And when you share resources, you can't "work with it
independently", so what you are describing is not really a derivation
scenario at all.
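To make the scenario from earlier in the thread concrete, here is a rough
sketch against the public hwcontext API (illustration only - this is not code
from the patch, and the function name and Linux flavour are just picked for
the example):

    #include <libavutil/hwcontext.h>

    /* Illustration of the QSV case discussed upthread (Linux flavour). */
    static int qsv_then_vaapi(void)
    {
        AVBufferRef *qsv = NULL, *vaapi = NULL;
        int ret;

        /* Creating a QSV device already sets up a child device behind
         * the scenes (VAAPI here, D3D on Windows). */
        ret = av_hwdevice_ctx_create(&qsv, AV_HWDEVICE_TYPE_QSV,
                                     NULL, NULL, 0);
        if (ret < 0)
            return ret;

        /* Deriving to a device of that same kind afterwards: the complaint
         * upthread is that this currently ends up with a new, unrelated
         * VAAPI device instead of the one QSV is already sitting on; the
         * patch is meant to hand back the existing device instead. */
        ret = av_hwdevice_ctx_create_derived(&vaapi, AV_HWDEVICE_TYPE_VAAPI,
                                             qsv, 0);

        av_buffer_unref(&vaapi);
        av_buffer_unref(&qsv);
        return ret;
    }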


To wrap things up a bit: 

- You want an approach which requires even more complicated filter
  command lines. I have understood that. It is a possible addition to the
  filter command line handling, and I would even welcome somebody
  implementing precise hw context selection for hwdownload and also for
  hwmap, for the (rare) cases where this might be needed (that somebody
  won't be me, though). The library-level equivalent of this is sketched
  below the list.

- But this is not what this patchset is about. It is about having things
  work nicely and automatically, in the way one would expect, instead of
  failing. The patchset only touches and changes behavior in cases that
  are currently failing anyway.

- Or can you name any realistic use case that this patchset would break?
  (if yes, let's please go through a specific example with pseudo code)
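For completeness, the library-level route mentioned above looks roughly like
this: handing one specific hwmap instance exactly the device it should map to,
instead of relying on an implicit derivation search. Again just a sketch -
"add_hwmap", the instance name and the pre-existing "graph"/"device" are made
up for illustration:

    #include <libavfilter/avfilter.h>
    #include <libavutil/buffer.h>

    /* Sketch: attach a hwmap filter to an existing graph and give it the
     * exact device it should map frames into. */
    static AVFilterContext *add_hwmap(AVFilterGraph *graph, AVBufferRef *device)
    {
        const AVFilter *hwmap = avfilter_get_by_name("hwmap");
        AVFilterContext *ctx;

        if (!hwmap)
            return NULL;

        ctx = avfilter_graph_alloc_filter(graph, hwmap, "my_hwmap");
        if (!ctx)
            return NULL;

        /* The device for this particular hwmap instance; has to be set
         * before the filter is initialized / the graph is configured. */
        ctx->hw_device_ctx = av_buffer_ref(device);
        if (!ctx->hw_device_ctx || avfilter_init_str(ctx, NULL) < 0) {
            avfilter_free(ctx);
            return NULL;
        }

        return ctx;
    }

That is per instance, which is exactly the kind of per-filter control that
would still need a command-line counterpart in the ffmpeg utility.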


Maybe Haihao's reply will be more convincing.
It might also be interesting to hear what the Vulkan guys think about it
(as there has been some feedback from that side earlier).

Kind regards,
softworkz
