[FFmpeg-devel] [PATCH 20/24] sws: add a function for scaling dst slices

Michael Niedermayer michael at niedermayer.cc
Thu Jun 17 15:11:25 EEST 2021


On Fri, Jun 11, 2021 at 07:16:17PM +0200, Anton Khirnov wrote:
> Quoting Michael Niedermayer (2021-06-11 17:01:20)
> > On Thu, Jun 10, 2021 at 05:49:48PM +0200, Anton Khirnov wrote:
> > > Quoting Michael Niedermayer (2021-06-01 15:02:27)
> > > > On Mon, May 31, 2021 at 09:55:11AM +0200, Anton Khirnov wrote:
> > > > > Currently existing sws_scale() accepts as input a user-determined slice
> > > > > of input data and produces an indeterminate number of output lines.
> > > > 
> > > > swscale() should return the number of lines output
> > > > it does "return dstY - lastDstY;"
> > > 
> > > But you do not know the number of lines beforehand.
> > > I suppose one could assume that the line counts will always be the same
> > > for any run with the same parameters (strictly speaking this is not
> > > guaranteed) and store them after the first frame, but then the first
> > > scale call is not parallel. And it would be quite ugly.
> > > 
> > 
> > > > 
> > > > 
> > > > > Since the calling code does not know the amount of output, it cannot
> > > > > easily parallelize scaling by calling sws_scale() simultaneously on
> > > > > different parts of the frame.
> > > > > 
> > > > > Add a new function - sws_scale_dst_slice() - that accepts as input the
> > > > > entire input frame and produces a specified slice of the output. This
> > > > > function can be called simultaneously on different slices of the output
> > > > > frame (using different sws contexts) to implement slice threading.
> > > > 
> > > > an API that would allow starting before the whole frame is available
> > > > would have reduced latency and better cache locality. Maybe that can
> > > > be added later too, but I wanted to mention it because the documentation
> > > > explicitly says "entire input"
> > > 
> > > That would require some way of querying how much input is required for
> > > each line. I do not feel sufficiently familiar with sws architecture to
> > > see an obvious way of implementing this. And then making use of this
> > > information would require a significantly more sophisticated way of
> > > dispatching work to threads.
> > 
> > hmm, isn't the filter calculated by initFilter() (for the vertical stuff)
> > basically listing the input/output relation?
> > (with some special cases like cascaded_context maybe)
> > it's a while since I worked on swscale, so maybe I am forgetting something
> > 
> > Maybe that can be (easily) used?
> 
> The logic in the loop over lines in swscale() is not exactly clear, but
> I guess I could figure that out by staring at it a bit longer. But the
> bigger question still is what to do with this information.
> 
> Submitting all the slices at once to execute() is simple and we already
> have infrastructure for that. Submitting slices dynamically as they
> become available would require significantly more work and I am not sure
> that the gains are worth it.

Latency matters in some use cases, so a decoder that returns slices and
can feed them to a scaler (when one is needed) as they become available
would have a slight advantage.

The use cases are things like realtime communication, but also things
like online games. Long ago I knew someone who used libavcodec and cared
a lot about every bit of latency for exactly this reason.

I am not saying we must fully implement this now, but I think the API
design should allow it to be added later, even if it is not implemented now.
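To make the idea concrete, here is a purely illustrative sketch (none of
these names exist in swscale) of the kind of query such an API would need:
given the vertical filter geometry, report which input lines an output
slice depends on. This models the input/output relation that the
initFilter() tables encode; the exact positioning rule here is a
simplification, not the real sws math.

```c
/* Hypothetical: for a vertical scaler mapping src_h input lines to
 * dst_h output lines with a filter_size-tap filter, compute the
 * inclusive [first, last] range of input lines that output lines
 * [dst_y, dst_y + dst_slice_h) depend on. */
typedef struct SliceDeps {
    int first_src_line;
    int last_src_line;   /* inclusive */
} SliceDeps;

static SliceDeps query_src_range(int src_h, int dst_h,
                                 int dst_y, int dst_slice_h,
                                 int filter_size)
{
    SliceDeps d;
    /* centers of the first and last requested output lines in source
     * space, using a simple (y + 0.5) * src_h / dst_h - 0.5 mapping */
    double first_center = ((dst_y + 0.5) * src_h) / dst_h - 0.5;
    double last_center  = ((dst_y + dst_slice_h - 0.5) * src_h) / dst_h - 0.5;

    d.first_src_line = (int)first_center - filter_size / 2;
    d.last_src_line  = (int)last_center  + filter_size / 2 + 1;

    /* clamp to the frame; sws handles edges by replicating lines */
    if (d.first_src_line < 0)         d.first_src_line = 0;
    if (d.last_src_line  > src_h - 1) d.last_src_line  = src_h - 1;
    return d;
}
```

With such a query, a caller could hand an output slice to a worker as soon
as the decoder has delivered `last_src_line + 1` input lines, instead of
waiting for the whole frame.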


> 
> > 
> > > 
> > > Or are you proposing some specific alternative way of implementing this?
> > > 
> > > > 
> > > > Also there are a few tables between the multiple SwsContext which are
> > > > identical, it would be ideal if they can be shared between threads
> > > > I guess such sharing would need to be implemented before the API is
> > > > stable otherwise adding it later would require application to be changed
> > > 
> > > In my tests, the differences are rather small. E.g. scaling
> > > 2500x3000->3000x3000 with 32 threads uses only ~15% more memory than
> > > with 1 thread.
> > > 
> > > And I do not see an obvious way to implement this that would be worth
> > > the extra complexity. Do you?
> > 
> > Well, don't we, for every case of threading in the codebase,
> > cleanly split the context into one thread-local and one shared part?
> 
> Certainly not for every case. E.g. frame threading in libavcodec spawns
> several (almost) independent decoders internally.

True, I was not thinking of that.


> 
> > I certainly will not dispute that it is work to do that. But we
> > did it in every case because it is the "right thing" to do for a
> > clean implementation. So I think we should aim toward that here too.
> > But maybe I am missing something?
> 
> Depends on how you define "clean" in this case. And a related question
> is whether the threading should be inside swscale itself or not.
> 
> This patchset takes the route of adapting sws to allow external
> slice threading. This way callers can integrate it into their existing
> threading solutions, as I'm doing for vf_scale in lavfi.
> One could claim that this solution is cleaner in that the individual
> contexts are completely independent, so the callers are free to thread
> them in any way they like.
> 
> But you could also take the position that swscale should implement slice
> threading internally as a just-works black box. That would be
> - significantly more work
> - easier to use for people calling sws directly
> - more cumbersome to integrate into lavfi

I am fine with threading outside or inside. But a user who has 300 lines
of a 1080-line frame available should be able to start doing something
with them if they want to. I mean from the API side at least, even if the
implementation would force it to wait.

Also, with fully duplicated contexts, configuration such as setting
colorspace details could in principle succeed on some contexts and fail
on others. That is an extra special case for a user application to
consider, because it leaves the set of contexts in an inconsistent state:
neither the previous state nor the intended new one.
Maybe this never happens, but someone writing code would still have to
think about whether it can...
To me it just feels nicer to have one context for the whole frame, with
all the per-frame parameters, and then separate contexts for the workers;
the threading could be fully outside swscale with this too.
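As a sketch of that split (all names hypothetical, nothing here is
existing sws API): one object holds the per-frame parameters, each worker
only holds per-slice state, and (re)configuration touches a single place,
so it either succeeds or fails atomically instead of leaving N contexts
half-updated.

```c
/* Hypothetical: per-frame parameters shared by all workers. */
typedef struct FrameParams {
    int src_w, src_h, dst_w, dst_h;
    int colorspace;            /* stand-in for colorspace details */
} FrameParams;

/* Hypothetical: per-worker state is just a view of the shared frame
 * parameters plus the output slice this worker produces. */
typedef struct WorkerContext {
    const FrameParams *frame;  /* shared, read-only during scaling */
    int dst_y, dst_slice_h;
} WorkerContext;

/* Validate and apply new parameters in one place: either the whole
 * configuration succeeds, or *fp is left untouched.  With N fully
 * independent contexts the same operation could fail halfway through. */
static int frame_params_set(FrameParams *fp, const FrameParams *req)
{
    if (req->src_w <= 0 || req->src_h <= 0 ||
        req->dst_w <= 0 || req->dst_h <= 0)
        return -1;
    *fp = *req;
    return 0;
}

static void worker_init(WorkerContext *w, const FrameParams *fp,
                        int dst_y, int dst_slice_h)
{
    w->frame       = fp;
    w->dst_y       = dst_y;
    w->dst_slice_h = dst_slice_h;
}
```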


> 
> Beyond that, are you aware of any specific large constant objects that
> should be shared? I suppose it should be simple enough to make them
> refcounted and add a new SwsContext constructor that would take
> references to these objects.

I was not thinking of refcounting, more of a fixed worker count with a
simple allocate-all and free-all, but surely adding and removing workers
with refcounting could be done too.
Really, I am 90% happy with your patchset as it is; I just feel we may be
missing some opportunities here.
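For illustration only, the refcounting variant quoted above could look
roughly like this; the types and functions are made up for this sketch
(in real FFmpeg code one would presumably build on AVBufferRef rather
than hand-rolling a refcount):

```c
#include <stdlib.h>

/* Hypothetical: the large constant filter tables live in a refcounted
 * object, and each per-thread context takes a reference instead of
 * recomputing them. */
typedef struct SharedTables {
    int refcount;              /* a real implementation would use atomics */
    int *v_filter;             /* stand-in for vertical filter coefficients */
    size_t v_filter_size;
} SharedTables;

static SharedTables *shared_tables_alloc(size_t n)
{
    SharedTables *t = calloc(1, sizeof(*t));
    if (!t)
        return NULL;
    t->v_filter = calloc(n, sizeof(*t->v_filter));
    if (!t->v_filter) {
        free(t);
        return NULL;
    }
    t->v_filter_size = n;
    t->refcount = 1;
    return t;
}

static SharedTables *shared_tables_ref(SharedTables *t)
{
    t->refcount++;
    return t;
}

static void shared_tables_unref(SharedTables **pt)
{
    SharedTables *t = *pt;
    *pt = NULL;
    if (t && --t->refcount == 0) {
        free(t->v_filter);
        free(t);
    }
}
```

A new-context constructor would then take a reference to an existing
SharedTables instead of rebuilding the tables per thread.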

If these changes really are too much work for the gain, I am not opposed
to this patchset as it is either.

Thanks

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

When the tyrant has disposed of foreign enemies by conquest or treaty, and
there is nothing more to fear from them, then he is always stirring up
some war or other, in order that the people may require a leader. -- Plato