[FFmpeg-devel] [PATCH 1/6] Frame-based multithreading framework using pthreads

Alexander Strange astrange
Mon Nov 15 19:30:56 CET 2010


On Nov 15, 2010, at 12:56 PM, Alexander Strange wrote:

> 
> On Nov 15, 2010, at 12:20 PM, Reimar Döffinger wrote:
> 
>> I'd still prefer for the threading to be hidden from the application,
>> in particular no new requirements on get/release_buffer, but
>> I fear I'm not going to win that argument.
> 
> This can be done, but I think it would seriously compromise speed (from the client's point of view).
> libavcodec would have to block the client thread until the codec thread messages back to it asking
> for the buffer callback to be called. That's wall-clock time that could be spent in the client code doing something useful.
> 
> The problem here is just that I used "thread-safe", which doesn't mean anything, instead of the real requirement,
> which is that it can't use thread-local variables. Nothing to do with reentrancy or locks.

Sorry, uau pointed out that I'm wrong here. The decoding thread can call the client's get_buffer() while the main thread is running client code, and then it can obviously do something unsafe.
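
To make the risk concrete, here's a minimal sketch of what a client would have to do to stay safe: a get_buffer() override that guards its own shared state with a mutex, so it can be called from a decoding thread. ClientState and my_get_buffer are hypothetical names, and it assumes the client keeps its shared state behind avctx->opaque.

#include <pthread.h>
#include <libavcodec/avcodec.h>

typedef struct ClientState {
    pthread_mutex_t lock;      /* guards state shared with the main thread */
    int buffers_outstanding;
} ClientState;

/* get_buffer() may now run on a decoding thread, so any client state it
 * touches has to be protected by the client itself. */
static int my_get_buffer(AVCodecContext *avctx, AVFrame *pic)
{
    ClientState *cs = avctx->opaque;
    int ret;

    pthread_mutex_lock(&cs->lock);
    ret = avcodec_default_get_buffer(avctx, pic);
    if (ret >= 0)
        cs->buffers_outstanding++;
    pthread_mutex_unlock(&cs->lock);
    return ret;
}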

The extra-safe option is to make frame threading opt-in, so committing it will never break anyone's client unless they turn it on. I don't mind that, but clients not paying attention might forget to add the code.
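
For illustration, the opt-in could look something like this. The field and flag names below are my assumptions, not necessarily the final API; the point is only that nothing changes unless the client asks for it.

#include <libavcodec/avcodec.h>

/* Sketch of the opt-in idea: existing clients are untouched unless they
 * explicitly request frame threading.  thread_type / FF_THREAD_FRAME are
 * assumed names for whatever the committed API ends up exposing. */
static int open_decoder_threaded(AVCodecContext *avctx, AVCodec *codec, int threads)
{
    avctx->thread_count = threads;         /* number of decoding threads */
    avctx->thread_type  = FF_THREAD_FRAME; /* explicit opt-in */
    return avcodec_open(avctx, codec);
}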

Another option would be to block the client thread until it's done with buffer operations, but allow clients to declare their callbacks completely thread-safe and get out of that for some more speed. Either way they'd still have to avoid thread-local data, but I think that's not a problem.
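
A sketch of the second option, assuming a hypothetical flag for it (the name thread_safe_callbacks is my own placeholder, not something in the patch as posted):

#include <libavcodec/avcodec.h>

/* Hypothetical: the client promises its get/release_buffer and
 * draw_horiz_band callbacks are fully thread-safe, so libavcodec can skip
 * the blocking hand-off to the client thread. */
static void declare_callbacks_thread_safe(AVCodecContext *avctx)
{
    avctx->thread_safe_callbacks = 1;
}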

And we should document that the interface used is always pthreads, so clients can compile with the same wrapper library if needed.

...but note that native w32threads can and already do call back into draw_horiz_band(), so that one's still going to be different.

> I am certain that no client does this anyway, except for a one-line patch to add an NSAutoreleasePool to mplayer's OS X vo which I'll send soon.
> 
>>> +* There is one frame of delay added for every thread beyond the first one.
>>> +  Clients using dts must account for the delay; pts sent through reordered_opaque
>>> +  will work as usual.
>> 
>> Is there a way to query this? I mean, the application
>> knows how many threads it set, but that might not always
>> be the same number FFmpeg actually uses?
> 
> It always uses the number of threads set. If this changes (it might, because frame-decoding threads should be able to use execute() too, and currently they don't), we'll have to either maintain that invariant or introduce a better way of tracking dts. We should do both anyway.
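
Since the reordered_opaque path keeps coming up, here's a minimal sketch of what "will work as usual" means for a client: tag each input packet's pts in avctx->reordered_opaque and read it back from the returned frame, so the extra delay from frame threading needs no special handling. The function is only an illustration and skips error handling.

#include <libavcodec/avcodec.h>

/* Carry pts through the decoder via reordered_opaque; the added frames of
 * delay from frame threading are then invisible to the client. */
static int decode_one(AVCodecContext *avctx, AVFrame *frame,
                      AVPacket *pkt, int64_t *out_pts)
{
    int got_picture = 0;

    avctx->reordered_opaque = pkt->pts;       /* tag the input picture */
    avcodec_decode_video2(avctx, frame, &got_picture, pkt);

    if (got_picture)
        *out_pts = frame->reordered_opaque;   /* pts of the returned picture */
    return got_picture;
}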



