[FFmpeg-devel] Allow interrupt callback for AVCodecContext

Don Moir donmoir at comcast.net
Mon Jan 6 09:34:42 CET 2014


----- Original Message ----- 
From: "Reimar Döffinger" <Reimar.Doeffinger at gmx.de>
To: "FFmpeg development discussions and patches" <ffmpeg-devel at ffmpeg.org>
Sent: Monday, January 06, 2014 5:17 AM
Subject: Re: [FFmpeg-devel] Allow interrupt callback for AVCodecContext


> On 06.01.2014, at 10:01, "Don Moir" <donmoir at comcast.net> wrote:
>>> In other words: I think this looks so attractive to you because it would work
>>> well if it was implemented specifically _just for you_. But having code specifically
>>> for one person in such a large project doesn't make much sense.
>>
>> I think it's a real issue and I know better than to ask for something just for me.
>> Although the people that would benefit would be in the minority.
>> Player apps would not benefit much. Timeline and editing apps can benefit though.
>
> Your examples make me suspicious. Why would timeline and editing apps have to
> completely unpredictably stop decoding? Player apps should be far more affected...
> If it's predictable, you can just flush the context...

It's predictable in the sense that the user has chosen to seek or swap out a video. In my apps the seek call is immediate and 
interruptible, but the actual seeking process takes time. Some of this time is spent waiting on avcodec_decode_video2, and the rest 
depends on the seek position etc.
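
To make the request concrete, what I have in mind is the same pattern the demuxer side already has with AVIOInterruptCB on 
AVFormatContext, only for the codec context. A rough sketch; the interrupt_callback field on AVCodecContext below is hypothetical, 
since nothing like it exists there today:

/* Application-side flag; the decoder would poll this through the callback. */
typedef struct SeekState {
    volatile int abort_requested;
} SeekState;

static int decode_interrupt_cb(void *opaque)
{
    SeekState *s = opaque;
    return s->abort_requested;   /* non-zero asks the blocked decode call to return early */
}

/* Hypothetical wiring, mirroring AVFormatContext.interrupt_callback:  */
/* codec_ctx->interrupt_callback.callback = decode_interrupt_cb;       */
/* codec_ctx->interrupt_callback.opaque   = &seek_state;               */

When the user seeks, I would set abort_requested and the decode call stuck inside avcodec_decode_video2 could bail out instead of me 
having to wait on it.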

Can I flush the context while it is in the middle of an avcodec_decode_video2? That call is the thing I am waiting to finish when not 
using a cached context.
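
(By flushing I assume you mean something like:

avcodec_flush_buffers(video_codec_ctx);   /* discard the decoder's buffered frames after a seek */

which as far as I know has to be called between decode calls on the same thread, so it is exactly the call I cannot make until the 
in-flight avcodec_decode_video2 returns.)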

Media items in a timeline app need to stick to a strict timeline. This is not exactly possible when arbitrary seeking is being 
performed, but it is possible when seeking to zero of a media item if you do not need to wait on decoding. This case happens for us 
more often than not, in that the user is seeking around and then restarts from the beginning.

It will always be the case that I have to time-correct this situation, but the more I can reduce that time the better off I am.

Both timeline and editing apps benefit from instant response when scrubbing. If a decode is in progress, more than likely the scrub 
has to wait. I deal with end users that actually like scrubbing the video during a live performance.

With a player app, the user may be seeking around some, but you don't really care if he has to wait a bit. The wait is not really 
noticeable for this type of end user. His seek position is updated immediately, and if he has to wait slightly it is not a big deal.

>>> Also, if these additional resources are relevant the better solution would be to
>>> reduce resource usage of FFmpeg,  that is something everyone would benefit from.
>>
>> Yes. I don't think there is a way to partially shut down an open context so it
>> can come back up quickly. You have to close it to save on resources but then re-open
>> takes time again. I have to be as efficient as possible as I have a lot going on and I do
>> what I can and hope ffmpeg is the same. I keep memory and threads to a minimum so maybe
>> you can see why it bothers me to add another cached open context allocating memory and threads.
>
> Without analysis what exactly the cost is, that is micro-optimization.
> There are going to be loads of places where you can micro-optimize that doesn't need new
> FFmpeg features and might gain your application more in overall performance.
> If you did an analysis and the cost was relevant, then we should look at that and
> see if we can fix it so that cached contexts are nothing anyone has to worry about.

It's not clear how much more memory is used when allocating a new context. It depends mostly on the codec id, I think. I am not too 
bothered by that.

Each open context, though, does in my worst case create 2 new threads that are just sitting idle, waiting to be swapped in when 
needed. Right now I have just one cached context for video. Audio does not matter too much, but I have not yet determined whether I 
need to do anything for that. I limit each context to a maximum of 2 threads (see the snippet below); there are diminishing returns 
with more threads than that.
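
For reference, the limit is just set before the open (thread_count is the real field; the value 2 is only my own measurement of where 
the returns diminish):

codec_ctx->thread_count = 2;              /* cap decoder threads; 0 would mean auto-detect */
avcodec_open2(codec_ctx, codec, NULL);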

So if I have 10 videos open at once (not unusual), then that is 20 additional threads doing nothing. I know they are idle and not 
doing anything, but it still bothers me.

> The other option is to make creating contexts really fast, so you
> don't have to cache them in the first place.

It's not the creation but the open that takes the most time. If avcodec_open2 could be made blindingly fast, then that would be 
good.

The only way that works across the board right now for a new context is the following:

AVCodecContext *new_context = avcodec_alloc_context3(NULL);
avcodec_copy_context(new_context, existing_context);  /* copy parameters from the already-open context */
avcodec_open2(new_context, codec, NULL);

You can use the following approach for some codecs, but it fails too often in avcodec_open2 or elsewhere:

AVCodecContext *new_context = avcodec_alloc_context3(codec);  /* allocate with defaults for this codec */
avcodec_open2(new_context, codec, NULL);
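
For completeness, the caching I keep complaining about looks roughly like this; the helper name is mine, not FFmpeg's:

#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

/* Keep one spare, already-opened duplicate per video stream so a seek can
 * start decoding immediately, while the context still stuck inside
 * avcodec_decode_video2 gets closed later, once that call finally returns. */
static AVCodecContext *open_duplicate(const AVCodecContext *src, AVCodec *codec)
{
    AVCodecContext *ctx = avcodec_alloc_context3(NULL);
    if (!ctx)
        return NULL;
    if (avcodec_copy_context(ctx, src) < 0 ||
        avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_close(ctx);
        av_free(ctx);
        return NULL;
    }
    return ctx;
}

It works, but it is exactly the extra memory and the idle threads I would rather not have.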





