[FFmpeg-devel] [PATCH] pthread_frame: attempt to get frame to reduce latency
Dai, Jianhui J
jianhui.j.dai at intel.com
Thu Mar 12 04:00:58 EET 2020
Thanks for clarifying the packets-to-frames delay. I will respect that decoder design principle.
Some background on this issue:
I am working on 4K at 30 fps streaming playback on a 16-core host.
Initially I used FF_THREAD_SLICE, but avcodec_send_packet() takes ~100 ms per packet, so playback runs at only 10 fps (I want 30 fps).
That is not good enough for UHD video decoding and does not fully utilize modern multi-core CPUs.
Then I switched to FF_THREAD_FRAME (slice parallelism disabled); it reaches 30 fps, but latency increases to ~500 ms.
Ideally, I would like to have FF_THREAD_FRAME + FF_THREAD_SLICE simultaneously :)
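For reference, a minimal sketch of the setup I have in mind (untested; whether both threading modes are honored at the same time depends on the codec implementation):

```c
#include <libavcodec/avcodec.h>

/* Sketch only: request both frame and slice threading and let
 * libavcodec pick what the codec actually supports. */
static int open_decoder(AVCodecContext *ctx, const AVCodec *codec)
{
    ctx->thread_count = 0;                                 /* 0 = auto-detect core count */
    ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE; /* request both modes */
    return avcodec_open2(ctx, codec, NULL);
}
```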
Thanks,
Jianhui Dai
-----Original Message-----
From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Derek Buitenhuis
Sent: Thursday, March 12, 2020 6:44 AM
To: ffmpeg-devel at ffmpeg.org
Subject: Re: [FFmpeg-devel] [PATCH] pthread_frame: attempt to get frame to reduce latency
On 11/03/2020 20:42, Martin Storsjö wrote:
> FWIW, while I agree it shouldn't be the default, I have occasionally
> considered the need for this particular feature.
Arguably slice threading should be used, but that assumes you have sane input, which is obviously not always the case.
> Consider a live stream with a very variable framerate, essentially
> varying in the full range between 0 and 60 fps. To cope with decoding
> the high end of the framerate range, one needs to have frame threading
> enabled - maybe not with something crazy like 16 threads, but say at least 5 or so.
>
> Then you need to feed 5 packets into the decoder before you get the
> first frame output (for a stream without any reordering).
That last bit is key there, but yes.
>
> Now if packets are received at 60 fps, you get one new packet to feed
> the decoder per 16 ms, and you get the first frame to output 83 ms
> later, assuming that the decoding of that individual frame on that
> thread took less than 83 ms.
(I'm assuming network, etc. has been left out for example's sake. :))
> However, if the rate of input packets drops to e.g. 1 packet per
> second, it will take 5 seconds before I have 5 packets to feed to the
> decoder, before I have the first frame output, even though it actually
> was finished decoding in say less than 100 ms after the first input
> packet was given to the decoder.
>
> So in such a setup, being able to fetch output frames from the decoder
> sooner would be very useful - giving a closer to fixed decoding time
> in wallclock time, regardless of the packet input rate.
Not sure I would refer to it as closer to fixed, but the use case is certainly valid - I never claimed otherwise.
If it is added, it needs to be behind a flag/option with big bold letters stating the risks, and off by default. And the segfault Michael saw needs to be investigated.
Thanks for the clear response that doesn't conflate the two.
- Derek
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel at ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email ffmpeg-devel-request at ffmpeg.org with subject "unsubscribe".