[FFmpeg-devel] [PATCH 1/6] Frame-based multithreading framework using pthreads
Alexander Strange
astrange
Mon Nov 15 19:22:57 CET 2010
On Nov 15, 2010, at 12:37 PM, Ronald S. Bultje wrote:
> Hi,
>
> On Mon, Nov 15, 2010 at 5:20 PM, Reimar Döffinger
> <Reimar.Doeffinger at gmx.de> wrote:
>> On Mon, Nov 15, 2010 at 08:37:01AM -0500, Alexander Strange wrote:
>>> +* There is one frame of delay added for every thread beyond the first one.
>>> + Clients using dts must account for the delay; pts sent through reordered_opaque
>>> + will work as usual.
>>
>> Is there a way to query this? I mean the application
>> knows how many threads it set, but that might not always
>> be the same number as FFmpeg uses or so?
>
> This is poorly designed anyway (imho). While at it, why not start
> using the AVFrame.pts/dts fields as is done for encoding (and a compat
> wrapper for those poor souls using reordered_opaque).
Nitpick #1:
I don't like the way encoding works. avcodec_encode_video() writes into a raw buffer, so to get at the timestamps and frame metadata you have to look at fields inside AVCodecContext.coded_frame, which is IMO ugly.
Adding an avcodec_encode_video2() that returns an AVPacket would be nice, but I haven't found a reason to care about this part of encoding recently, so I haven't looked into it.
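Just to sketch what I'm thinking of (nothing here exists today; the name, signature and allocation behaviour are assumptions, and do_something_with() just stands in for whatever the caller does with the packet):

#include <libavcodec/avcodec.h>

/* Hypothetical: the encoder fills a complete AVPacket instead of a raw
 * buffer, so pts/dts/flags travel with the data and coded_frame can be
 * ignored by callers. */
int avcodec_encode_video2(AVCodecContext *avctx, AVPacket *pkt,
                          const AVFrame *frame, int *got_packet);

void do_something_with(AVPacket *pkt); /* stand-in for the muxer call */

static int encode_one(AVCodecContext *avctx, const AVFrame *frame)
{
    AVPacket pkt;
    int got_packet = 0, ret;

    av_init_packet(&pkt);
    pkt.data = NULL;   /* let the encoder allocate the payload */
    pkt.size = 0;

    ret = avcodec_encode_video2(avctx, &pkt, frame, &got_packet);
    if (ret < 0 || !got_packet)
        return ret;

    /* timestamps and the keyframe flag are on the packet itself */
    do_something_with(&pkt);
    av_free_packet(&pkt);
    return 0;
}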
Nitpick #2:
This only affects dts. AVFrame doesn't have a dts field, because dts isn't supposed to mean anything for decoded pictures.
I definitely agree we should be handling AVPacket.pts/dts stuff for players, but only one timestamp should come out of lavc, and it should just be called "timestamp".
Steps towards this:
- track the time spent delayed inside lavc (the dts of the packet being fed in when the first frame is returned, minus the dts of the first packet), somehow not counting the "official" delay the stream already declares, and store it in AVCodecContext
- use that value in guess_correct_pts() and move that code somewhere inside avcodec_decode_video()
- make up fake timestamps based on the stream FPS if we end up with AV_NOPTS_VALUE or non-monotonic timestamps. Of course, for VFR material we can't really do anything sane here.
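For that last point, roughly this kind of thing, done where the stream frame rate and time base are known (the function and the last_pts bookkeeping are made up, and as said it's only a guess for VFR):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Sketch: synthesize a pts from the nominal frame rate when the container
 * gives us AV_NOPTS_VALUE or a value that goes backwards. */
static int64_t fill_missing_pts(AVStream *st, int64_t pts, int64_t *last_pts)
{
    if (pts == AV_NOPTS_VALUE ||
        (*last_pts != AV_NOPTS_VALUE && pts <= *last_pts)) {
        /* one frame duration (1/fps) converted into the stream time base */
        AVRational frame_dur = { st->r_frame_rate.den, st->r_frame_rate.num };
        int64_t step = av_rescale_q(1, frame_dur, st->time_base);
        pts = (*last_pts == AV_NOPTS_VALUE) ? 0 : *last_pts + step;
    }
    *last_pts = pts;
    return pts;
}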
That first one is enough to make up for the (very predictable but obviously still existing) dts shift caused by threading. But if we leave it at that, it means clients have to copy-paste our timestamp correction code, which I don't like.
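For reference, this is roughly how I imagine the delay tracking; the struct and the names are invented, in the real thing they'd be AVCodecContext fields (probably private ones) and the code would sit inside avcodec_decode_video():

#include <libavcodec/avcodec.h>

typedef struct DelayTracker {
    int64_t first_dts;       /* dts of the very first packet fed in;
                                initialized to AV_NOPTS_VALUE */
    int     got_first_frame;
    int64_t dts_delay;       /* extra delay added inside lavc, in dts units */
} DelayTracker;

/* declared_delay = the delay the stream already admits to (B-frame
 * reordering etc.), so only the part added by lavc/threading remains */
static void track_delay(DelayTracker *t, const AVPacket *pkt,
                        int got_picture, int64_t declared_delay)
{
    if (t->got_first_frame)
        return;

    if (t->first_dts == AV_NOPTS_VALUE)
        t->first_dts = pkt->dts;

    if (got_picture && pkt->dts != AV_NOPTS_VALUE &&
        t->first_dts != AV_NOPTS_VALUE) {
        t->dts_delay = pkt->dts - t->first_dts - declared_delay;
        t->got_first_frame = 1;
    }
}

guess_correct_pts() could then just fold dts_delay into its dts path instead of every player reimplementing the same compensation.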