[FFmpeg-devel] Another nice RTP bug...

Luca Abeni lucabe72
Thu Aug 23 15:00:11 CEST 2007


Hi Michael,

Michael Niedermayer wrote:
[...]
>> I personally think that the correct thing to do would be "ntp_time = 
>> av_gettime()" - RFC 3550 says "NTP timestamp: Indicates the wallclock 
>> time (see Section 4) when this report was sent...". What do other people 
>> think about it?
> 
> there is a problem here pts are presentation timestamps that is the time
> at which the frame should be presented to the user so valid pts may be
> 
> 1 5 2 3 4 9 6 7 8 
> 
> for a stream with b frames (the frames get reordered and displayed in an
> order different from the order in which they get transmitted ...)
Yes... I do not know how RTP sets the timestamp in the case of B frames
(I have never streamed video with B frames :). I suspect the DTS is used
to fill the timestamp field in the RTP packet.
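
Just to show what I mean by "use the PTS", something along these lines
(only a sketch: the helper and the base_timestamp parameter are made
up, av_rescale_q() is the real libavutil function), assuming a 90kHz
video clock:

    #include <stdint.h>
    #include "libavutil/mathematics.h"

    /* Hypothetical helper: convert a pts expressed in the stream
     * timebase into the 32-bit RTP timestamp (90kHz clock for video),
     * adding the per-session random offset required by RFC 3550. */
    static uint32_t pts_to_rtp_timestamp(int64_t pts, AVRational time_base,
                                         uint32_t base_timestamp)
    {
        int64_t t = av_rescale_q(pts, time_base, (AVRational){1, 90000});

        return base_timestamp + (uint32_t)t;
    }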

In any case, this part of the code is about RTCP packets, for which we
have no such problem: an RTCP packet is just a "report" from the
sender, and contains no media data. Here the RTCP packet is simply used
to map the "RTP timestamp" time (which has a timebase of 1/90000 and a
random offset with respect to real time) to NTP time.

I suspect that, in theory, we could fill the "NTP timestamp" and "RTP
timestamp" fields with an arbitrary time value (the important thing is
that the two fields represent the same instant - and this is where
rtcp_send_sr() is really buggy).

The standard suggests filling the "NTP timestamp" field of an RTCP
packet with the time at which the packet is sent, because this helps in
measuring statistics such as the round-trip time.
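
To make this concrete, here is a rough sketch of how the two fields
could be filled so that they describe the same instant (this is not the
actual rtcp_send_sr() code, only av_gettime() is real here, and it
assumes for simplicity that the RTP timestamps are derived from the
same wallclock - in the real muxer this would have to go through the
stream's own clock):

    #include <stdint.h>

    int64_t av_gettime(void);   /* wallclock, microseconds since the Unix epoch */

    #define NTP_OFFSET 2208988800ULL  /* seconds between 1900 (NTP) and 1970 (Unix) */

    static void fill_sr_timestamps(uint32_t base_timestamp,
                                   uint64_t *ntp_ts, uint32_t *rtp_ts)
    {
        int64_t now = av_gettime();                 /* "when this report was sent" */
        uint64_t sec  = now / 1000000 + NTP_OFFSET; /* NTP seconds */
        uint64_t frac = ((uint64_t)(now % 1000000) << 32) / 1000000; /* NTP fraction */

        *ntp_ts = (sec << 32) | frac;
        /* the same instant on the 90kHz RTP clock, plus the random offset */
        *rtp_ts = base_timestamp + (uint32_t)(now * 9 / 100);
    }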

[...]
> and for what the timestamp is used, audio video sync? nothing? the precise
> time at which the frame should be decoded, presented, something else?
> this needs to be clarified before we can decide if pts/dts/av_gettime()
> is correct
Well, this is my understanding of the thing:
1) RTP packets contain a 32-bit timestamp (the RTP timestamp), which 
represents the instant when the first byte of the payload was sampled 
(so, I think the PTS should be used, as in the sketch above... But as I 
said, I have never tested streaming video with B frames; I'll do that 
after the simpler things are working :)
2) the RTP timestamps of two different media streams cannot be directly 
compared, because they are relative to different time references (they 
may advance at different rates, and can have different offsets with 
respect to "real" time)
3) the streamer periodically emits RTCP SR packets, each of which must 
contain a temporal reference t expressed both as an RTP timestamp and 
as an NTP time. In this way, a client can transform RTP timestamps into 
NTP times, and can synchronize the different media streams (there is a 
small receiver-side sketch after this list).
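
For completeness, this is roughly what a receiver can do with the SR
(again, just a sketch, all names are made up):

    #include <stdint.h>

    /* Map the RTP timestamp of a data packet to NTP time (in seconds),
     * given the (NTP, RTP) timestamp pair carried by the last SR of the
     * same stream; once every stream is on the NTP timeline, the streams
     * can be synchronized with each other. */
    static double rtp_to_ntp_seconds(uint32_t rtp_ts, uint32_t sr_rtp_ts,
                                     uint64_t sr_ntp_ts, int clock_rate)
    {
        int32_t delta  = (int32_t)(rtp_ts - sr_rtp_ts);  /* handles wraparound */
        double  sr_sec = (double)(uint32_t)(sr_ntp_ts >> 32) +
                         (double)(uint32_t)sr_ntp_ts / 4294967296.0;

        return sr_sec + (double)delta / clock_rate;      /* e.g. clock_rate = 90000 */
    }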

I hope I did not oversimplify things too much.


Now, the particular bug I am seeing is that libavformat sometimes 
generates a "bad" RTCP packet, containing the RTP timestamp of the last 
RTP packet that was sent, together with a random NTP time (not really 
random... it is AV_NOPTS_VALUE converted from RTP time to NTP time ;-).
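
So the fix I have in mind is basically the one I mentioned at the
beginning of this mail (a simplified sketch; converting ntp_time to the
64-bit NTP wire format would work as in the snippet above):

    /* in rtcp_send_sr(): take the wallclock time at which the report is
     * sent, instead of converting the last packet's timestamp (which can
     * be AV_NOPTS_VALUE, and then turns into garbage) */
    int64_t ntp_time = av_gettime();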


			Thanks,
				Luca



