
Hi

> On Mon, Feb 11, 2008 at 04:19:32PM +0100, Michael Niedermayer wrote:
> [...]
>>> Knowing also the magnitude of the error allows much more rapid
>>> convergence.
>> I am not so sure about this. I mean, I don't dispute that more
>> information should improve it, but I think just knowing "too much" vs.
>> "too little" is good enough.
>> A simple example: let's assume we have a decoder with a clock that
>> drifts by up to D between syncpoints. That is, even in the ideal case
>> we have to accept being D off when we reach a syncpoint, assuming we
>> synced perfectly to the previous one.
>> Now let's assume that we are within -2D .. +2D at syncpoint x, and we
>> apply a +D correction if we are <0 and a -D correction if we are >0.
>> This correction could be applied gradually until the next syncpoint.
>> What matters is that after the correction we are within -D .. +D, and
>> with the drift that is again -2D .. +2D at syncpoint x+1. This is a
>> proof by induction that knowing just the sign of the error and the
>> worst-case clock drift is sufficient to stay within a factor of 2 of
>> the best achievable clock sync (comparing worst cases, not averages).
> This is not how clock sync is usually done. A typical implementation
> involves a PLL-type construct to make the local clock accurately track
> the sender clock. Once locked, there is very little drift. To correctly
> compensate for what little drift inevitably remains, the size of the
> error must be known.
Could you elaborate on how PLL-based clock sync with transmit_ts works?
I am no PLL expert; what I know is more of the sort where a PLL takes a
reference signal like a sine wave as input, not occasional scalars
representing the time since some point 0. I am also fine with a RTFM +
a URL.
> The time difference can of course be computed from the difference in
> buffer fullness and the received bitrate; it merely takes a little more
> work on the receiver side.
Instead of transmit_ts one can use

    internal_clock_ts + (buffer_fullness < real_fullness ? D : -D)

That should provide a pretty good reference for the PLL IMHO.
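A minimal sketch of this substitute reference (the names follow the mail;
the value of D and the fullness arguments are illustrative assumptions):

```c
#include <assert.h>

/* worst-case clock drift between syncpoints (illustrative value) */
#define D 0.001

/* Reference timestamp for the PLL when no transmit_ts is used: nudge the
 * internal clock by +-D depending on whether the signalled buffer
 * fullness is below or above the real one, as in the mail's expression. */
static double pll_reference(double internal_clock_ts,
                            double buffer_fullness,
                            double real_fullness)
{
    return internal_clock_ts + (buffer_fullness < real_fullness ? D : -D);
}
```

Feeding this value to the PLL in place of transmit_ts bounds each
correction by the worst-case drift per syncpoint.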
Also, what I forgot to say: a correction by +D/-D is much more robust
than correcting by transmit_ts - internal_clock_ts. It is not a big issue
with real broadcast and a checksum-protected transmit_ts. But if the
transmit_ts is unprotected, or there is significant randomness in the
latency, as with UDP/TCP over the internet, then limiting the correction
to the worst-case clock drift should work much better than compensating
for the whole apparent drift. So in practice at least

    clip(transmit_ts, internal_clock_ts - D, internal_clock_ts + D)

should be used, and not transmit_ts as such.

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

It is dangerous to be right in matters on which the established
authorities are wrong. -- Voltaire
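The clipped correction mentioned above might look like this (a sketch;
clip is spelled out since C has no standard scalar clamp, and the
function name is illustrative):

```c
#include <assert.h>

static double clip(double v, double min, double max)
{
    return v < min ? min : v > max ? max : v;
}

/* Limit how far a received transmit_ts may pull the local clock: a
 * corrupted timestamp or a latency spike can then move the clock by at
 * most the worst-case drift D per syncpoint. */
static double clock_reference(double transmit_ts,
                              double internal_clock_ts, double D)
{
    return clip(transmit_ts, internal_clock_ts - D, internal_clock_ts + D);
}
```

A transmit_ts within the +-D window passes through unchanged; anything
further away is treated as at most a worst-case drift.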