[NUT-devel] Another suggestion for broadcast [PATCH]

Måns Rullgård mans at mansr.com
Tue Feb 12 00:04:17 CET 2008


Michael Niedermayer <michaelni at gmx.at> writes:

> On Mon, Feb 11, 2008 at 12:14:20PM -0000, Måns Rullgård wrote:
>> 
>> Michael Niedermayer wrote:
>> > On Thu, Feb 07, 2008 at 01:58:32PM -0000, Måns Rullgård wrote:
>> >>
>> >> Michael Niedermayer wrote:
>> > [...]
>> >> > That way the buffer_fullness stored in the syncpoint will always match
>> >> > exactly the amount the decoder has in its buffer when reading the
>> >> > syncpoint. If it has more in its buffer it would just change its clock
>> >> > to one running at 100.1%, and when it has less in its buffer it would
>> >> > choose a 99.9% clock as reference. (Or any approximately equivalent
>> >> > process.)
>> >>
>> >> That the buffer fullness is off by N bits doesn't tell you how much too
>> >> fast or too slow your clock is, only the sign of the error.
>> >
>> > yes
>> >
>> >
>> >> Knowing
>> >> also the magnitude of the error allows much more rapid convergence.
>> >
>> > I am not so sure about this. I don't dispute that more information
>> > should improve it, but I think knowing just too much/too little is
>> > good enough.
>> >
>> > A simple example: let's assume we have a decoder with a clock that drifts
>> > by up to D between syncpoints.
>> > That is, even in the most ideal case we have to accept that we are
>> > D off when we reach a syncpoint, assuming we synced perfectly to the
>> > previous one.
>> >
>> > Now let's assume that we are within -2D .. +2D at syncpoint x, and
>> > we apply a +D correction if we are <0 and a -D correction if we are >0.
>> > This correction could be applied slowly until the next syncpoint. What
>> > matters is that after the correction we are within -D .. +D, and
>> > with the drift that is again -2D .. +2D at syncpoint x+1.
>> > The above is thus a proof by induction that knowing just the sign and
>> > the worst-case clock drift is sufficient to stay within a factor of
>> > 2 of the best achievable clock sync. (Comparing worst cases, not
>> > averages.)
>> 
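
For reference, the scheme you describe boils down to something like the
sketch below (hypothetical names; the 0.1% step is taken from your
100.1%/99.9% example):

    #include <stdint.h>

    /* Sign-only correction: at each syncpoint, run the local clock
     * slightly fast or slightly slow depending only on whether the
     * buffer holds more or less data than the buffer_fullness stored
     * in the syncpoint.  A real receiver would pick a step matched to
     * its worst-case drift D. */
    static double clock_scale(int64_t actual_fullness,
                              int64_t expected_fullness)
    {
        if (actual_fullness > expected_fullness)
            return 1.001; /* buffer too full: consume faster  */
        if (actual_fullness < expected_fullness)
            return 0.999; /* buffer too empty: consume slower */
        return 1.0;
    }
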
>> This is not how clock sync is usually done.  A typical implementation
>> involves a PLL-type construct to make the local clock accurately track
>> the sender clock.  Once locked, there is very little drift.  To correctly
>> compensate for what little drift inevitably remains, the size of the
>> error must be known.
>
> Could you elaborate on how PLL-based clock sync with transmit_ts works?
> I am no PLL expert; what I know is more of the sort where a PLL takes a
> periodic reference signal, like a sine wave, as input, not occasional
> scalars which represent time since some point 0.
> I am also fine with an RTFM + a URL.

I recommend ISO 13818-1/ITU-T H.222.0 Annex D.  It can be downloaded for
free from http://www.itu.int/rec/T-REC-H.222.0-200605-I/en
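
The short version: the signed time error between each received
timestamp and the local clock drives a proportional-integral loop that
trims the local clock frequency.  A rough sketch, not from any
particular implementation, with illustrative gains:

    /* PLL-style clock recovery sketch.  Each received (sender_ts,
     * local_ts) pair yields a signed error that adjusts the local
     * clock rate; kp and ki are illustrative loop gains. */
    typedef struct {
        double freq;     /* local rate relative to nominal (1.0) */
        double integral; /* accumulated error term */
    } ClockPLL;

    static void pll_update(ClockPLL *pll, double sender_ts, double local_ts)
    {
        const double kp = 0.05, ki = 0.005;  /* illustrative gains */
        double error = sender_ts - local_ts; /* seconds */

        pll->integral += error;
        pll->freq = 1.0 + kp * error + ki * pll->integral;
    }

Once locked, the error stays near zero and freq converges on the true
ratio between the sender and receiver clocks; this is exactly where the
magnitude, not just the sign, of the error matters.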

>> The time difference can of course be computed from the difference in buffer
>> fullness and the received bitrate; it merely takes a little more work on
>> the receiver side.
>
> Instead of transmit_ts one can use
> internal_clock_ts + (buffer_fullness < real_fullness ? D : -D).
> That should provide a pretty good reference for the PLL, IMHO.

This could easily end up oscillating around the correct rate, whereas
using the precise error value would give a near-perfect match.
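
To make the extra work explicit: given the received bitrate over the
last interval, the fullness difference converts directly into a signed
time error, which can then drive the same loop a real transmit_ts
would.  A sketch, with hypothetical names:

    #include <stdint.h>

    /* Convert a buffer-fullness discrepancy into a signed time error,
     * assuming the receive byte rate over the last interval is known.
     * A positive result means the local clock runs slow relative to
     * the sender. */
    static double fullness_to_time_error(int64_t actual_fullness,
                                         int64_t expected_fullness,
                                         double bytes_per_sec)
    {
        return (actual_fullness - expected_fullness) / bytes_per_sec;
    }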

>> >> Providing the timestamp in the stream makes this trivial and independent
>> >> of the buffering mechanism actually used.  Only specifying expected
>> >> buffer fullness (according to a reference model) requires that the
>> >> receiver at the very least simulate the reference model,
>> >
>> > I think most receivers will use something quite similar to the reference
>> > model, thus making this unneeded. Though yes, a receiver using a different
>> > buffer model might need to simulate the reference one.
>> 
>> I think it very unlikely that any real implementation will use whatever
>> precise buffer model we choose.  Just about any implementation is
>> likely to immediately extract the elementary streams of interest, and
>> discard everything else, such as container headers and unwanted elementary
>> streams.
>
> Well, we can discard the container headers in the reference model as well.

We could, but I don't quite like the idea.  The difference in
implementation effort is fairly small, and I prefer to keep the spec
as simple as possible.

>> > But I have difficulty imagining a sufficiently different buffer model.
>> > I mean, a receiver with split buffers could just take the sum of
>> > their buffer_fullness values.
>> > A receiver which removes packets later, or not instantaneously, would just
>> > traverse the last few packets to find out how much the reference buffer
>> > would contain.
>> 
>> I'm not saying it would be very difficult to simulate the reference buffer,
>> but something is always more than nothing.
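
To be concrete: for the split-buffer case the simulation could be as
small as the sketch below (hypothetical names), but it is still extra
code somebody has to write and test.

    #include <stdint.h>

    /* Reference-buffer fullness for a receiver that splits data into
     * per-stream buffers: simply the sum of the parts. */
    static int64_t reference_fullness(const int64_t *stream_fullness,
                                      int nb_streams)
    {
        int64_t total = 0;
        for (int i = 0; i < nb_streams; i++)
            total += stream_fullness[i];
        return total;
    }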
>
> Yes, but OTOH transmit_ts would not scale nicely with increased bandwidth.
> Do you have any suggestions for avoiding this disadvantage?

Could you elaborate on this issue?

-- 
Måns Rullgård
mans at mansr.com


