[NUT-devel] Another suggestion for broadcast [PATCH]

Michael Niedermayer michaelni at gmx.at
Mon Feb 11 02:19:29 CET 2008


On Thu, Feb 07, 2008 at 01:58:32PM -0000, Måns Rullgård wrote:
> 
> Michael Niedermayer wrote:
[...]
> > That way the buffer_fullness stored in the syncpoint will always match
> > exactly the amount the decoder has in its buffer when reading the
> > syncpoint. If it has more in its buffer it would just change its clock
> > to one running at 100.1%, and when it has less in its buffer it would
> > choose a 99.9% clock as reference. (Or any approximately equivalent
> > process.)
> 
> That the buffer fullness is off by N bits doesn't tell you how much too
> fast or too slow your clock is, only the sign of the error.  

yes


> Knowing
> also the magnitude of the error allows much more rapid convergence.

I am not so sure about this. I don't dispute that more information should
improve things, but I think knowing just too much/too little is good enough.

A simple example: let's assume we have a decoder whose clock drifts by up
to D between syncpoints.
That is, even in the ideal case we have to accept being up to D off when
we reach a syncpoint, assuming we synced perfectly at the previous one.

Now let's assume that we are within -2D .. +2D at syncpoint x, and that we
apply a +D correction if we are <0 and a -D correction if we are >0. This
correction could be applied gradually until the next syncpoint. What matters
is that after the correction we are within -D .. +D, and with the drift added
that is again -2D .. +2D at syncpoint x+1.
Since we start at an error of 0, which is within -2D .. +2D, this is a proof
by induction that knowing just the sign of the error and the worst-case clock
drift is sufficient to stay within a factor of 2 of the best achievable clock
sync. (Comparing worst cases, not averages.)
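
As a quick sanity check, a toy simulation of the sign-only ±D correction
(D, the interval count and the uniform drift model are arbitrary choices
for illustration) confirms the worst-case error stays within 2D:

#include <stdio.h>
#include <stdlib.h>

#define D          1.0      /* assumed worst-case drift per interval */
#define SYNCPOINTS 1000000  /* arbitrary number of intervals to test */

int main(void)
{
    double error = 0.0, worst = 0.0;
    int i;
    srand(1);
    for (i = 0; i < SYNCPOINTS; i++) {
        /* sign-only correction of magnitude D, as in the text */
        error += error < 0 ? D : -D;
        /* random drift in [-D, +D] until the next syncpoint */
        error += D * (2.0 * rand() / RAND_MAX - 1.0);
        if ( error > worst) worst =  error;
        if (-error > worst) worst = -error;
    }
    printf("worst error seen: %f (claimed bound: %f)\n", worst, 2 * D);
    return 0;
}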


> Providing the timestamp in the stream makes this trivial and independent
> of the buffering mechanism actually used.  Only specifying expected
> buffer fullness (according to a reference model) requires that the
> receiver at the very least simulate the reference model, 

I think most receivers will use something quite similar to the reference
model, making this unneeded. Though yes, a receiver using a different
buffer model might need to simulate the reference one.

But I have difficulty imagining a sufficiently different buffer model.
A receiver with split buffers could just take the sum of their
buffer_fullness values (see the sketch after this paragraph).
A receiver which removes packets later, or not instantaneously, would just
traverse the last few packets to find out how much the reference buffer
would contain.
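
For the split-buffer case, something like this minimal sketch, where the
per-stream variable names are made up for illustration:

/* hypothetical receiver keeping audio and video in separate buffers */
unsigned ref_fullness = video_buffer_fullness + audio_buffer_fullness;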

And if all else fails, something like the following should do:

#include <stdint.h>   /* int64_t  */
#include <stdlib.h>   /* realloc  */
#include <string.h>   /* memmove  */

typedef struct {
    int64_t  dts;
    unsigned size;
} packet;

/* Track what the reference buffer would contain: append the incoming
 * packet, then drop every stored packet whose dts the clock has already
 * passed (packets arrive in dts order). */
buffer_fullness += in_bytes;
buffer = realloc(buffer, sizeof(packet) * (buffer_index + 1));
buffer[buffer_index].dts  = in_dts;
buffer[buffer_index].size = in_bytes;
for (i = 0; i < buffer_index; i++) {
    if (buffer[i].dts > current_time)
        break;
    buffer_fullness -= buffer[i].size;
}
buffer_index++;
memmove(buffer, buffer + i, sizeof(packet) * (buffer_index - i));
buffer_index -= i;
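
With that reference fullness in hand, the clock choice quoted at the top
could be a hypothetical helper like the one below; the 0.1% step is just
the example figure from the quoted text:

/* pick a slightly fast or slow reference clock based on the mismatch */
static double clock_rate(unsigned fullness, unsigned syncpoint_fullness)
{
    if (fullness > syncpoint_fullness)
        return 1.001; /* buffer too full: our clock is slow, speed it up */
    if (fullness < syncpoint_fullness)
        return 0.999; /* buffer too empty: our clock is fast, slow it down */
    return 1.0;
}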

Anyway, if you say a timestamp would be better, I suspect Rich would be
harder to convince than me.

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Why not whip the teacher when the pupil misbehaves? -- Diogenes of Sinope