[FFmpeg-devel] [Jack-Devel] [PATCH] libavdevice: JACK demuxer

Fons Adriaensen fons
Thu Mar 5 13:15:21 CET 2009


On Thu, Mar 05, 2009 at 03:02:14AM +0100, Michael Niedermayer wrote:

> On Wed, Mar 04, 2009 at 09:31:44PM +0100, Fons Adriaensen wrote:
>
> > > well, the filter will take the first system time it gets as its
> > > best estimate
> > 
> > there's no alternative at that time
> 
> this is not strictly true, though it certainly is not what i meant, but
> there very well can be a systematic error so that the first time + 5ms
> might be better, as a hypothetical example.

Might be, or not. With just one value you don't know.

If the sequence of measured times is t_i, then t_1 - t_0
*could* give a first estimate of the true sample rate.
Most sound cards (even the cheapest ones) are within
0.1% of the nominal value. Which means the random
error on t_1 - t_0 (the jitter) will be much larger
than the systematic one, and the sample rate computed
from the first two values is useless. The same applies
to any trick you may want to use to get a 'quick'
value for the sample rate error.
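
To put some illustrative numbers on that (all figures below are
assumptions for the sake of the argument, not measurements from
any real card and not from the patch):

#include <stdio.h>

/* Back-of-the-envelope sketch: why t_1 - t_0 says nothing about a
 * 0.1% sample rate error.  All figures are illustrative assumptions. */
int main(void)
{
    double fs       = 48000.0;        /* nominal sample rate            */
    double frames   = 1024.0;         /* frames per period (assumed)    */
    double period   = frames / fs;    /* ~21.3 ms nominal period        */
    double rate_err = 1e-3;           /* assumed 0.1% clock rate error  */
    double jitter   = 200e-6;         /* assumed 200 us of read jitter  */

    printf("systematic error per period:  %.1f us\n", 1e6 * rate_err * period);
    printf("random error per measurement: %.1f us\n", 1e6 * jitter);
    printf("rate error seen in t_1 - t_0: about %.1f%%\n",
           100.0 * jitter / period);
    return 0;
}

With those (assumed) figures the jitter corrupts t_1 - t_0 by
roughly 1%, ten times the systematic error you are trying to
measure.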

> > > and then "add" future times slowly into this to
> > > compensate. That is, it weights the first sample
> > > very differently from the following ones; this is
> > > clearly not optimal.
> > 
> > What makes you think that ?
> 
> common sense

Which is usually defeated by exact analysis. There is
nothing mysterious about this, it's control loop /
filter theory that's at least 50 years old now.

> > > or in other words the noisiness, or call it accuracy, of its internal state
> > > will be very poor after the first sample while after a hundred it will
> > > be better.
> > 
> > Which is what matters. 
> 
> yes but it would converge quicker were the factor not fixed

As long as the system is linear and time-invariant (you don't
modify the parameters on the fly), the quickest convergence
for any given bandwidth is obtained with critical damping. That
is again something any engineering student learns in the first
few months. It's basic maths, nothing else. If your common
sense tells you otherwise, then your common sense is wrong.
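
For reference, the kind of second-order loop I am talking about
looks like this, with the two gains chosen for critical damping.
This is only my own sketch to illustrate the maths; the names and
the coefficient form are the standard textbook ones, not the code
from the posted patch:

#include <math.h>

/* Minimal second-order timing loop, critically damped.  Sketch for
 * illustration only. */
typedef struct {
    double time;     /* filtered time of the current period    */
    double period;   /* filtered period length, in seconds     */
    double b, c;     /* proportional and integral loop gains   */
} TimeLoop;

/* bw: loop bandwidth in Hz, period: nominal period in seconds */
static void loop_init(TimeLoop *l, double bw, double period, double t0)
{
    double w  = 2.0 * M_PI * bw * period;  /* bandwidth per period       */
    l->b      = 2.0 * w;                   /* critical damping: zeta = 1 */
    l->c      = w * w;
    l->period = period;
    l->time   = t0;
}

/* Call once per period with the raw (jittery) system time; the
 * return value is the filtered timestamp for that period. */
static double loop_update(TimeLoop *l, double system_time)
{
    double e   = system_time - l->time;    /* loop error              */
    double out;
    l->time   += l->b * e;                 /* correct the phase       */
    l->period += l->c * e;                 /* correct the period      */
    out        = l->time;
    l->time   += l->period;                /* predict the next period */
    return out;
}

With b = 2*w and c = w*w the damping ratio b / (2*sqrt(c)) is
exactly 1, which is what 'critical damping' means here.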

You can make it converge quicker by using a higher bandwidth
initially, or by using non-linear filter techniques which will
be a lot more complicated, or sometimes by ad-hoc tricks.
**But all of this is useless**. As long as the filter settles
within a few seconds all is OK.
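
For completeness, the 'higher bandwidth initially' trick would
amount to something like the following (again only a sketch with
made-up numbers, not part of the posted patch):

/* Hypothetical bandwidth schedule: wide at first so the loop locks
 * within a few periods, narrow in the steady state. */
static double loop_bandwidth(int periods_seen)
{
    if (periods_seen < 16)
        return 2.0;    /* Hz: fast but noisy lock     */
    if (periods_seen < 64)
        return 0.5;    /* Hz: intermediate            */
    return 0.05;       /* Hz: low-jitter steady state */
}

The gains b and c then have to be recomputed from the schedule
before each update, which makes the loop time-variant - and that
is precisely why the critical-damping argument above no longer
applies to it.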
 
> > > The filter though will add samples in IIR fashion while ignoring
> > > this
> > 
> > It's called exponential averaging, which means recent
> > samples have more weight than older ones. Without that
> > a system can't be adaptive.
> 
> that isn't true, one can design a non-exponential filter that is
> adaptive as well.

I did not say it has to be exponential. I said it has to give
higher weight to more recent data.
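
To make the weighting explicit, a first-order exponential
averager is just (sketch):

/* Each new sample enters with weight a; a sample n steps old
 * retains weight a * (1 - a)^n, so recent data always dominates. */
static double exp_avg(double state, double sample, double a)
{
    return state + a * (sample - state);
}

The second-order loop does the same thing in effect, on both the
time and the period estimate.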


> here's a simple example with the recently posted timefilter patch.
> the code below will simulate random uncorrelated jitter and a sample rate
> error, and it will find the best values for both parameters using a really
> lame search.

Since your error statistics include the values during
the initial settling time - which are completely
irrelevant in this case - they are invalid. The only
thing that matters is the long-term performance.
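
If the search were redone, the error should only be accumulated
after the loop has settled, along these lines (a self-contained
sketch: the jitter model, rate error, bandwidth and settling
window are all assumptions, and this is not your posted test code):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double period   = 1024.0 / 48000.0;  /* nominal period, seconds       */
    double rate_err = 1e-3;              /* assumed 0.1% clock rate error */
    double jitter   = 200e-6;            /* assumed 200 us of read jitter */
    int    n        = 20000;             /* periods simulated             */
    int    settle   = 1000;              /* periods excluded as transient */

    /* critically damped loop as sketched above, 0.05 Hz bandwidth */
    double w  = 2.0 * M_PI * 0.05 * period;
    double b  = 2.0 * w, c = w * w;
    double ft = 0.0, fp = period;        /* filtered time and period      */

    double sum_sq = 0.0;
    int    count  = 0;
    int    i;

    for (i = 0; i < n; i++) {
        double truth = i * period * (1.0 + rate_err);
        double noise = jitter * (2.0 * rand() / RAND_MAX - 1.0);
        double meas  = truth + noise;

        double e         = meas - ft;    /* loop error                    */
        double corrected = ft + b * e;   /* filtered time for this period */
        fp += c * e;                     /* adjust the period estimate    */
        ft  = corrected + fp;            /* predict the next period       */

        if (i >= settle) {               /* only the settled region       */
            double d = corrected - truth;
            sum_sq += d * d;
            count++;
        }
    }
    printf("steady-state rms error: %.1f us\n", 1e6 * sqrt(sum_sq / count));
    return 0;
}

The interesting number is the one printed at the end; the first
'settle' periods only tell you how long the lock-in takes, not
how good the filter is afterwards.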

I'm not going to waste any more time on this, unless
you can at least show you understand the basic theory
and we have some common ground.

Ciao,

-- 
FA

Laboratorio di Acustica ed Elettroacustica
Parma, Italia

O you, what do you bring, running so?
Both war and death!



