[FFmpeg-devel] [Issue 664] [PATCH] Fix AAC PNS Scaling
Alex Converse
alex.converse
Tue Oct 7 03:52:51 CEST 2008
On Mon, Oct 6, 2008 at 9:39 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
> On Mon, Oct 06, 2008 at 08:52:06PM -0400, Alex Converse wrote:
>> On Mon, Oct 6, 2008 at 8:22 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
>> > On Mon, Oct 06, 2008 at 03:46:55PM -0400, Alex Converse wrote:
>> >> On Tue, Sep 30, 2008 at 11:25 PM, Alex Converse <alex.converse at gmail.com> wrote:
>> >> > Hi,
>> >> >
>> >> > The attached patch should fix AAC PNS scaling [Issue 664]. It will not
>> >> > fix PNS conformance.
>> >>
>> >> Here's a sightly updated patch (sqrtf instead of sqrt). The current
>> >> method of PNS will never conform because sample energy simpl doesn't
>> >> converge to it's mean fast enough. The spec explicitly states that PNS
>> >> should be normalized per band. Not doing it that way causes PNS-1
>> >> conformance to fail for 45 bands.
>> >
>> > elaborate, what part of the spec says what?
>>
>> 14496-3:2005/4.6.13.3 p184 (636 of the PDF)
>>
>> > what is PNS-1 conformance ?
>>
>> 14496-4:2004/6.6.1.2.2.4 p94 (102 PDF)
>> 14496-5/conf_pns folder
>
> do you happen to have URLs for these?
>
>
>>
>> > the part that feels a little odd is normalizing random data on arbitrary
>> > and artificial bands, this simply makes things non random.
>> > This would be most extreemly vissibly with really short bands of 1 or 2
>> > coeffs ...
>> > another way to see the issue is to take 100 coeffs and split them into
>> > 10 bands, if you now normalize litterally these 10 bands then the 100
>> > coeffs will no longer be random at all, they will be significantly
>> > correlated. This may be inaudible, it may or may not sound better and
>> > may or may not be what the spec wants but still it feels somewhat wrong
>> > to me ...
>> >
>>
>> Ralph Sperschneider from FhG/MPEG spelled it all out:
>> http://lists.mpegif.org/pipermail/mp4-tech/2003-June/002358.html
>>
>> I'm not saying it's a smart way to design a CODEC but it's what MPEG picked.
>
> yes, so i guess the most sensible solution would be to precalculate
> a second of noise normalized to the band sizes and randomly pick from
> these.
>
That sounds messy and overly complex. What's wrong with doing it the
way MPEG tells us to? Or just sticking with what we have it sounds
fine and is fast.
>
>>
>> >
>> >>
>> >> However with this patch there appears to be no audible difference
>> >> between the approaches.
>> >
>> >> I don't know the ideal mean energy so I'm
>> >> using the sample mean energy for 1024 iterations of the LCG.
>> >
>> > i assume cpu cycles got more expensive if people can only spare a few
>> > thousand
>> >
>>
>> How many do you propose then? I tried running it over the whole period
>> and the result seemed low, I think it's a classic case of adding too
>> many equal size floating point values.
>
> real mathematicans tend not to use floats that are bound to rounding errors
>
> try:
> for(i=min; i<=max; i++){
> uint64_t a= i*i;
> var += a;
> if(var < a){
> var2++;
> }
> }
>
That will only hold 5 or 6 big values 2^64/((2^31)^2) = 4.
--Alex
More information about the ffmpeg-devel
mailing list