[FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests for cases with extra intermediate precision

Guo, Yejun yejun.guo at intel.com
Fri Apr 24 14:54:17 EEST 2020



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Martin
> Storsjö
> Sent: Friday, April 24, 2020 6:44 PM
> To: FFmpeg development discussions and patches <ffmpeg-devel at ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests for
> cases with extra intermediate precision
> 
> On Thu, 23 Apr 2020, Guo, Yejun wrote:
> 
> >> -----Original Message-----
> >> From: ffmpeg-devel [mailto:ffmpeg-devel-bounces at ffmpeg.org] On Behalf
> >> Of Martin Storsjö
> >> Sent: Thursday, April 23, 2020 2:34 PM
> >> To: ffmpeg-devel at ffmpeg.org
> >> Subject: [FFmpeg-devel] [PATCH] dnn-layer-mathbinary-test: Fix tests
> >> for cases with extra intermediate precision
> >>
> >> This fixes tests on 32 bit x86 mingw with clang, which uses x87 fpu
> >> by default.
> >>
> >> In this setup, while the get_expected function is declared to return
> >> float, the compiler is (especially given the optimization flags set)
> >> free to keep the intermediate values (in this case, the return value
> >> from the inlined function) in higher precision.
> >>
> >> This results in the situation where 7.28 (which actually, as a float,
> >> ends up as 7.2800002098), multiplied by 100, is
> >> 728.000000 when really forced into a 32 bit float, but 728.000021
> >> when kept with higher intermediate precision.
> >>
> >> For the multiplication case, a more suitable epsilon would e.g.
> >> be 2*FLT_EPSILON*fabs(expected_output),
> >
> > thanks for the fix. LGTM.
> >
> > Just one question about 2*FLT_EPSILON*fabs(expected_output):
> > what is the rationale behind it? A ULP (units of least precision) based
> > method looks like a good choice, see https://bitbashing.io/comparing-floats.html.
> > Anyway, let's use the hardcoded threshold for simplicity.
> 
> FLT_EPSILON corresponds to 1 ULP when the exponent is zero, i.e. in the range
> [1,2] or [-2,-1]. So by doing FLT_EPSILON*fabs(expected_output) you get the
> magnitude of 1 ULP for the value expected_output. By allowing a difference of 2
> ULP it would be a bit more lenient - not sure if that aspect really is relevant or
> not.
> 
> This would work fine for this particular test: you have two input values that
> should be represented the same in both implementations, and you perform one
> single operation on them, so the only difference _should_ be how the end
> result is rounded. When testing a more complex function that performs a
> series of operations for closeness, you would have to account for a ~1 ULP
> rounding error in each of the steps, and calculate how those rounding errors
> could be magnified by later operations.
> 
> And especially if you have two potentially inexact numbers that are close to
> each other and perform a subtraction, you'll have loss of significance, and the
> error in that result is way larger than 1 ULP of that particular number.

Thanks a lot Martin, will push now.

> 
> // Martin
> 

