[FFmpeg-devel] [PATCH 2/2] checkasm: sw_scale: Reduce range of test data in the yuv2yuvX test to get closer to real data

Martin Storsjö martin at martin.st
Thu Aug 18 10:22:53 EEST 2022


On Wed, 17 Aug 2022, Ronald S. Bultje wrote:

> On Wed, Aug 17, 2022 at 4:32 PM Martin Storsjö <martin at martin.st> wrote:
>       This avoids overflows on some inputs in the x86 case, where the
>       assembly version would clip/overflow differently from the
>       C reference function.
>
>       This doesn't seem to be a real issue with actual input data, but
>       only with the previous fully random input data.
> 
> 
> I'm a bit scared of this change... If we can trigger overflows with specific
> pixel patterns, doesn't that make FFmpeg input-data exploitable? Think of
> how that would go with corporate users with user-provided input data.

No, this most probably isn't a real issue with actual filters - it's only 
that the current checkasm test was overly simplistic.

The input to this DSP function isn't raw user-provided input pixels, but 
16 bit integers produced as the output of the first (horizontal) scaling 
step. Yesterday when I wrote the patch, I hadn't checked exactly what the 
range of those values was, and I assumed it wasn't the whole int16_t 
range - but apparently they can range at least up to max int16_t. (They 
most probably can't range down to the minimum negative int16_t though - 
simulating that aspect would be nice too.)

The filter coefficients should add up to 4096 (1 << 12). The input sample 
range is 15 bits (plus sign), and the filter coefficients contribute 
another 12 bits of magnitude, giving a total intermediate range of 27 
bits (plus sign). After shifting down by 19 bits at the end, this 
produces 8 bits of output (which is clipped).
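
To make the bit counts concrete, this is roughly what the C reference 
does - a minimal sketch modelled on yuv2planeX_8_c in 
libswscale/output.c, with the dither seeding omitted and the helper 
names simplified:

#include <stdint.h>

static uint8_t clip_uint8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

static void yuv2planeX_sketch(const int16_t *filter, int filterSize,
                              const int16_t **src, uint8_t *dest, int dstW)
{
    for (int i = 0; i < dstW; i++) {
        int val = 0; /* the real code seeds this with a dither term */
        for (int j = 0; j < filterSize; j++)
            val += src[j][i] * filter[j]; /* 16 bit * 12 bit -> 27 bit sum */
        dest[i] = clip_uint8(val >> 19);  /* 27 bits >> 19 -> 8 bit output */
    }
}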

The critical stage here is that 27-bit intermediate: in a 32-bit 
accumulator there's still plenty of headroom (4 bits) for filter 
overshoot - with a real filter.

However, in the current test the filter coefficients are just plain 
random int16_t values across the whole range - and that can easily cause 
overflows in the intermediates.
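
To put a (hypothetical worst-case) number on it: just three taps of 
32767 each hitting inputs of 32767 already sum to 3 * 32767 * 32767 = 
3221028867, which exceeds INT32_MAX (2147483647), so a 32-bit 
accumulator wraps around.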

So I guess we shouldn't scale down the input "pixels" here, since they 
actually can use the whole range up to max int16_t (though ideally they 
wouldn't range further below zero than what the maximal negative filter 
overshoot would produce). Instead, we should scale down the fully random 
filter coefficients, so that they can't overflow even if they all happen 
to align in the worst way.
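
A minimal sketch of that option - the 16-tap maximum and the coefficient 
bound are assumptions chosen for illustration, and rand() stands in for 
checkasm's rnd() test RNG:

#include <stdint.h>
#include <stdlib.h>

static void randomize_filter(int16_t *filter, int filterSize)
{
    /* Bound random coefficients so that even a worst-case alignment
     * fits: with |input| <= 32768 and at most 16 taps (assumed here),
     * |coef| <= 2047 bounds the sum by 16 * 2047 * 32768 = 1073217536,
     * well below INT32_MAX. */
    for (int j = 0; j < filterSize; j++)
        filter[j] = (int16_t)((rand() % 4095) - 2047); /* -2047..2047 */
}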

Alternatively, we could construct a more realistic test filter, e.g. 
something like what's used in the hscale test. There, if the filter 
should add up to 1<<F and we have N filter coefficients, all of them but 
one are set to -((1<<F)/(N-1)) and one is set to ((1<<(F+1)) - 1). It 
doesn't look much like a real filter, but it keeps the important 
properties: it adds up to the right sum, doesn't trigger unrealistic 
overflows, and produces both positive and negative values.
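
Spelled out as a sketch - F, N and the function name are illustrative, 
and rand() again stands in for checkasm's rnd():

#include <stdint.h>
#include <stdlib.h>

enum { F = 12, N = 8 }; /* target sum 1 << F = 4096; N taps */

static void make_test_filter(int16_t *filter)
{
    for (int j = 0; j < N; j++)
        filter[j] = -((1 << F) / (N - 1));  /* N-1 equal negative taps */
    filter[rand() % N] = (1 << (F + 1)) - 1; /* one large positive tap
                                                at a random position */
}

With F = 12 and N = 8: seven taps of -585 (sum -4095) plus one tap of 
8191 add up to exactly 4096, and the filter has both signs, so it also 
exercises negative overshoot.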

Anyway, I'll update the patch and make a clearer comment for it.

// Martin

