[FFmpeg-devel] [PATCH] Make binkaudio work with ff_float_to_int16_interleave_c

Alex Converse alex.converse
Wed Mar 10 18:40:41 CET 2010


On Wed, Mar 10, 2010 at 12:06 PM, Ronald S. Bultje <rsbultje at gmail.com> wrote:
> Hi,
>
> On Wed, Mar 10, 2010 at 5:46 AM, Martin Storsjö <martin at martin.st> wrote:
>> The bink audio decoder produces distorted output if the
>> float_to_int16_interleave function happens to be implemented by
>> ff_float_to_int16_interleave_c. All other audio decoders using this
>> function have special casing for the case when float_to_int16_interleave
>> is implemented by ff_float_to_int16_interleave_c, adding a particular
>> bias and scale factor.
>
> Independent of the patch (I'm not maintainer), and maybe this is just
> me, but why is this the case? This just smells like BBBBUUUUUUGGGGGGG
> to me. Does the 1-cycle gain that you got through this really justify
> the real problems that quite apparently result from it?
>

I agree with Ronald.

ff_float_to_int16_interleave_c uses some IEEE 754 bit hacks to pull an
int16 out of a float. Most other implementations of this use native
vectorized instructions.
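For reference, the kind of bias trick being discussed can be sketched roughly as below. This follows the classic IEEE 754 recipe and is not copied from FFmpeg's actual implementation; the function name and the decoder-side bias of 385.0 are illustrative assumptions:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the classic IEEE 754 bias trick (illustrative, not verbatim
 * FFmpeg code): a sample in [-1.0, 1.0) is shifted into [384.0, 386.0).
 * In that range a binary32 float has exponent 8, so one mantissa LSB is
 * worth 2^-15, and the low 16 mantissa bits directly encode the sample
 * as an unsigned offset 0..65535. */
static int16_t float_to_int16_bias(float sample)
{
    float biased = sample + 385.0f;       /* bias the decoder would add */
    uint32_t bits;
    memcpy(&bits, &biased, sizeof bits);  /* reinterpret without UB */
    int32_t tmp = (int32_t)(bits & 0xFFFF); /* low mantissa bits: 0..65535 */
    return (int16_t)(tmp - 0x8000);       /* recenter around zero */
}
```

This is why a decoder feeding such a conversion routine has to bake the particular bias and scale into its output: feed it plain [-1.0, 1.0) samples and the bit pattern no longer lines up, which is exactly the distortion reported for bink audio.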

I'd like to see more audio decoders outputting in their native sample
format and getting converted to the target format by a better
audioconvert.c, particularly when the codecs have optional postfilters
or bandwidth extensions.

Floating-point audio maintainers at least need a better way to test
this case. Too often bugs creep in and go unnoticed until somebody
with old hardware complains.


