[FFmpeg-devel] [PATCH] DCA floating point output
Mon Apr 26 07:27:50 CEST 2010
2010/4/26 Måns Rullgård <mans at mansr.com>
> Trust me, it will. It will spend 5x more time in the conversion than
> in the decoder itself. I've seen it.
I am not sure how efficient the DCA decoding part is. At least when
downmixing is involved, I see a lot of floating point multiplications. Take
DCA_3F2R to DCA_STEREO downmixing, for example; it is done as:
MIX_FRONT3(samples, coef);
MIX_REAR2(samples, i + 768, i + 1024, 3, coef);
And MIX_FRONT3 and MIX_REAR2 are defined as:
#define MIX_REAR2(samples, si1, si2, rs, coef) \
    samples[i]     += samples[si1] * coef[rs] + samples[si2] * coef[rs + 1]; \
    samples[i+256] += samples[si1] * coef[rs] + samples[si2] * coef[rs + 1];

#define MIX_FRONT3(samples, coef) \
    t = samples[i]; \
    samples[i]     = t * coef[0] + samples[i+256] * coef[1] + \
                     samples[i+512] * coef[2]; \
    samples[i+256] = t * coef[0] + samples[i+256] * coef[1] + \
                     samples[i+512] * coef[2];
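To see the per-sample cost in one place, here is a stand-alone sketch of what the two macros do together for one 3F2R block. This is not the libavcodec code itself: the flat coef[] table and the function name are simplified, hypothetical stand-ins, but the channel-plane layout (256-sample planes at offsets 0, 256, 512, 768, 1024) matches the calls above.

```c
/* Hypothetical sketch, not the actual decoder code: fold a 3F2R block
 * (C, L, R, Ls, Rs as five 256-sample planes) down to stereo in place.
 * coef[] is a simplified stand-in for the decoder's downmix table. */
static void downmix_3f2r_to_stereo(float *samples, const float coef[5])
{
    for (int i = 0; i < 256; i++) {
        /* MIX_FRONT3 part: 3 multiplications per output sample */
        float t = samples[i];            /* centre, overwritten below */
        float u = samples[i + 256];
        float v = samples[i + 512];
        samples[i]       = t * coef[0] + u * coef[1] + v * coef[2];
        samples[i + 256] = t * coef[0] + u * coef[1] + v * coef[2];
        /* MIX_REAR2 part: 2 more multiplications per output sample */
        samples[i]       += samples[i + 768] * coef[3]
                          + samples[i + 1024] * coef[4];
        samples[i + 256] += samples[i + 768] * coef[3]
                          + samples[i + 1024] * coef[4];
    }
}
```

With every input sample at 1.0 and all coefficients at 0.5, each output sample comes out as 1.5 (front) + 1.0 (rear) = 2.5, which makes the 3 + 2 = 5 multiplications easy to verify.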
So you see, for each output sample there are already 5 multiplications (3 in
MIX_FRONT3 plus 2 in MIX_REAR2). I guess adding another floating point
division at the output won't hurt that much.
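The extra output-stage work would amount to something like the following sketch, which is an assumption about the patch's conversion, not the patch itself: the decoder's int16-range float samples are scaled down to the [-1.0, 1.0] range, one division per sample, next to the five downmix multiplications above.

```c
/* Hypothetical sketch of the extra output-stage cost: scale int16-range
 * float samples into the [-1.0, 1.0] range, one division per sample. */
static void int16_range_to_float(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i] / 32768.0f;
}
```

(In practice a compiler, or the coder, would turn the division into a multiply by the constant reciprocal, making it cheaper still.)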
I assume downmixing 5.1 channels to stereo is the scenario most people use
when it comes to DTS, especially for those who are still on old hardware.