[FFmpeg-devel] [PATCH] Support for reducing from 5.1 and 4.0 audio to stereo audio
Rich Felker
dalias
Wed Oct 31 17:26:37 CET 2007
On Wed, Oct 31, 2007 at 04:38:57PM +0100, Axel Holzinger wrote:
> > > +static void quad_to_stereo(short *output, short *input, int n)
> > > +{
> > > +    int i;
> > > +    for(i=0; i<n; i++) {
> > > +        output[0] = (input[0] + input[2]) >> 1;
> > > +        output[1] = (input[1] + input[3]) >> 1;
> > >
> >
> > shouldn't it be /2 instead of >>1 ?
>
> It's ints, what is the difference?
The difference is what happens to negative values: /2 truncates towards
zero, while >>1 (an arithmetic shift on typical implementations) rounds
towards -∞. However, I think rounding in the consistent direction
(towards -∞) is better than rounding towards 0. The latter could create
artifacts; the former will only create a DC bias, which is inaudible.
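To make the rounding difference concrete, here is a tiny standalone
snippet (not from the patch) comparing the two operations on a negative
sample sum. Note that >> on a negative value is formally
implementation-defined in C; the -2 result assumes the usual
two's-complement arithmetic shift.

#include <stdio.h>

int main(void)
{
    int sum = -3;                         /* e.g. two samples summing to -3  */
    printf("sum / 2  = %d\n", sum / 2);   /* -1: truncates towards zero      */
    printf("sum >> 1 = %d\n", sum >> 1);  /* -2: rounds towards -inf on the
                                             usual arithmetic-shift targets  */
    return 0;
}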
> > > +        output += 2;
> > > +        input += 4;
> > > +    }
> > > +}
> > > +
> > > +
> > > +static void ac3_5p1_to_stereo(short *output, short *input, int n1)
>
> The correct way to handle this is to follow the ITU matrix:
>
> Lo = L + 0.7 * C + k * Ls
> Ro = R + 0.7 * C + k * Rs with default k = 0.7
>
> You see: No division! Instead you have to implement clipping.
>
> Why no division?
>
> Imagine you have a 5.1 signal, but only L and R hold any signal. If you
> divide, the stereo level will be decreased and you will lose dynamics.
This is correct, though. Channel reduction is _supposed_ to be a lossy
operation. A stream with content only in the L/R channels is "less
loud" than a stream with content on all 6 channels (when played on a
real 5.1 system), and thus it should come across less loud when
downmixed. If you really have 5.1 content where all but 2 of the
channels are empty, you should be using a channel-dropping filter
rather than downmixing anyway, but only an idiot would produce such
content to begin with.
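For reference, here is a minimal sketch of the ITU-style matrix Axel
describes above, using clipping rather than division. The interleaved
channel order (L, R, C, LFE, Ls, Rs), the integer approximation of 0.7,
and the function names are assumptions for illustration, not code from
the patch:

/* Hypothetical helper: clamp an int to the signed 16-bit sample range. */
static short clip16(int v)
{
    if (v >  32767) return  32767;
    if (v < -32768) return -32768;
    return v;
}

static void itu_5p1_to_stereo(short *output, const short *input, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        int L  = input[0], R  = input[1], C = input[2];
        int Ls = input[4], Rs = input[5];   /* input[3] = LFE, dropped */
        /* Lo = L + 0.7*C + k*Ls, Ro = R + 0.7*C + k*Rs, with k = 0.7,
         * then clip to 16-bit range instead of dividing the sum. */
        output[0] = clip16(L + (7 * C + 7 * Ls) / 10);
        output[1] = clip16(R + (7 * C + 7 * Rs) / 10);
        output += 2;
        input  += 6;
    }
}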
Rich