[FFmpeg-user] Preserving perceived loudness when downmixing audio from 5.1 AC3 to stereo AAC
Andy Furniss
adf.lists at gmail.com
Thu Aug 8 00:44:07 CEST 2013
Nicolas George wrote:
> On decadi 20 thermidor, year CCXXI, Andy Furniss wrote:
>> Yeah, but if the codec does, then maybe the code could try to do
>> what's best for the user who requested stereo by using it. The user
>> may not know the inner workings of every codec, but the code can.
>
> Oh, I see. It cannot currently work: decoding and filtering are
> completely separate processes, and there is currently no API to query
> a codec for the channel layouts it can produce natively.
Ahh, OK.
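
For what it's worth, the request side does already exist - it's the
query side that's missing. A minimal sketch of that one-way mechanism
as I understand it (error handling omitted; request_channel_layout is
only a hint, the decoder is free to ignore it):

    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>

    /* Ask the decoder to produce stereo natively if it can. This is
     * only a request, not a query: there is no way to find out in
     * advance whether the codec supports it, so the caller still has
     * to check the channel layout of the frames it actually gets. */
    static int open_decoder_request_stereo(AVCodecContext *avctx,
                                           const AVCodec *codec)
    {
        avctx->request_channel_layout = AV_CH_LAYOUT_STEREO;
        return avcodec_open2(avctx, codec, NULL);
    }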
>
>> Of course dca should be exempt until it's fixed, but that should
>> be for another thread/further analysis :-)
>
> Is there a trac ticket?
Not filed by me (yet), and I haven't searched for an existing one either.
I will file a bug once I've tracked down samples that are more "normal"
in size and format (my channel-check sample is DTS-HD MA and 1.1 GB)
and have had time to test and look more at the code (which will take a
while).
Currently I think there are three, possibly separate, issues. In summary:

The downmix is too loud/clipping - this may or may not be the same as
the old aformat issue discussed here.
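
To put a number on the clipping half of this (made-up samples and
standard -3 dB coefficients - just an illustration, not what lavc
actually does):

    #include <stdio.h>

    /* Illustration only: an unnormalised 5.1 -> stereo downmix of a
     * loud passage sums past full scale (1.0), i.e. it clips. */
    int main(void)
    {
        double fl = 0.9, fc = 0.8, bl = 0.6; /* near-full-scale samples */
        double c  = 0.70710678;              /* -3 dB mix coefficient */
        double lo = fl + c * fc + c * bl;    /* straight matrix downmix */

        printf("unnormalised Lo = %f (> 1.0, clips)\n", lo); /* ~1.89 */

        /* Scaling by 1 / (sum of coefficients) guarantees no overshoot. */
        double norm = 1.0 / (1.0 + c + c);
        printf("normalised Lo   = %f\n", lo * norm);         /* ~0.78 */
        return 0;
    }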
The default matrix in libavcodec/dcadata.h looks odd: assuming
dca_default_coeffs refers to dca_downmix_coeffs, it could explain what
I hear on my MA channel check - L being mixed into R at -6 dB, etc.
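
For comparison, this is what I would have expected a sane default
5.1 -> stereo table to look like (ITU-style, purely illustrative values,
not the actual dcadata.h contents): no L fed into R at all, and -3 dB
(~0.708 linear) rather than -6 dB (~0.501 linear) where attenuation is
wanted.

    /* Illustrative only - an ITU-R BS.775-style 5.1 -> stereo default,
     * not the table from libavcodec/dcadata.h. Rows are the output
     * channels, columns the inputs. Note there is no FL -> Ro or
     * FR -> Lo cross-feed, unlike what I seem to be hearing. */
    static const float expected_downmix[2][5] = {
        /*          FL     FR     FC       BL       BR      */
        /* Lo */ { 1.0f,  0.0f,  0.7071f, 0.7071f, 0.0f    },
        /* Ro */ { 0.0f,  1.0f,  0.7071f, 0.0f,    0.7071f },
    };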
Why is the default matrix being used at all? I would have expected
studio material to carry downmix metadata and so never hit that matrix -
again, more samples/testing/time needed.