[Ffmpeg-devel] Question about parameters
Mon Dec 25 10:36:25 CET 2006
> Michael Niedermayer wrote:
>> On Sat, Dec 23, 2006 at 07:06:39PM -0800, Aaron Williams wrote:
>>> Robert Swain wrote:
>>>> On 24 Dec 2006, at 01:57, Aaron Williams wrote:
>>>>> I am writing a new audio "codec" which basically computes the peak
>>>>> volume for normalizing audio in another pass, and am wondering if
>>>>> there is a standard way for my codec to introduce a new parameter so
>>>>> I can specify the RMS window size? What is the best way for me to add
>>>>> this parameter to pass to my module? My goal is to use the output
>>>>> from this pass to adjust the -vol parameter in the transcoding pass.
>>> I will repeat my question: How does one add codec specific parameters
>>> to the command line of ffmpeg without adding a general purpose option
>>> and without bloating AVCodecContext with new fields not needed for
>>> codecs? There are a number of parameters I wish to add for
>>> normalization such as RMS window size, target volume, thresholds and an
>>> output log filename.
>>> Another audio feature I would like to add is the ability to adjust the
>>> level of individual channels.
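[Adjusting individual channel levels, as proposed above, amounts to one gain factor per channel applied to interleaved PCM. A minimal sketch, assuming 16-bit interleaved samples; the function name and clipping policy are illustrative, not an existing FFmpeg option:]

```c
#include <stddef.h>

/* Apply an independent gain factor to each channel of interleaved
 * 16-bit PCM. gain[] holds one factor per channel. */
void apply_channel_gains(short *samples, size_t nb_samples,
                         int channels, const double *gain)
{
    for (size_t i = 0; i < nb_samples; i++) {
        double v = samples[i] * gain[i % channels];
        if (v > 32767.0)  v = 32767.0;    /* clip to 16-bit range */
        if (v < -32768.0) v = -32768.0;
        samples[i] = (short)v;
    }
}
```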
>> what you describe are audio filters, not audio codecs, and while it
>> should be possible to implement filters as raw-PCM codecs, that would
>> cause a lot of problems (think of chains or networks of codecs) -> it's
>> not acceptable for ffmpeg svn, and as it's not acceptable for svn there's
>> no point in asking here how to solve the various problems ensuing from
>> this misdesign
> One drawback of a filter is that the normalizing pass is basically a
> dead-end. By definition, two passes are required. The first pass must
> process the entire file and find the peak levels. Also, as was
> previously stated, there currently is no audio filter support. If this
> were present then I might make use of this. Right now the normalization
> pass goes very quickly since it only decodes the audio stream to PCM and
> does no video decoding.
> If there is a misdesign, it is in the basic ffmpeg itself in that there
> is no support for audio filters or for dynamic parameters. Each codec
> and/or filter should be able to supply additional command-line
> parameters as needed. This would add greater flexibility and eliminate
> parameters in core data structures that are only used by one or two
> codecs.
> Actually, it fits quite nicely as a codec, though it does not output
> anything. The current design lets it easily specify the window size
> which works out perfectly. My only requirement is that I be able to
> pass enough information to generate a small log file with the results of
> the analysis.
> As far as modifying the volume output, this can already be accomplished
> during the second transcoding pass by using the -vol parameter. I have
> made a change so that it can take a value like it currently does (i.e.
> defaults to 256), or one can specify the output volume in dB. My goal is
> to also be able to specify a log file to be read at startup with the
> volume adjustments, which should be simple to do.
This reminds me that ffmpeg allows two passes to find the optimal video
bitrate: it produces a log file in the first step that the second step
then reads. Wouldn't it be natural to add audio normalizing into the
current two-pass scheme?