[FFmpeg-devel] [RFC] libavfilter audio API and related issues
S.N. Hemanth Meenakshisundaram
Sun May 23 07:37:18 CEST 2010
On 05/02/2010 12:08 PM, Stefano Sabatini wrote:
> On date Wednesday 2010-04-28 07:07:54 +0000, S.N. Hemanth Meenakshisundaram encoded:
>> Stefano Sabatini<stefano.sabatini-lala<at> poste.it> writes:
>>> Here follow some notes about a possible design for the audio support
>>> in the AVFilterSamples struct (already defined in afilters, but
>>> renamed AVFilterBuffer at some point).
>>> Follows a possible definition (with some differences with respect to
>>> the one currently implemented in afilters):
>>> Audio/video synchronization
>>> Some design work has to be done to understand how request_samples()
>>> and request_frame() can work together.
>>> I'm only considering ffplay for now, as it looks simpler than ffmpeg.
>>> Currently audio and video follow two separate paths: audio is
>>> processed by the SDL thread through the sdl_audio_callback function,
>>> while the video thread reads from the video queue whenever there are
>>> video packets available and processes them.
>> Currently, the sdl audio callback gets a decoded audio buffer via the
>> audio_decode_frame call and then seems to be doing AV sync via the
>> synchronize_audio call.
>> I was thinking about replacing this with the audio_request_samples function
>> suggested above. This is similar to what happens in video. The request_samples
>> would then propagate backwards through the audio filter chain until the input
>> audio filter (src filter) calls the audio_decode_frame to get decoded audio
>> samples and then passes them up the filter chain for processing.
> Yes something of the kind get_filtered_audio_samples().
> Note also that the filterchain initialization is currently done in
> video_thread(), that should be moved somewhere else.
>> Does this sound ok? Since the sdl_audio_callback will be making a
>> synchronize_audio call only after this, any additional delay introduced by
>> filtering would also get adjusted for.
> Looks (sounds?) fine, a more general solution may require a
> synchronization filter, but I suspect this would require a significant
> revamp of the API.
I started off making the ffplay changes required for audio filtering,
just to get an idea of everything an audio filter API will require.
Attached is a rudimentary draft of the changes. It is merely meant to
help understand the required design; based on it, I have the following
questions and observations:
1. ffplay currently gets only a single sample back for every
audio_decode_frame call (even if an encoded packet decodes to multiple
samples). Should we push each sample individually through the filter
chain, or would it be better to collect a number of samples and then
filter them together?
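Collecting samples before filtering could be as simple as a small accumulation buffer in front of the filter chain. A sketch, where BLOCK_SIZE, the filter_block() hook, and the counting callback are all made-up names for illustration:

```c
/* Sketch: accumulate single decoded samples into fixed-size blocks
 * before handing them to the filter chain in one call. BLOCK_SIZE and
 * the filter_block() hook are illustrative, not part of any real API. */
#define BLOCK_SIZE 1024

typedef struct {
    short samples[BLOCK_SIZE];
    int   nb;                               /* samples accumulated so far */
    void (*filter_block)(short *s, int n);  /* called on each full block */
} SampleAccumulator;

static void accumulate_sample(SampleAccumulator *a, short s)
{
    a->samples[a->nb++] = s;
    if (a->nb == BLOCK_SIZE) {   /* full block: filter it in one go */
        a->filter_block(a->samples, a->nb);
        a->nb = 0;
    }
}

/* Counting callback, just to make the sketch observable. */
static int flushed_blocks = 0;
static void count_block(short *s, int n)
{
    (void)s; (void)n;
    flushed_blocks++;
}
```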
2. Can the sample rate, audio format etc. change between samples? If
not, can we move those parameters to the AVFilterLink structure as Bobby
suggested earlier? The AVFilterLink structure also needs to be generalized.
3. The number of channels can also be stored in the filter link, right?
That way, we will know how many of the data pointers are valid.
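If those properties really are fixed per link, the generalized link might carry them alongside the existing video fields. This is purely illustrative, field names mirror the questions above rather than any actual struct:

```c
/* Illustrative sketch of a generalized AVFilterLink carrying per-stream
 * audio properties, so per-buffer structs need not repeat them.
 * All field names are hypothetical. */
typedef struct AVFilterLinkSketch {
    /* video properties */
    int w, h;
    int pix_fmt;      /* enum PixelFormat in real code */
    /* audio properties, fixed for the lifetime of the link */
    int sample_rate;  /* e.g. 44100 */
    int sample_fmt;   /* enum SampleFormat in real code */
    int nb_channels;  /* tells how many data[] pointers are valid */
} AVFilterLinkSketch;
```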
4. Do we require linesize for audio? I guess linesize here would
represent the length of the data in each channel. Isn't that already
captured by the sample format? Can different channels ever have
different data sizes for a sample?
5. Is it necessary to have a separate num_samples value in the BufferRef
or Buffer (in case we filter multiple samples at a time)? Can we instead
capture it as part of a more useful 'datasize' variable that can be used
directly when copying data between filters?
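For items 4 and 5, the per-channel size is derivable from the sample format and the sample count, so simple helpers could stand in for both linesize and a stored datasize. A sketch, with a made-up format enum whose values are the bytes per sample of a few common formats:

```c
/* Sketch: derive buffer sizes from (sample format, channel count,
 * sample count) instead of storing linesize/datasize explicitly.
 * The enum is a stand-in for SampleFormat; values are bytes per sample. */
enum SampleFormatSketch { FMT_U8 = 1, FMT_S16 = 2, FMT_S32 = 4 };

/* bytes occupied by one channel's data */
static int channel_size(int bytes_per_sample, int nb_samples)
{
    return bytes_per_sample * nb_samples;
}

/* total payload, usable directly when copying between filters */
static int data_size(int bytes_per_sample, int nb_channels, int nb_samples)
{
    return nb_channels * channel_size(bytes_per_sample, nb_samples);
}
```

Since all channels of a buffer share one sample format, per-channel sizes cannot differ, which suggests a per-channel linesize would be redundant for audio.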
Also, if we are converting the AVFilterPic structure into a more generic
AVFilterBuffer that is referred to by both an AVFilterPicRef and an
AVFilterBufferRef, should the video-specific items like PixFormat be
removed and kept confined to the PicRef and BufferRef?
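The split being asked about could look something like this. The layout is hypothetical, intended only to make the question concrete, not to propose the final structs:

```c
/* Sketch of the proposed split: a media-agnostic buffer that only owns
 * the data, with video- and audio-specific properties kept in the
 * respective reference structs. All names are illustrative. */
#include <stdint.h>

typedef struct {
    uint8_t *data[8];   /* planes (video) or channels (audio) */
    int      refcount;
} BufferSketch;

typedef struct {        /* video reference: pixel-specific fields */
    BufferSketch *buf;
    int pix_fmt, w, h;
    int linesize[8];
} PicRefSketch;

typedef struct {        /* audio reference: sample-specific fields */
    BufferSketch *buf;
    int sample_fmt, sample_rate, nb_channels, nb_samples;
} BufferRefSketch;
```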