[FFmpeg-devel] [RFC] libavfilter audio API and related issues

Stefano Sabatini stefano.sabatini-lala
Mon Apr 5 13:55:43 CEST 2010


Here follow some notes about a possible design for audio support in
libavfilter.

AVFilterSamples struct 
======================

(Already defined in afilters, but renamed AVFilterBuffer at some
point.)

Here follows a possible definition (with some differences with respect
to the one currently implemented in afilters):

typedef struct AVFilterSamples
{
    uint8_t *data;
    int data_size;              ///< data size in bytes
    enum SampleFormat format;   ///< format of a single sample

    unsigned refcount;          ///< number of references to this buffer

    /** private data to be used by a custom free function */
    void *priv;
    /** free callback, called when the last reference is released */
    void (*free)(struct AVFilterSamples *samples);
} AVFilterSamples;

typedef struct AVFilterSamplesRef
{
    AVFilterSamples *samples;
    uint8_t *data;              ///< samples data
    unsigned data_size;         ///< data size in bytes

    int64_t pts;                ///< presentation timestamp in units of 1/AV_TIME_BASE

    unsigned sample_rate;       ///< number of samples per second

    int perms;                  ///< permissions
} AVFilterSamplesRef;
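
For illustration, reference management could mirror the video API's
avfilter_ref_pic()/avfilter_unref_pic(). The following is only a rough
sketch; the names avfilter_ref_samples()/avfilter_unref_samples() and
the exact semantics are just placeholders (av_malloc()/av_free() come
from libavutil):

AVFilterSamplesRef *avfilter_ref_samples(AVFilterSamplesRef *ref, int pmask)
{
    AVFilterSamplesRef *ret = av_malloc(sizeof(AVFilterSamplesRef));
    *ret = *ref;
    ret->perms &= pmask;        /* restrict permissions as requested */
    ret->samples->refcount++;   /* one more reference to the buffer  */
    return ret;
}

void avfilter_unref_samples(AVFilterSamplesRef *ref)
{
    if (!--ref->samples->refcount)
        ref->samples->free(ref->samples);  /* last reference frees the buffer */
    av_free(ref);
}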


Note that I'm assuming the samples always come from a mono-channel
source. If we want to support samples coming from N-channel sources
this needs to be extended; in that case the struct should probably be
more general than the data/linesize scheme used for pictures, since
with pictures we only deal with a maximum of 4 channels (planes).

Another idea suggested by Michael in [1] is to generalize
data/linesize to make it able to store samples too.

Note also that SampleFormat is quite different from PixelFormat, as it
only refers to the representation of a single sample, so it only gives
information about a single channel.
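
As a rough sketch of that generalization (all field names below are
placeholders, not an agreed-upon layout), the buffer struct could carry
one pointer per channel, similarly to the picture planes but without
the fixed maximum of 4:

typedef struct AVFilterSamples
{
    uint8_t **data;             /* one pointer per channel (planar), or a
                                 * single interleaved buffer in data[0]   */
    int *linesize;              /* size in bytes of each channel's buffer */
    int nb_channels;            /* number of channels                     */
    int planar;                 /* 1 if channels are in separate planes   */
    enum SampleFormat format;   /* per-sample, per-channel format         */

    unsigned refcount;
    void *priv;
    void (*free)(struct AVFilterSamples *samples);
} AVFilterSamples;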


Processing API
==============

Based on the video API, this could consist of the functions:
* avfilter_get_audio_buffer()
* avfilter_request_samples()
* avfilter_poll_samples() ?
* avfilter_send_samples()

avfilter_send_samples() corresponds to
avfilter_{start_frame,draw_slice,end_frame}; since for audio we don't
have the concept of a "frame", this single function should be enough.
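
A rough sketch of the prototypes, modeled on their video counterparts
(the parameter lists are only a guess at this stage):

/* allocate a samples buffer with the given permissions and size */
AVFilterSamplesRef *avfilter_get_audio_buffer(AVFilterLink *link,
                                              int perms, int size);

/* ask the filter feeding link to send samples (pull model) */
int avfilter_request_samples(AVFilterLink *link);

/* query whether samples can be sent immediately on link */
int avfilter_poll_samples(AVFilterLink *link);

/* push a buffer of samples to the filter downstream of link */
void avfilter_send_samples(AVFilterLink *link, AVFilterSamplesRef *samplesref);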


Audio/video synchronization
===========================

Some design work has to be done to understand how request_samples()
and request_frame() can work together.

I'm only considering ffplay for now, as it looks simpler than ffmpeg.

Currently audio and video follow two separate paths: audio is
processed by the SDL thread through the sdl_audio_callback() function,
while the video thread reads from the video queue whenever video
packets are available and processes them.

No A/V synchronization mechanism is currently provided for this in
ffplay, as A/V was supposed to be processed in real time with no
further processing.
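
Just to illustrate one possible interaction (purely hypothetical, the
variable names and the loop below are not actual ffplay code), an
application could alternate pull requests on its audio and video
outputs, always serving the output whose last timestamp is smaller:

/* out_audio/out_video: links feeding the audio and video sinks;
 * audio_pts/video_pts: pts of the last data received by each sink */
for (;;) {
    if (audio_pts <= video_pts)
        avfilter_request_samples(out_audio);  /* pull samples through the audio chain */
    else
        avfilter_request_frame(out_video);    /* pull a frame through the video chain  */
}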


Audio API
=========

Much of what is discussed in [3] is also relevant to libavfilter, and
much of that functionality may be implemented in it if we decide to
follow that route; e.g. audio mixing may be implemented as a filter.

Also, it isn't clear where the audio API should be implemented.
libavfilter will need to use the resampling functionality; if we don't
want to make lavfi depend on lavc, then this should be moved somewhere
else (e.g. libavresample?).


References
==========

[1] afilters repo: svn://svn.ffmpeg.org/soc/afilters
[2] gsoc discussion: http://thread.gmane.org/gmane.comp.video.ffmpeg.soc/6163/focus=6178
[3] http://wiki.multimedia.cx/index.php?title=FFmpeg_audio_API
[4] http://wiki.multimedia.cx/index.php?title=Libavfilter
[5] http://wiki.multimedia.cx/index.php?title=FFmpeg_Summer_Of_Code_2010#Libavfilter_audio_work

....


Comments are welcome.

Regards.
-- 
FFmpeg = Fantastic and Fancy Mortal Problematic Erotic Gangster


