[FFmpeg-devel] MOD support for FFmpeg (My GSoC 2010 task starts tomorrow)

Sebastian Vater cdgs.basty
Fri May 28 13:50:04 CEST 2010


Ronald S. Bultje wrote:
> Hi,
>
> I had something similar in mind, we (Vitor, me and Sebastian)
> discussed this on IRC earlier. However, we'd like to more-or-less keep
> the mod format even in "decoded" samples.
>
> - write demuxers that can "parse" mod files. These would be small,
> like raw demuxers. The output is AVMEDIA_TYPE_AUDIO with
> CODEC_TYPE_S3M/XYZ etc.
> - write decoders that can "decode" mod file b{it,yte}streams into
> collections of notes, etc. The output would be SAMPLE_FMT_MOD or
> SAMPLE_FMT_COMPOSER. Mans wants it to be called SAMPLE_FMT_KITTEN.
> - the "tracker" would then be a converter in the style of
> audioconvert.c, which converts from SAMPLE_FMT_KITTY to
> SAMPLE_FMT_S16/U8/FLOAT
>   

I doubt that Mans was really serious about SAMPLE_FMT_KITTEN. ;-)
Regarding the name, I dislike SAMPLE_FMT_MOD, though, because the MOD
engine can also be used for MIDI support (playback of .MID files), like
OpenCubicPlayer does. It loads the MIDI samples from the GUS patches or
a similar instrument bank, converts the note data from .MID files into
its internal MOD playback engine and then lets it rock...
But I have another idea for the name: SAMPLE_FMT_SEQ (for SEQUENCER)...

> - a FIXME here's is how a user would choose the samplerate, maybe a
> AVCodecContext.request_sample_rate in the style of channel_mask?
> - Vitor had another bunch of comments here but I forgot what it was
> exactly. Vitor?
>   

Why not simply use the fields for sample parameters that already exist in
AVCodecContext?
For setting the mixing routine's output rate, just set avctx->sample_rate
to the desired value.
To be more precise:
If it is set to 0, the mixing rate is chosen by FFmpeg depending on the
target hardware (query AVDevice?). If a mixing rate is set but not
directly supported by the hardware, it should be rounded to the nearest
supported rate (e.g. the SB16 doesn't support 44100 Hz directly, but does
support 45454 Hz, so it would be rounded up to that).
If the target is a disk writer, I suggest it should default to 44.1 kHz,
16-bit stereo (CD quality)...
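
To illustrate the rounding idea, here is a minimal sketch; the helper name
and the supported-rate list are purely made up for illustration and are not
existing FFmpeg API:

    #include <stdlib.h>

    /* Hypothetical helper: round a requested mixing rate to the nearest
     * rate actually supported by the output device. */
    static int round_to_supported_rate(int requested,
                                       const int *supported_rates,
                                       int nb_rates)
    {
        int best = supported_rates[0];
        int best_diff = abs(requested - best);
        int i;

        for (i = 1; i < nb_rates; i++) {
            int diff = abs(requested - supported_rates[i]);
            if (diff < best_diff) {
                best      = supported_rates[i];
                best_diff = diff;
            }
        }
        return best;
    }

    /* With { 22050, 45454, 48000 }, a request for 44100 Hz would pick
     * 45454 Hz, matching the SB16 example above. */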

The same can be done for choosing between mono/stereo mixing
(avctx->channels), output bit depth (avctx->bits_per_coded_sample), etc.
For the internal mixer data (virtual channels, etc.) we need a new
structure anyway (AVMixer?). As already said, the internal mixer of
TuComposer currently always outputs 32-bit, which is then dithered down
to avctx->bits_per_coded_sample.
For example, the current Amiga version of TuComposer, when using the
Paula audio driver, dithers the 32-bit mixer output down to 14-bit (the
Amiga, although only having direct 8-bit audio output, can play true
14-bit sample quality with a neat trick). But that 14-bit issue isn't
FFmpeg's problem at all; the Amiga port of SDL deals with it when you
tell SDL to play back a 16-bit sample. ;-)
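
Just to make the dithering step concrete, here is a minimal sketch of
reducing one 32-bit mixer sample to 16-bit with simple rectangular
dithering; this is not TuComposer's actual code, and a real mixer would
probably use better noise shaping:

    #include <stdint.h>
    #include <stdlib.h>

    /* Reduce one 32-bit mixer sample to 16-bit, adding random noise
     * below the truncation point before dropping the low 16 bits. */
    static int16_t dither_32_to_16(int32_t sample)
    {
        int64_t v = (int64_t)sample + (rand() & 0xFFFF) - 0x8000;

        if (v > INT32_MAX) v = INT32_MAX;
        if (v < INT32_MIN) v = INT32_MIN;
        return (int16_t)(v >> 16);
    }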

> The advantage of this approach is that we can render between mod
> formats without complete re-encoding, we only have to reformat the
> b{it,yte}stream from the note collections.
>   

And convert unsupported sample formats to a supported one (e.g. the
original MOD format doesn't support 16-bit samples, some formats support
stereo samples while others don't, etc.). It's not just the note data.
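
For the sample-data side of such a conversion, something like the
following rough sketch would be needed when targeting a format that only
stores signed 8-bit samples (the function name is just an illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* Reduce signed 16-bit sample data to signed 8-bit by dropping
     * the low byte (a converter could also dither here). */
    static void samples_16_to_8(const int16_t *src, int8_t *dst, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            dst[i] = (int8_t)(src[i] >> 8);
    }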

We do need, though, some kind of virtual playback, which simulates a
complete playback run (this is required for calculating things like the
playback time or for creating indices for seek support). Virtual playback
is practically the same as normal playback, with the difference that it
outputs to a "null" mixer to be faster.
The null mixer is practically a normal mixer with input/output volume
always set to 0 (it skips the actual resampling part and just advances
the offsets). At least that's how I have implemented it currently.
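
In other words, the per-channel inner loop of the null mixer degenerates
into roughly the following; the structure and field names are invented
for illustration and are not the actual TuComposer code:

    #include <stdint.h>

    typedef struct VirtualChannel {
        uint64_t pos;        /* current sample position (fixed point)  */
        uint64_t step;       /* playback rate as fixed-point increment */
        uint64_t loop_start;
        uint64_t loop_end;   /* 0 if the sample does not loop          */
        uint64_t end;        /* end of a one-shot sample               */
    } VirtualChannel;

    /* Advance one channel by nb_samples output samples without doing
     * any resampling or volume work; only the offsets move. */
    static void null_mix_channel(VirtualChannel *ch, int nb_samples)
    {
        int i;
        for (i = 0; i < nb_samples; i++) {
            ch->pos += ch->step;
            if (ch->loop_end && ch->pos >= ch->loop_end)
                ch->pos = ch->loop_start + (ch->pos - ch->loop_end);
            else if (ch->pos >= ch->end)
                break;
        }
    }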

This is also useful for a really smart conversion routine which doesn't
just translate the host channels, but can also correctly convert
ImpulseTracker modules that use NNAs other than cut to formats which
don't support anything other than NNA cut.

> We've suggested that Sebastian starts with #1, then moves to #2. For
> #3 he could write a tucomposer wrapper to start with, but eventually
> we'd want something in FFmpeg. He can start from tucomp code
>   

Wrapper? There won't be a wrapper. TuComposer will actually be
integrated after doing all the refactoring (i.e. making it compliant with
the FFmpeg style guide), and the original TuComposer project will be
discarded completely. Just to clarify that. No wrappers, etc.
After all, if it works well in FFmpeg, there's no reason to keep
TuComposer as an external project anymore, i.e. the TuComposer download
link on my site will be replaced by a link to FFmpeg ;)
Since FFmpeg works very well, with very small changes, on a native m68k
Amiga (as ami_stuff said), I don't see any problem with that.

> where-ever he wants, but the code will eventually live in FFmpeg SVN
> and thus go through full review here. He'll probably need a ff-soc
> repo account to put his work-in-progress because keeping track of 100
> patches will make us dizzy.
>   

I guess 100 patches would not only make us dizzy... ;)
For the time being, though, it would be enough if I only got the rights
to commit to /libavcomposer/* (or whatever we end up calling it), should
we go for the new library approach. Later, when it's finished enough to
be usable, we can start integrating it into lavc. This way, the first
stage of development is completely independent of the rest, and we don't
do much harm by changing structures within /libavcomposer/*.
BTW, what about calling it libavsequencer?
Since using libavcomposer would cause a name clash with libavcodec when
abbreviated (lavc? Which one is meant now?), libavsequencer would be
less ambiguous anyway (lavs).

-- 

Best regards,
                   :-) Basty/CDGS (-:



