[MPlayer-dev-eng] Re: [xine-devel] Re: [gst-devel] Fwd: Re: [UCI-Devel] Re: Common Opensource codec API

D Richard Felker III dalias at aerifal.cx
Tue Dec 30 06:09:27 CET 2003


On Sun, Dec 28, 2003 at 07:20:49PM +0100, Enrico Weigelt wrote:
> At first thought, I see several modules in this project:
> 
> * audio codecs
>     --> gets in an encoded audio stream and puts out another encoding
>         which can be understood by the clients (e.g. audio hardware)
>     --> allows querying available audio codecs and their options
>     
> * audio playback 
>     --> gets in an audio stream (in a plain encoding like PCM)
>     --> allows querying available audio playback devices, their options
>         and supported encodings (i.e. to use MPEG support, etc.)
> 
> * audio recording
>     --> pulls out an audio stream (in a plain encoding like PCM)
>     --> allows querying available audio recording devices, their options
>         and supported encodings ...

The fact that you classify them like this shows that you have little,
if any, understanding of codecs or audio/video processing. The correct
classification (a rough C sketch of these interfaces follows the list)
is:

* audio encoding
    --> gets in raw audio and puts out compressed audio in the target
        encoding
    --> allows querying available encoders and their options

* audio decoding
    --> gets in compressed audio frames and outputs raw audio

* audio recording
    --> outputs an audio stream obtained from a recording device with
        the necessary timestamp information to correct for timer drift

* audio playback
    --> sends an audio stream to a playback device
    --> provides a means to determine buffer depth and delay so that
        the caller knows which sample is being played at any moment.
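
To make the split concrete, here's a rough sketch of what such
interfaces could look like in C. Every name below is hypothetical,
invented for illustration only; this is not an existing API:

#include <stdint.h>
#include <stddef.h>

/* encoding: raw in, compressed out -- hardware-independent */
typedef struct encoder {
    int (*encode)(struct encoder *e, const int16_t *pcm,
                  size_t nsamples, uint8_t *out, size_t outsize);
} encoder_t;

/* decoding: compressed frames in, raw out -- hardware-independent */
typedef struct decoder {
    int (*decode)(struct decoder *d, const uint8_t *frame,
                  size_t size, int16_t *pcm, size_t maxsamples);
} decoder_t;

/* recording: device-specific; must deliver timestamps so the
 * caller can correct for timer drift */
typedef struct recorder {
    int (*read)(struct recorder *r, int16_t *pcm,
                size_t maxsamples, double *timestamp);
} recorder_t;

/* playback: device-specific; must report how much audio is still
 * queued so the caller knows which sample is audible right now */
typedef struct player {
    int    (*write)(struct player *p, const int16_t *pcm,
                    size_t nsamples);
    double (*get_delay)(struct player *p);  /* seconds of queued,
                                               not-yet-heard audio */
} player_t;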

Further, these are all _very_ separate tasks. Recording and playback
have no relation whatsoever, and both are very device-specific and
thus player-specific (a player made to play on one device will not
have the same needs as a player meant to be used on a different
device). On the other hand, encoding and decoding (which are very much
_separate_ processes!) are entirely hardware-independent.
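
As a concrete example of why playback must expose its delay: the
sample actually being heard lags the last sample queued by whatever
is still sitting in the device buffer, and a/v sync has to be
computed against the former. With the hypothetical player_t sketched
above:

/* the stream position coming out of the speakers right now, given
 * the timestamp (in seconds) of the last sample handed to write();
 * player_t is the hypothetical interface sketched earlier */
double audio_clock(player_t *p, double pts_last_queued)
{
    return pts_last_queued - p->get_delay(p);
}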

(Same for video...)

In any case, ALL OF THIS STUFF ALREADY EXISTS in the form of
libavcodec and MPlayer (G1 and/or G2). We do not need you to reinvent
the wheel for us. We already understand how to make general-purpose
code that's also fast, and we're in the process of doing it. What part
of this do you fail to understand???
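
And the decode side really is a solved problem; against libavcodec
it's a few dozen lines. A sketch along the lines of the apiexample.c
that ships with the library -- function names from memory here, so
check avcodec.h rather than trusting this verbatim:

#include <stdio.h>
#include <inttypes.h>
#include "avcodec.h"

/* decode an mp2 elementary stream from stdin to raw pcm on stdout */
int main(void)
{
    AVCodec *codec;
    AVCodecContext *c;
    uint8_t inbuf[4096], *p;
    short outbuf[AVCODEC_MAX_AUDIO_FRAME_SIZE / 2];
    int size, len, out_size;

    avcodec_init();
    avcodec_register_all();
    codec = avcodec_find_decoder(CODEC_ID_MP2);
    c = avcodec_alloc_context();
    if (!codec || avcodec_open(c, codec) < 0)
        return 1;

    while ((size = fread(inbuf, 1, sizeof inbuf, stdin)) > 0) {
        p = inbuf;
        while (size > 0) {
            /* returns the number of compressed bytes consumed;
             * out_size is the decoded pcm output in bytes */
            len = avcodec_decode_audio(c, outbuf, &out_size, p, size);
            if (len < 0)
                return 1;
            if (out_size > 0)
                fwrite(outbuf, 1, out_size, stdout);
            p += len;
            size -= len;
        }
    }
    avcodec_close(c);
    return 0;
}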

As far as MPlayer is concerned, you prove that your ideas are
worthwhile by writing CODE that WORKS and that's FAST, not by spamming
somebody else's mailing list telling them they have to do things YOUR
WAY when you've never written a line of useful code. If all you can do
is come up with abstract ideas with no relation to the actual problems
that arise in a/v processing, then you need to stop wasting money on
college cs courses with stupid C++/Java/C# professors and dig into
some real code.

Rich




