[MPlayer-G2-dev] Re: Limitations in vo2 api :(

D Richard Felker III dalias at aerifal.cx
Sat Dec 20 12:11:42 CET 2003


On Sat, Dec 20, 2003 at 12:26:23PM +0200, Andriy N. Gritsenko wrote:
> >IMO the vp/vo/muxer[/demuxer?] integration is only appropriate if it
> >can be done at a level where the parts share a common api, but don't
> >in any way depend on one another -- so they can still all be used
> >independently.
> 
>     I fully agree with that.

I don't think you were agreeing with what I said, but with something
different...

> There must be a common (stream-independent) part
> of the API - that part will be used by every layer, and it must contain
> such things as stream/codec type, PTS, duration, some frame/buffer
> pointer, control and config proc pointers, and maybe some others.
> Layer-specific data (audio-, video-, subs-, menu-specific, etc.) must
> be specified only in that layer.
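
So, if I follow you, the common part would be something like this (a
hypothetical sketch to make the discussion concrete -- none of these
names exist in G2):

    typedef struct g2_stream {
        int type;            /* stream/codec type: video/audio/subs/... */
        double pts;          /* presentation timestamp */
        double duration;
        void *buffer;        /* some frame/buffer pointer */
        int (*control)(struct g2_stream *s, int cmd, void *arg);
        int (*config)(struct g2_stream *s, void *params);
        void *priv;          /* layer-specific (audio/video/subs/menu) data */
    } g2_stream_t;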

Eh? Are you saying to use a common transport/pipeline api for audio,
video, and subs??? This is nonsense!! They have _very_ different
requirements. That's not to say you can't interconnect them; you could
have a module that serves as both a video node and an audio node. The
most common case for this would be a 'visualization' effect for music.
Another more esoteric but fun example is:


vd ----> vf_split ----------------------------------------> vo
           \     [video layer]
            `---> vf/sf_ocr
                    \            [subtitle layer]
                     `--> sf/af_speechsynth
                             \
                              \           [audio layer]
                               `--> af_merge -------------> ao
                                      /
ad ----------------------------------'


While this looks nice in my pretty ascii diagram, the truth of the
matter is that audio, video, and text are nothing alike, and can't be
passed around via the same interfaces. For example, video requires
ultra-efficient buffering to avoid wasted copies, and comes in
discrete frames. Text/subtitles are very small and can be passed
around in any naive way you want. Audio should be treated as a
continuous stream, with automatic buffering between filters.
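
Just to make this concrete, the three interfaces would have to look
roughly like this (hypothetical sketch, not actual G2 code):

    /* Video: discrete frames; buffers come from the destination so
     * filters can render into them directly, with no wasted copies. */
    typedef struct vframe {
        unsigned char *planes[3];   /* Y/U/V plane pointers */
        int stride[3];              /* bytes per line, per plane */
        int w, h;
        double pts;
        void (*release)(struct vframe *f);  /* recycle the buffer */
    } vframe_t;

    /* Subtitles: tiny, self-contained events; any naive passing works. */
    typedef struct sub_event {
        double start, end;          /* display interval */
        char *text;
    } sub_event_t;

    /* Audio: a continuous stream; a filter just pulls samples and the
     * layer handles buffering between filters automatically. */
    typedef struct astream {
        int rate, channels;
        int (*read)(struct astream *s, void *buf, int samples);
    } astream_t;

Three shapes with almost nothing in common beyond timing info, which
is exactly the problem.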

>     This way we could manipulate "connections" from streams to the muxer
> in some generic way and be free to have any number of audio/video/subtitle
> streams in the resulting file.

The idea is appealing, but I don't think it can be done; the
type-specific requirements above leak into any "generic" connection.
If you have a suggestion that's not a horrible hack, please make it.
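
To illustrate what I mean by a horrible hack: any "generic"
stream->muxer connection I can imagine degenerates into a big switch
on the stream type, i.e. the layers end up type-specific anyway, just
hidden behind a void pointer (again a hypothetical sketch):

    enum { STREAM_VIDEO, STREAM_AUDIO, STREAM_SUB };
    typedef struct mux_stream { int type; /* + type-specific fields */ } mux_stream;

    /* per-type writers -- each needs different extra information */
    int mux_write_video(mux_stream *s, void *data, double pts);
    int mux_write_audio(mux_stream *s, void *data, double pts);
    int mux_write_sub(mux_stream *s, void *data, double pts);

    /* the "generic" entry point immediately dispatches on type */
    int mux_write(mux_stream *s, void *data, double pts)
    {
        switch (s->type) {
        case STREAM_VIDEO:  /* wants keyframe flags, buffer ownership */
            return mux_write_video(s, data, pts);
        case STREAM_AUDIO:  /* wants sample counts, continuous timing */
            return mux_write_audio(s, data, pts);
        case STREAM_SUB:    /* wants start/end times, text encoding */
            return mux_write_sub(s, data, pts);
        }
        return -1;
    }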

>     Also, when we have some common part, the wrapper may use only that
> common part, so it'll be as simple as possible, and the player/encoder
> won't need to know layer internals and will be simpler too. This
> includes your example above about muxer/demuxer without any codecs. :)
>     Also, when we have a common part in some common API, we'll document
> that common part only once and the specific parts also once, so it'll
> reduce API documentation too. :)

And now we can be slow like Xine!! :))))

Rich



