[MPlayer-G2-dev] Re: Limitations in vo2 api :(

Andriy N. Gritsenko andrej at lucky.net
Sat Dec 20 11:26:23 CET 2003


    Hi, D Richard Felker III!

Sometime (on Friday, December 19 at 21:58) I received the following:
>On Fri, Dec 19, 2003 at 04:49:11PM +0200, Andriy N. Gritsenko wrote:
>> >Also, as I already said on IRC, we should IMHO integrate the
>> >vo api into the vp and drop it. It would give us one less api
>> >to document/learn and also remove some duplicated work, as
>> >a vo module is much like a vf w/o an output.
>> 
>>     In addition to that we will have a unified API for the muxer
>> too, i.e. the muxer will be "connected" to the vp API the same way as
>> the vo, and this will help a lot. :)  Encoder software will just make
>> that "connection" and do a-v sync. I think it may be done much the
>> way mplayer does it, but instead of real time the sync reference must
>> be audio PTS.
>>     So add my vote for unifying vo and vp APIs. :)
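
    (To make the PTS point above concrete, here is a minimal sketch of
such an encode loop, with audio PTS as the master clock; all names are
hypothetical, nothing from an existing G2 API:)

typedef struct stream_link {
    double pts;                               /* PTS of the last pulled frame */
    int (*pull_frame)(struct stream_link *s); /* fetch next frame, update pts */
    int (*mux_frame)(struct stream_link *s);  /* hand current frame to muxer */
} stream_link_t;

/* Interleave by PTS: flush every video frame due at or before the
 * current audio PTS, so audio, not real time, drives the sync. */
static void encode_loop(stream_link_t *audio, stream_link_t *video)
{
    int have_video = (video->pull_frame(video) == 0);

    while (audio->pull_frame(audio) == 0) {
        while (have_video && video->pts <= audio->pts) {
            video->mux_frame(video);
            have_video = (video->pull_frame(video) == 0);
        }
        audio->mux_frame(audio);
    }
}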

>I'm still not sure about this. There are two sides to it:

>On the one hand, unified api means fewer apis to learn and document,
>less wrapper code, and simpler implementation for the player app.

>On the other, having separate apis means you can take the layers apart
>and use one without the other. For instance, the demuxer and muxer
>layers could be used without any codecs or video processing to repair
>broken files, or to move data from an mkv container to a nut container
>(hey, I guess that also qualifies as repairing broken files :)))).

>Actually I'm somewhat inclined to integrate more into the vp layer
>directly, but certainly not until after the audio subsystem has been
>overhauled too. And I _don't_ want a repeat of the "let's make
>everything part of the config layer!!" fiasco...

>IMO the vp/vo/muxer[/demuxer?] integration is only appropriate if it
>can be done at a level where the parts share a common api, but don't
>in any way depend on one another -- so they can still all be used
>independently.

    Fully agree with that. There must be a common (stream-independent)
part of the API - that part will be used by every layer, and it must
contain such things as the stream/codec type, PTS, duration, a
frame/buffer pointer, control and config proc pointers, and maybe some
others. Layer-specific data (audio, video, subs, menu, etc.) must be
defined only in that layer.
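    (A rough sketch of what that common head could look like; all names
here are hypothetical, just to illustrate the idea:)

typedef enum { STREAM_AUDIO, STREAM_VIDEO, STREAM_SUB, STREAM_MENU } stream_type_t;

typedef struct node {
    stream_type_t type;      /* stream/codec type */
    double pts;              /* PTS of the current frame */
    double duration;         /* duration of the current frame */
    void *frame;             /* frame/buffer pointer; layout is layer-specific */
    int (*control)(struct node *n, int cmd, void *arg); /* control proc */
    int (*config)(struct node *n, void *params);        /* config proc */
    void *priv;              /* layer-specific data: audio/video/subs/menu */
} node_t;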
    This way we could manipulate "connections" from streams to the muxer
in a generic way and be free to have any number of audio/video/sub
streams in the resulting file.
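    (Continuing that sketch: since every stream exposes the same common
head, connecting any number of them to the muxer is one generic call;
again, all names are hypothetical:)

#define MUX_MAX_STREAMS 32

typedef struct muxer {
    node_t *inputs[MUX_MAX_STREAMS]; /* any mix of audio/video/subs */
    int n_inputs;
} muxer_t;

static int muxer_connect(muxer_t *m, node_t *src)
{
    if (m->n_inputs >= MUX_MAX_STREAMS)
        return -1;                  /* no free slot */
    m->inputs[m->n_inputs++] = src; /* the muxer reads only the common
                                     * head: type, pts, duration, frame */
    return 0;
}

    A second audio or subtitle track is then just one more
muxer_connect() call.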
    Also, once we have that common part, a wrapper can use only the
common part, so it will be as simple as possible, and the player/encoder
won't need to know layer internals and will be simpler too. That covers
your example above about using the muxer/demuxer without any codecs. :)
    Also, with the common part in one common API, we document the common
part only once and each specific part once as well, so it reduces the
API documentation too. :)

    With best wishes.
    Andriy.



