[MPlayer-G2-dev] Re: Limitations in vo2 api :(

D Richard Felker III dalias at aerifal.cx
Sat Dec 20 15:05:22 CET 2003


On Sat, Dec 20, 2003 at 02:31:42PM +0200, Andriy N. Gritsenko wrote:
> >I don't think you were agreeing with what I said, but with something
> >different...
> 
>     It may be. First of all, my native language is very different from
> English, so there may be (and it seems there are) some misunderstandings.

No problem. You write in English rather well, actually.

> >> There must be a common (stream-independent) part
> >> of the API - that part will be used by any layer and it must contain such
> >> things as stream/codec type, PTS, duration, some frame/buffer pointer,
> >> control and config proc pointers, and maybe some others. Layer-specific
> >> data (such as audio, video, subs, menu, etc. specific) must be specified
> >> only in that layer.
> 
> >Eh? Are you saying to use a common transport/pipeline api for audio,
> >video, and subs??? This is nonsense!! They have _very_ different
> >requirements.
> 
>     Hmm, it seems I've said something misunderstandable again, sorry. I
> meant not a common transport but a common API only, i.e. the muxer must not
> know about video/audio/subs/etc. - the encoder program must pass some common
> API structure to it and so on. ;)
 *snip*
>     Let's illustrate it in a diagram:
 *snip*
> [video], [audio], and [common] are the API structures of the video layer,
> the audio layer, and the common part respectively.
> 
>     I hope now I've said it clearly enough to be understandable. :)

Yeah, IMO it's a bit simpler than that even, though. For example, the
vp (and eventually ap :) layer just uses the demuxer api to get
packets from the demuxer. The link is much simpler than links inside
the actual video pipeline: the vd or demuxer-wrapper (not sure which
we'll use yet) just has a pointer to the demuxer stream, which was
given to it by the calling app.
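
Roughly something like this -- just a sketch, all the names here are made
up for illustration, not the actual g2 api:

  /* sketch only; type and function names are placeholders, not real code */
  typedef struct vd_node {
      struct demux_stream *ds;   /* demuxer stream handed to us by the app */
      /* ... codec-private state ... */
  } vd_node_t;

  /* the link to the demuxer is nothing more than that pointer; the vd
     just pulls packets through the demuxer api as it needs them */
  static int vd_decode_one(vd_node_t *vd)
  {
      struct demux_packet *pkt = demux_get_packet(vd->ds);
      if (!pkt)
          return 0;              /* no data available / end of stream */
      /* decode pkt->data, pkt->len; tag the output frame with pkt->pts */
      return 1;
  }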

> >While this looks nice in my pretty ascii diagram, the truth of the
> >matter is that audio, video, and text are nothing alike, and can't be
> >passed around via the same interfaces. For example, video requires
> >ultra-efficient buffering to avoid wasted copies, and comes in
> >discrete frames. Text/subtitles are very small and can be passed
> >around in any naive way you want. Audio should be treated as a
> >continuous stream, with automatic buffering between filters.
> 
>     Yes, I think the same. What I said is just that the API from vd to vo
> must be unified (let's say, a video stream API), but that API must stay
> inside the video layer. The application must call the video layer API to
> "connect" everything from vd to vo into some chunk, and that's all. The
> application will know only the common data structures of the members of
> the chunk. And the same goes for the audio chunk(s) and any others. Common
> data structures mean application developers only have to learn the common
> API structures and API calls for the layers, and that's all. Let's make it
> simpler. ;)

That's actually an interesting idea: using the same node/link
structure for all the different types of pipelines, so that the app
only has to know one api for connecting and configuring them. I'm
inclined to try something like that, except that it would mean more
layers of structures and more levels of indirection...

In any case, I'll try to make the api look similar for all three
pipelines (video/audio/sub), but I'm just not sure if it's reasonable
to use the same structures and functions for setting them up.
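
To make that concrete, I'm picturing "parallel" apis that have the same
shape rather than literally shared structures -- something in this spirit
(every name here is hypothetical, just to show the calling pattern):

  /* hypothetical usage; functions and types are for illustration only */
  static void build_pipelines(vd_node_t *vd, vo_node_t *vo,
                              ad_node_t *ad, ao_node_t *ao)
  {
      /* video pipeline: links carry image formats and buffer-pool details */
      vp_node_t *scale = vp_open_filter("scale");
      vp_link(vd, scale);
      vp_link(scale, vo);

      /* audio pipeline: same calling pattern, but different structures
         and different semantics (fifo buffering) underneath */
      ap_node_t *resample = ap_open_filter("resample");
      ap_link(ad, resample);
      ap_link(resample, ao);
  }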

> >The idea is appealing, but I don't think it can be done... If you have
> >a suggestion that's not a horrible hack, please make it.
> 
>     I have that idea in my thoughts and I've tried to explain it above. If
> something isn't clear yet, feel free to ask. I'll be glad to explain it.

Basically the problem is this. Video pipeline nodes have a structure
very specific to the needs of video processing. The links between the
vp nodes have structures very specific to image/buffer pool
management. As for the audio pipeline, I envision its links having a
different sort of nature, managing dynamic shrinking/growing fifo-type
buffers between nodes and keeping track of several positions in the
buffer.
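
Roughly, the two kinds of links might look like this (a sketch only; the
fields are made up to show the difference, not actual structures):

  #include <stddef.h>

  /* video link: a negotiated image format plus a pool of reusable
     buffers, so frames can move through without extra copies */
  struct vp_link {
      struct vp_node *src, *dst;
      int width, height, imgfmt;
      struct mp_image *pool[16];     /* buffer pool management */
      int num_buffers;
  };

  /* audio link: a continuous stream, so the link owns a fifo that can
     grow/shrink and tracks read/write positions within it */
  struct ap_link {
      struct ap_node *src, *dst;
      int samplerate, channels, sample_format;
      unsigned char *fifo;           /* dynamically resized */
      size_t fifo_size, read_pos, write_pos;
  };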

I see two approaches to having a unified api for building the
pipelines:

1. Storing the linked-list pointers at the beginning of the structures
   for each type, and type casting so that they can all be treated the
   same. IMO this is a very ugly hack.

2. Having a second layer on top of the underlying pipeline layers for
   the app to use in building the pipelines. This seems somewhat
   wasteful, but perhaps it's not.
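
Just to show what I mean by option 1 (sketch only, made-up names):

  /* common header every node type would have to start with */
  typedef struct pipe_node {
      struct pipe_node *prev, *next;
      int media_type;                /* video / audio / sub */
  } pipe_node_t;

  typedef struct vp_node {
      pipe_node_t n;                 /* MUST be the first member */
      /* ... video-specific fields ... */
  } vp_node_t;

  /* generic code then just casts, which is exactly the ugly part:
     nothing in the language enforces that the header really is first */
  void pipe_append(pipe_node_t *tail, void *node)
  {
      pipe_node_t *p = node;
      tail->next = p;
      p->prev = tail;
      p->next = NULL;
  }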

> >>     Also, when we have a common part in some common API, we'll document
> >> that common part only once and each specific part also only once, so it
> >> will reduce the API documentation too. :)
> 
> >And now we can be slow like Xine!! :))))
> 
>     I'm not sure about that. :)

OK, sorry, that was a really cheap flame... :)

>     With best wishes.
>     Andriy.

Rich



