[MPlayer-G2-dev] Re: Limitations in vo2 api :(
D Richard Felker III
dalias at aerifal.cx
Sat Dec 20 19:16:50 CET 2003
On Sat, Dec 20, 2003 at 06:35:04PM +0200, Andriy N. Gritsenko wrote:
> Hi, D Richard Felker III!
> Sometime (on Saturday, December 20 at 17:40) I've received something...
> >This is the same as what I said in approach 1. And here's where it
> >gets messy: In order to be useful to the common api layer, the common
> >structure at the beginning needs to contain the pointers for the
> >node's inputs and outputs. Otherwise it won't help the app build
> >chains. So inside here, you have pointers to links. But to what kind
> >of links? They have to just be generic links, not audio or video
> >links. And this means every time a video filter wants to access its
> >links, it has to cast the pointers!! :( Very, very ugly...
> Hmm. And how about putting these pointers in the layer-specific part of
> the structure (outside of the common part), with each layer having its own type?
> I don't think application anyway will want these pointers since they
These pointers are _exactly_ the thing the app will want to see, so it
can build the pipeline. How can you build the pipeline if you can't
connect pieces or tell when pieces are connected? :)
> I don't see any example where the same two filters may have more than one
> connection in the same chain, so it's easy.
Hrrm, that's the whole point. The speech synth thing was just for fun.
Normally multiple inputs/outputs WILL be in the same chain, e.g. for
merging video from multiple sources, processing subimages of the
video, displaying output on multiple devices at the same time,
pvr-style encoding+watching at the same time (!!) etc.
In any case, I was talking about all the links including primary input
and output, not just the extras.
> API just will have some procs like (assume that stream_t is the common
> node structure, I haven't made anything better yet):
> stream_t *open_video_filter(char *name);
> int link_video_chain(stream_t *prev, stream_t *next);
> int unlink_video_chain(stream_t *prev, stream_t *next);
If these functions have _video_ in their name, there's no use in
having a generic "stream" structure. vp_node_t is just as good!
> so only the application will track all changes within that chain. :) If you
> think about frame processing then it's not a problem at all. Since we
> leave only the pull-way of getting frames, each last processing unit
> (vo/ao/muxer) will have some proc like:
> int vo_process_frame(stream_t *vos, double pts, double duration);
> I understand it's different from G1's API but it's simpler and cleaner
> IMHO. I think it will also help a lot with A-V sync.
A/V sync doesn't need help here. The only place there's a problem
right now is handling broken file formats and codecs with bogus pts.
Timestamps (exact, no float mess) already pass all the way down the
pipeline, and the calling app then regulates a/v sync and tells the
vo when to display the next frame.