[MPlayer-G2-dev] Re: Limitations in vo2 api :(

D Richard Felker III dalias at aerifal.cx
Sat Dec 20 21:03:29 CET 2003


On Sat, Dec 20, 2003 at 09:01:17PM +0200, Andriy N. Gritsenko wrote:
>     Hi, D Richard Felker III!
> 
> Sometime (on Saturday, December 20 at 20:04) I've received something...
> >On Sat, Dec 20, 2003 at 06:35:04PM +0200, Andriy N. Gritsenko wrote:
> >>     Hi, D Richard Felker III!
> >> 
> >> Sometime (on Saturday, December 20 at 17:40) I've received something...
> >> [...skipped...]
> >> 
> >> >This is the same as what I said in approach 1. And here's where it
> >> >gets messy: In order to be useful to the common api layer, the common
> >> >structure at the beginning needs to contain the pointers for the
> >> >node's inputs and outputs. Otherwise it won't help the app build
> >> >chains. So inside here, you have pointers to links. But to what kind
> >> >of links? They have to just be generic links, not audio or video
> >> >links. And this means every time a video filter wants to access its
> >> >links, it has to cast the pointers!! :( Very, very ugly...
> >> 
> >>     Hmm. And how about putting these pointers in the layer-specific
> >> part of the structure (outside of the common part), so each layer has
> >> its own type? I don't think the application will want these pointers
> >> anyway, since they
> 
> >These pointers are _exactly_ the thing the app will want to see, so it
> >can build the pipeline. How can you build the pipeline if you can't
> >connect pieces or tell when pieces are connected? :)
> 
> Here goes the example:
> 
> typedef struct {
>   int (*add_out) (struct vp_node_t *node, struct vp_node_t *next, .......);
>   int (*rm_out) (struct vp_node_t *node, struct vp_node_t *next, .......);
>   vp_frame_t *(*pull_frame) (struct vp_node_t *node, .......);
>   .......
> } vp_funcs;
> 
> typedef struct vp_node_t {
>   node_t n;
>   struct vp_node_t *prev;
>   struct vp_node_t *next;
>   vp_funcs *func;
>   .......
> } vp_node_t;
> 
> So when we call link_video_chain(node,next) it will first test whether
> node->func->add_out() exists and call it; otherwise, if node->next is
> already filled it returns an error, else it sets node->next. After that
> it does the same for the next node. If there were no errors, we assume
> the nodes are linked. For example, on pull_frame(node) we could pull a
> frame from the previous node via node->prev->pull_frame. Calling
> unlink_video_chain(node,next) does the reverse of link_video_chain().
> Since node_t is the first member of vp_node_t and shares its address,
> both structures above can stay in video_internal.h only - the
> application will know nothing about them but it will work anyway. :)
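For concreteness, here is a minimal sketch of how that link_video_chain()
could go. The struct layout follows the quoted example (with the
`.......` parts dropped); none of these names are real G2 code, just an
illustration of the "prefer the filter's add_out() hook, fall back to the
default single-output slot" rule described above:

```c
#include <stddef.h>

struct vp_node_t;

/* Hypothetical per-filter entry points, as in the example above. */
typedef struct {
    int (*add_out)(struct vp_node_t *node, struct vp_node_t *next);
    int (*rm_out)(struct vp_node_t *node, struct vp_node_t *next);
} vp_funcs;

/* Hypothetical video node: default single input/output slots plus the
 * function table. */
typedef struct vp_node_t {
    struct vp_node_t *prev;
    struct vp_node_t *next;
    vp_funcs *func;
} vp_node_t;

/* Link node -> next. If the filter provides its own add_out() hook
 * (e.g. vf_split with many outputs), defer to it; otherwise use the
 * default slots, failing if either side is already connected.
 * Returns 0 on success, -1 on error. */
int link_video_chain(vp_node_t *node, vp_node_t *next)
{
    if (node->func && node->func->add_out)
        return node->func->add_out(node, next);
    if (node->next || next->prev)
        return -1;          /* a slot is already occupied */
    node->next = next;
    next->prev = node;
    return 0;
}
```

unlink_video_chain() would be the mirror image: call rm_out() if present,
otherwise clear the two default slots.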

This is all the exact design (well actually it's more elaborate since
you have the link layer in between nodes) I've been describing for
months. :)

All I was saying is that we can't use the same structs/functions for
video as we do for audio, etc. Each needs its own api, even if they
are all similar from the calling app's perspective.
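One way to picture the "common base, separate per-layer apis" idea is a
generic node_t embedded as the first member of each layer's node struct.
This is only a sketch with made-up names, but it shows how the app can
work with plain node_t pointers while video code keeps fully typed links
and never casts them:

```c
#include <stddef.h>

/* Hypothetical generic part every node starts with; this is all the
 * calling app ever needs to see. */
typedef enum { NODE_VIDEO, NODE_AUDIO } node_kind;

typedef struct node_t {
    node_kind kind;
    const char *name;
} node_t;

/* Video layer wraps it with its own typed links... */
typedef struct vp_node_t {
    node_t n;                     /* must be first member */
    struct vp_node_t *prev, *next;
} vp_node_t;

/* ...and the audio layer does the same with its own type. */
typedef struct ap_node_t {
    node_t n;
    struct ap_node_t *prev, *next;
} ap_node_t;

/* App-level helper: only knows about node_t. */
const char *node_name(node_t *n) { return n->name; }

/* Video-layer helper: takes vp_node_t, so no casting of link pointers
 * inside the video code. */
int vp_link(vp_node_t *a, vp_node_t *b)
{
    if (a->next || b->prev)
        return -1;
    a->next = b;
    b->prev = a;
    return 0;
}
```

Since node_t is the first member, a pointer to a vp_node_t converts to a
pointer to its node_t (same address), which is what lets the common api
layer and the typed layer apis coexist without generic links.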

> >> I don't see any example where two of the same filters may have more
> >> than one connection in the same chain, so it's easy.
> 
> >Hrrm, that's the whole point. The speech synth thing was just for fun.
> >Normally multiple inputs/outputs WILL be in the same chain, e.g. for
> >merging video from multiple sources, processing subimages of the
> >video, displaying output on multiple devices at the same time,
> >pvr-style encoding+watching at the same time (!!) etc.
> 
>     Multiple sources are really multiple subchains so we have:
> 
> -----> vf_aaa -.  [node 1]
>  [subchain1]    \     [node 3]
>                 vf_mix ----->
>                 /
> -----> vd_bbb -`  [node 2]
>  [subchain2]
> 
>              ,----> vo1   [node 2]
>             / [subchain1]
> -----> vf_split           [node 1]
>             \ [subchain2]
>              `----> vo2   [node 3]
> 
> As I said before, chains must be managed by the application only, so
> it's not the layer's job to keep all subchains in mind. :)

Actually the layer needs to know about the whole thing to make
rendering work. It has to walk the whole chain to get source frames as
filters need them. Also you forgot to consider the most fun example:


-------> vf_subfilter ------>
          |      /|\
         \|/      |
   your_favorite_filters_here

Where vf_subfilter is a filter that processes a subsection of the
image with whatever filters you hook up to its auxiliary output/input.
This example shows why it's not good to think in subchains/subtrees:
the filter pipeline can have loops!
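That's also why the layer's own walk over the graph has to be loop-safe:
a naive "pull from prev until you hit the source" recursion never
terminates on a pipeline like the one above. A hedged sketch (all names
illustrative, not G2 api) of one way to guard it, marking nodes while
they are on the walk stack:

```c
#include <stddef.h>

/* Illustrative node with a single upstream link and a flag used to
 * detect cycles during traversal. */
typedef struct gnode {
    struct gnode *prev;
    int visiting;           /* nonzero while on the current walk stack */
} gnode;

/* Walk upstream toward the source. Returns the number of nodes
 * traversed, or -1 if the walk re-enters a node it is already
 * visiting, i.e. the pipeline contains a loop. */
int walk_to_source(gnode *n)
{
    int depth;

    if (!n)
        return 0;           /* reached the source end */
    if (n->visiting)
        return -1;          /* loop detected */
    n->visiting = 1;
    depth = walk_to_source(n->prev);
    n->visiting = 0;        /* unwind: clear the mark */
    return depth < 0 ? -1 : depth + 1;
}
```

A real implementation would of course pull frames along the way rather
than just count nodes, but the cycle check is the part the vf_subfilter
example makes necessary.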

> About multiple links between nodes - do you really suppose there may be
> something like this:
>  
>                /----------\  [subchain one]
> -------> vf_a [node 1]   vf_b [node 2] ------->
>                \----------/  [subchain two]
> 
> All other cases will just be partial subchains, which will be handled
> by the filters. :)

I agree it's probably stupid, but there's no reason you can't have
something like that with the current design.

> >If these functions have _video_ in their name, there's no use in
> >having a generic "stream" structure. vp_node_t is just as good!
> 
>     But what I said before is just a simple structure for application
> developers, so we keep the application from touching any layer-specific
> data, and application developers need to learn only the common API. :)

They have to be aware of the nodes anyway, since they're loading them,
providing gui panels to configure them, etc....

Rich
