[MPlayer-G2-dev] Re: Limitations in vo2 api :(

D Richard Felker III dalias at aerifal.cx
Sun Dec 21 07:04:37 CET 2003

On Sat, Dec 20, 2003 at 10:08:47PM +0100, Attila Kinali wrote:
> On Sat, 20 Dec 2003 06:11:42 -0500
> D Richard Felker III <dalias at aerifal.cx> wrote:
> > 
> > vd ----> vf_split ----------------------------------------> vo
> >            \     [video layer]
> >             `---> vf/sf_ocr
> >                     \            [subtitle layer]
> >                      `--> sf/af_speechsynth
> >                              \
> >                               \           [audio layer]
> >                                `--> af_merge -------------> ao
> >                                       /`
> > ad ----------------------------------'
> Just a little question here: do you have an idea how you'll handle
> frame delays in the chain? I.e. if the chain is split into two parts
> which are merged again later: one is a direct connection (0 delay)
> and the other does some fancy computation where it needs a few
> frames in advance to compute a frame (x frames delay).
> Here you'd not only have to store x frames on the 0-delay
> chain but also realize that these two chains have different
> delays and that you have to pass more frames to one side to get
> one frame out at the merge point.
> It gets even more complicated if you have 2 different sources
> providing 2 chains with different delays which are merged
> together at the end.
> Or is this already handled?

It's handled naturally by the way the system works. Suppose, in the
above example, the ao wants data in advance in order to buffer. Then
af_merge will request data from both its inputs, and those requests
propagate back up the chain, causing vf_split to request input too.
Thus the video gets decoded in advance as well. vf_split is
responsible for delivering frames whenever one of its outputs asks for
them, so since the vo won't be asking for them as soon, vf_split has
to implement some sort of queue for output frames. IMO this isn't a
design issue with the layer itself, only an implementation issue for
filters with multiple outputs.
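A minimal sketch of that queueing idea in C (illustrative only, not
the real G2 API; all names here are made up): a two-output split
filter where whichever output pulls ahead triggers a decode, and the
decoded frame is queued for both outputs so the slower consumer can
catch up later.

```c
#define QMAX 16  /* max frames the slower output may lag behind */

typedef struct {
    int q[2][QMAX];        /* per-output queue of pending frame numbers */
    int head[2], tail[2];  /* ring-buffer indices for each output */
    int next_frame;        /* frame number the (fake) decoder emits next */
} split_t;

/* Pull one frame for output `out` (0 or 1). If this output's queue is
 * empty, "decode" a fresh frame and queue it for BOTH outputs, so the
 * other (possibly slower) output still sees it later. */
static int split_pull(split_t *s, int out)
{
    if (s->head[out] == s->tail[out]) {
        int f = s->next_frame++;          /* stand-in for pulling from vd */
        int other = 1 - out;
        s->q[other][s->tail[other]++ % QMAX] = f;
        s->q[out][s->tail[out]++ % QMAX] = f;
    }
    return s->q[out][s->head[out]++ % QMAX];
}
```

Here the differing delays need no global bookkeeping: if the audio
branch buffers three frames ahead, three pulls on its output queue
three frames on the video side, and the vo's next pull simply drains
the queue starting at frame 0.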


More information about the MPlayer-G2-dev mailing list