[MPlayer-G2-dev] Re: A new vf layer proposal...

Andriy N. Gritsenko andrej at lucky.net
Fri Sep 12 06:17:28 CEST 2003


    Hi, D Richard Felker III!

Sometime (on Friday, September 12 at  5:34) I've received something...
>On Fri, Sep 12, 2003 at 03:06:05AM +0300, Andriy N. Gritsenko wrote:
>>     Hi, D Richard Felker III!

>> Sometime (on Friday, September 12 at  2:42) I've received something...
>> >Early in G2 development we discussed changes to the vf layer, and some
>> >good improvements were made over G1, but IMO lots of it's still ugly
>> >and unsatisfactory. Here are the main 3 problems:

>> >1) No easy way to expand the filter layer to allow branching rather
>> >   than just linear filter chain. Think of rendering PIP (picture in
>> >   picture, very cool for PVR type use!) or fading between separate
>> >   clips in a video editing program based on G2.

>> >2) The old get_image approach to DR does not make it possible for the
>> >   destination filter/vo to know when the source filter/vd is done
>> >   using a particular reference image. This means DR will not be
>> >   possible with emerging advanced codecs which are capable of using
>> >   multiple reference frames instead of the simple I/P/B model.

>> >3) The whole vf_process_image loop is ugly and makes artificial
>> >   distinction between "pending" images (pulled) and the old G1
>> >   push-model drawing.

>> >Actually (3) has a lot to do with (1).

>> >So the proposal for the new vf layer is that everything be "pull"
>> >model, i.e. the player "pulls" an image from the last filter, which in
>> >turn (recursively) pulls an image from the previous filter (or perhaps
>> >from multiple previous filters!).
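The pull model described above can be sketched in a few lines of C. This is only an illustrative sketch with made-up names (`vf_instance_t`, `pull`, `play`), not the actual G2 interfaces; the demo source that emits three dummy frames exists only so the chain has something to pull from.

```c
/* Sketch of a pull-model filter chain (hypothetical API, not the
 * real G2 code): the player pulls from the last filter, which
 * recursively pulls from its upstream source. */
#include <assert.h>
#include <stddef.h>

typedef struct image {
    double pts;          /* presentation timestamp in seconds */
} image_t;

typedef struct vf_instance vf_instance_t;
struct vf_instance {
    /* pull one image; returns NULL at end of stream */
    image_t *(*pull)(vf_instance_t *vf);
    vf_instance_t *prev; /* upstream filter (or decoder wrapper) */
    void *priv;
};

/* Demo source: emits 3 dummy frames at 25 fps, then end of stream. */
typedef struct { int left; image_t img; } src_priv_t;

static image_t *source_pull(vf_instance_t *vf)
{
    src_priv_t *p = vf->priv;
    if (p->left <= 0)
        return NULL;
    p->img.pts = (3 - p->left) / 25.0;
    p->left--;
    return &p->img;
}

/* A trivial pass-through filter: just forwards the upstream pull. */
static image_t *passthrough_pull(vf_instance_t *vf)
{
    return vf->prev->pull(vf->prev);
}

/* The player loop reduces to pulling until the chain runs dry. */
static int play(vf_instance_t *last)
{
    int n = 0;
    while (last->pull(last) != NULL)
        n++;
    return n;
}
```

A filter with several inputs would simply keep several `prev` pointers and pull from each of them inside its own `pull`, which is what makes branching chains fall out of this design naturally.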

>> Agree 100%, that's what I was hoping for, so we could build a custom
>> chain with branches: a filter with more than one input may pull all of
>> its inputs at the same time. :)
>> When we pull images from two or more streams we could have a sync
>> problem, but it can be solved if we pull for the "expected" time. The
>> decoder (or other stream source) puts a pts into the image structure,
>> and then any filter can decide: if that pts is past the expected time,
>> it just returns a null frame and keeps the pulled image until it fits
>> the expected time.
>> Maybe I didn't say it very clearly; sorry for my bad English then. :)
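The pts-gating idea quoted above can be made concrete with a tiny helper. This is a hypothetical sketch (the `input_t` struct and `pull_synced` are invented for illustration): a multi-input filter keeps at most one pending frame per input and only releases a frame once its pts is due; otherwise it holds the frame and reports "no frame yet" with NULL.

```c
/* Hypothetical pts-gated pull for one input of a multi-input
 * filter (not actual G2 code). */
#include <assert.h>
#include <stddef.h>

typedef struct { double pts; } image_t;

typedef struct input {
    image_t *pending;                   /* held frame, if any */
    image_t *(*pull)(struct input *in); /* upstream pull */
} input_t;

/* Return a frame whose pts has reached `expected`, or NULL if the
 * next frame is still in the future (it stays pending) or the
 * stream has ended. */
static image_t *pull_synced(input_t *in, double expected)
{
    image_t *img;
    if (!in->pending)
        in->pending = in->pull(in);
    if (!in->pending)
        return NULL;                 /* end of stream */
    if (in->pending->pts > expected)
        return NULL;                 /* too early: hold it back */
    img = in->pending;
    in->pending = NULL;
    return img;
}

/* Demo source for illustration: two frames at pts 0.0 and 0.5. */
static image_t demo_frames[2] = { { 0.0 }, { 0.5 } };
static int demo_idx = 0;
static image_t *demo_pull(input_t *in)
{
    (void)in;
    return demo_idx < 2 ? &demo_frames[demo_idx++] : NULL;
}
```

A two-input filter would call `pull_synced` on each input with the same expected time and only produce output once both return a frame.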

>Image structure already has pts, so that's no problem. :) Normally for
>combining filters you'd be using several fixed-fps streams (with same
>fps) as input so it wouldn't matter too much anyway -- variable fps is
>mostly for ugly low quality stuff like asf and rm or for handling
>made-for-tv stuff from mixed sources (24/30/60 fps).

    Not only that. In the video editing program you mentioned above you
may want a filter that scales the speed of a fragment (my former fiancee
likes to make music videos, so I've seen that many times) to make it
faster or slower, and even to mix two video streams with different time
scaling. :)
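In a pull model with per-image pts, such a speed filter reduces to rescaling timestamps as frames pass through. A minimal sketch, assuming a hypothetical helper name (`rescale_pts`) and that pts is in seconds:

```c
/* Hypothetical speed filter core: playing a fragment at `speed`
 * times normal rate means compressing (speed > 1) or stretching
 * (speed < 1) its pts around the fragment's start time `base`. */
#include <assert.h>

static double rescale_pts(double in_pts, double base, double speed)
{
    return base + (in_pts - base) / speed;
}
```

Two inputs with different time scaling can then be mixed by running each through its own rescaling and letting the downstream filter pull by expected pts as usual.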

>BTW there's also the question of how to do filters that have multiple
>outputs, and it's a little more complicated, but I think they can be
>done as two filters sorta linked together. In any case, there doesn't
>seem to be anything in my design that precludes filters with multiple
>outputs, so I'm happy.

    I think a multiple-output filter is a very rare case; I can't even
think of a real example other than two-screen display of video or cloning
for network streaming. :)
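The "two filters sorta linked together" idea from the previous mail could look roughly like the tee below. This is a deliberately simplified, hypothetical sketch (two outputs, one frame of lookahead, and it assumes the two outputs are pulled alternately; none of these names exist in G2):

```c
/* Hypothetical tee: one upstream, two pull-model outputs that
 * share each frame. When one output pulls ahead, the frame is
 * buffered for the other output. */
#include <assert.h>
#include <stddef.h>

typedef struct { double pts; } image_t;

typedef struct tee {
    image_t *(*pull_src)(struct tee *t); /* upstream pull */
    image_t *buffered;                   /* frame one side still needs */
    int      buffered_for;               /* which output it is saved for */
} tee_t;

static image_t *tee_pull(tee_t *t, int out) /* out = 0 or 1 */
{
    image_t *img;
    if (t->buffered && t->buffered_for == out) {
        img = t->buffered;               /* the saved frame is ours */
        t->buffered = NULL;
        return img;
    }
    /* this output is ahead: pull a new frame and save a reference
     * to it for the other output */
    img = t->pull_src(t);
    if (img) {
        t->buffered = img;
        t->buffered_for = !out;
    }
    return img;
}

/* Demo upstream for illustration: frames at pts 0.0 and 1.0. */
static image_t tee_frames[2] = { { 0.0 }, { 1.0 } };
static int tee_idx = 0;
static image_t *tee_demo_src(tee_t *t)
{
    (void)t;
    return tee_idx < 2 ? &tee_frames[tee_idx++] : NULL;
}
```

A real version would need reference counting on the shared frame so neither consumer frees it early, which is the same lifetime problem the DR discussion in point (2) raises.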

>Thanks for the comments!

    Thank you too!

    With best wishes.
    Andriy.


