[MPlayer-G2-dev] vo3

Ivan Kalvachev ivan at cacad.com
Fri Jan 2 00:54:34 CET 2004


D Richard Felker III said:
>> User-Agent: SquirrelMail/1.4.1
>
> Ivan, if you expect a reply, please get a mailer that understands how
> to wrap lines at 80 columns and preserve formatting when quoting. Your
> broken SquirrelMail converted all the quotes into multi-hundred-column
> unreadable jibberish.
The problem is that nowadays I use public computers for internet access,
and having my favorite mail client configured properly is a luxury.

Anyway wrap lines at 60, not 80 ;)

>
> On Sun, Dec 28, 2003 at 04:51:23AM +0200, Ivan Kalvachev wrote:
>> D Richard Felker III said:
>> > On Sat, Dec 27, 2003 at 01:01:22AM +0200, Ivan Kalvachev wrote:
>> >> Hi, here is some of my ideas,
>> >> i'm afraid that there already too late to be implemented, as
>> >> dalias is coding him pipeline system, while i have not finished the
>> >> drafts
>> >> already, but feel free to send comments...
>> >
>> > Time to pour on the kerosine and light the flames... :)
>> Your are very bad in making good points, you always fall in flame. I
>> noticed that you like flames, even if you burn in them sometimes.

>
> This line was a joke. Maybe if you'd taken it as such you would have
> been more thoughtful in your responses to what followed.

Well, as I said below, I didn't hear any good points from you (the first
time). I also made a joke; probably the level of humor was too low and it
is better dropped completely.

Did you know that there is a law in China that obligates people to always
smile on the street? The cities are simply so overcrowded that one
negative person may spoil the day of a million citizens.

It is very easy to push people on the internet. And because it is easy,
we should be friendly even when we talk with idiots. Otherwise the chain
reaction will spoil more than our day.

>> >> Here are few features that I'm trying to achieve:
>> >> - decreasing memcpy by using one and same buffer by all filters
>> >> that can do it (already done in g1 as Direct Rendering method 1)
>> >> - support of partial rendering (slices and DR m2)
>> >
>> > These are obviously necessary for any system that's not going to be
>> > completely unusably sucky. And they're already covered in G1 and G2
>> > VP.

>> flame. I know that, you know that, everybody know that.
>
> Huh? Who am I flaming? I don't follow.

I just mean that the quote and the reply could have been skipped. They
don't say anything, but there is “suck” in them.

>> >> - ability to quickly reconfigure and if possible - to reuse data that
>> >> is already processed (e.g. we have scale and the user resizes the
>> >> image, - only images after scale will be redone),
>> >
>> > In my design, this makes no sense. The final scale filter for resizing
>> > would not pass any frames to the vo until time to display them.
>> final scale filter?!!!
>> How many scale filters do you have?

> Normally only one. But suppose you did something like the following:
>
> -vf scale=640:480,pullup
>
> with output to vo_x11. The idea is that since you'll be resizing DVD
> to square pixels anyway, you might as well do it before the inverse
> telecine and save some cycles.

Just for the record ;) I'm sure you know that scaling interlaced frames
vertically will mangle the interlacing.

The rest is discussed further below.

>
> Now the user resizes the window... In yours (and Arpi's old) very bad
> way, the scale filter gets reconfigured, ruining the fields. If you
> don't like this example (there are other ways to handle interlacing)
> then just consider something like denoising with scaling. My principle
> in response is that the player should NEVER alter configuration for
> any filters inserted manually by the user. Instead, it should create
> its own scale filter for dynamic window resizing and final vo
> colorspace conversion.
>

I'm sorry to say it, but you cannot see the forest for the trees (that's
a saying/proverb).

I gave it as an example. I like to give examples that are not totally
fictitious.

And as you may have noticed, the scale filter is the final one, so having
two scale filters one after another is not very good.

And as Diego says, it could be used for resizing while paused, and things
like that.

About scale: yep, I agree that it is good to have one scale filter at the
end. If I am not wrong, in G1 the scale filter is currently always inserted
at one and the same position, which prevents automatic conversion except in
the ordinary case.

The other big problem with scale is that it does way too many things: it
does format conversion, it scales, and it does format conversion and
scaling at the same time. Of course this is done to maximize speed.


>> >> safe seeking, auto-insertion of filters.
>> >
>> > What is safe-seeking?
>> When seeking filters that have stored frames should flush them
>> For example now both mpeg2 decoders don't do that, causing garbage in
>> B-Frames decoding after seek. Same apply for any temporal filter.
>> In G1 there is control(SEEK,...), but usually it is not used.
>
> OK, understood perfectly.
>
>> > Auto-insertion is of course covered.
>> I'm not criticizing your system. This comment is not for me.
>> Or I hear irony?
>
> And I wasn't criticizing yours, here. I was just saying it's not a
> problem for either system.

How about a new level of filter insertion: runtime insertion without
dropping frames?

>
>> >> In short the ideas used are :
>> >> - common buffer and separate mpi - already exist in g1 in some form -
>> >> counting buffer usage by mpi and freeing after not used - huh, sound
>> >> like java :O
>> >
>> > No. Reference counting is good. GC is idiotic. And you should never
>> >free
>> > buffers anyway until close, just repool them.
>> I didn't say that. Free buffer is buffer that is not busy and could be
>> reused.
>> Moreover I don't like the way frames are locked in your code. It doesn't
>> seem obvious.
>
> In VP, you will _only_ lock frames if you need to keep them after
> passing them on to the next filter. Normally you shouldn't be doing
> this.
>

At first I couldn't understand what GC is; after I sent my reply I
realized that it stands for Garbage Collector :O I hate GC, and I don't
like Java because of GC. What you may have missed (being misled by my Java
remark) is that buffers have counters. It is simple: on MPI allocation the
buffer's counter is increased, on MPI release it is decreased. When the
usage count is 0 the buffer may be reused.
You see: no locking needed.

On the other hand, buffer reuse is of course connected to the
skipped_blocks optimization.
But I still have no idea how to do it :(
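
Something like this, as a minimal sketch (vp_buffer_t, vp_mpi_t and the
helper names are invented just for the example, they are not real G2 code):

  /* Hypothetical sketch of counted buffers. */
  typedef struct vp_buffer {
      unsigned char *planes[3];
      int usage;                /* how many MPIs currently point here */
  } vp_buffer_t;

  typedef struct vp_mpi {
      vp_buffer_t *buf;         /* backing buffer, shared between MPIs */
      unsigned char *planes[3];
  } vp_mpi_t;

  /* get_image side: binding an MPI to a buffer makes the buffer busy. */
  static void mpi_attach(vp_mpi_t *mpi, vp_buffer_t *buf)
  {
      mpi->buf = buf;
      buf->usage++;
  }

  /* release_image side: when the count reaches 0 the buffer may be
   * handed out again; the memory is never freed, only repooled. */
  static void mpi_release(vp_mpi_t *mpi)
  {
      if (mpi->buf)
          mpi->buf->usage--;
      mpi->buf = NULL;
  }

  /* The pool simply hands out the first buffer with usage == 0. */
  static vp_buffer_t *pool_get_free(vp_buffer_t *pool, int n)
  {
      int i;
      for (i = 0; i < n; i++)
          if (pool[i].usage == 0)
              return &pool[i];
      return NULL;              /* caller must allocate a new buffer */
  }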


>> The goal in my design is all the work of freeing/quering to
>> be moved to vf/vp functions. So in my design you will see only
>> get_image/release_image, but never lock/unlock. Because having buffer
>> mean that you need it. (well most of the time)
>
> Keep in mind there's no unlock function. It's just get/lock/release.
>
> Perhaps you should spend some time thinking about the needs of various
> filters and codecs. The reason I have a lock function is that the
> _normal_ case is passing your image on to the next filter without
> keeping it. Images could start out with 2 locks (1 for source, 1 for
> dest) and then the source would have to explicitly release it when
> finished, but IMHO this just adds complexity since most filters should
> never think about a frame again once sending it out.

Yes, this is what I do.
The only difference is that a simple filter may release the frame after it
passes it to the next one.

>
>> >> - allocating all mpi&buffer before starting drawing (look obvious,
>> >> doesn't it?) - in G1 filters had to copy frames in its own buffers or
>> >> play hazard by using buffers out of their scope
>> >
>> > Yes, maybe G1 was broken. All the codecs/filters I know allocate
>> > mpi/buffers before drawing, though.
>> In G1 if you draw_slice out-of-order it is possible to go to a filter
>> that
>> haven't yet allocated buffer for this frame - some frames are allocated
>> on
>> put_frame.
>
> This is because G1 is horribly broken. Slices should not be expected
> to work at all in G1 except direct VD->VO.

Ouch. But your current implementation is based on the same principle. ;)

>
>> That's also the reason to have one common process()!
>
> I disagree.
>
>> >> - using flag IN_ORDER, to indicate that these frames are "drawn" and
>> >> there won't come frames with "earlier" PTS.
>> >
>> > I find this really ugly.
>> It's the only sane way to do it, if you really do out-of-order
>> processing.
>
> No, the draw_slice/commit_slice recursion with frames getting pulled
> in order works just fine. And it's much more intuitive.
>
>> >> - using common function for processing frame and slices - to make
>> >> slice
>> >> support more easier
>> >
>> > This can easily be done at the filter implementation level, if
>> > possible. In many cases, it's not. Processing the image _contents_ and
>> > the _frame_ are two distinct tasks.
>> Not so easy. Very few filters in G1 support slices, mainly because it
>> is
>> separate chain.
>
> No, mainly because the api is _incorrect_ and cannot work. Slices in
> G1 will inevitably sig11.
>

Haven't you noticed that in G1 slices are turned on by default? ;)

>> > One thing omitted in G2 so far is allowing for mixed buffer types,
>> > where different planes are allocated by different parties. For
>> > example, exporting U and V planes unchanged and direct rendering a new
>> >Y plane. I'm not sure if it's worth supporting this, since it would be
>> > excessively complicated. However, it would greatly speed up certain
>> > filters such as equalizer.
>> Yes I was thinking about such hacks. But definitely they are not worth
>> implementing. Matrox YUV mode need such hack, but it could be done in vo
>> level.
>
> Actually it doesn't. The YV12->NV12 converter can just allow direct
> rendering, with passthru to the VO's Y plane and its own U/V planes.
> Then, on draw_slice, the converter does nothing with Y and packs U/V
> into place in the VO's DR buffer. This is all perfectly valid and no
> effort to implement in my design.
>
> The difficult case is when you want to export some planes and DR
> others...
>

Just one tiny little problem: what will be locked if a filter needs the
frame for later processing?

Any separate buffer scheme makes these tricks very hard and hazardous.

One possible solution is to have separate buffers for the Y, U and V
planes, and also for the QuantTable and SkippedBlocksTbl. But then we
would have to manage multiple buffers in one mpi.

It seems like the solution is worse than the problem.
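
Roughly what I have in mind, purely as an illustration (all the struct and
function names here are made up):

  /* Hypothetical: one counted reference per plane / side table. */
  struct vp_buf { unsigned char *mem; int usage; };

  typedef struct plane_ref {
      struct vp_buf *buf;       /* owner of the memory, may differ per plane */
      unsigned char *data;
      int stride;
  } plane_ref_t;

  typedef struct vp_mpi_planes {
      plane_ref_t y, u, v;            /* image planes                 */
      plane_ref_t qscale;             /* quantizer table (optional)   */
      plane_ref_t skipped_blocks;     /* skipped-block map (optional) */
  } vp_mpi_planes_t;

  /* Releasing the image now means releasing every backing buffer,
   * which is exactly the extra management mentioned above. */
  static void mpi_planes_release(vp_mpi_planes_t *mpi)
  {
      plane_ref_t *refs[5];
      int i;
      refs[0] = &mpi->y; refs[1] = &mpi->u; refs[2] = &mpi->v;
      refs[3] = &mpi->qscale; refs[4] = &mpi->skipped_blocks;
      for (i = 0; i < 5; i++) {
          if (refs[i]->buf) {
              refs[i]->buf->usage--;  /* may reach 0 -> reusable */
              refs[i]->buf = NULL;
          }
      }
  }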

>> >> Dalias already pointed that processing may not be strictly top from
>> >> bottom, may not be line, slice, or blocks based. This question is
>> >> still
>> >> open for discussion. Anyway the most flexible x,y,w,h way proved to
>> >> be
>> >> also the most hardier and totally painful. Just take a look of crop
>> >>or
>> >> expand filters in G1. More over the current G1
>> >> scheme have some major flaws:
>> >> - the drawn rectangles may overlap (it depends only on decoder)
>> >
>> > No, my spec says that draw_slice/commit_slice must be called exactly
>> > once for each pixel. If your codec is broken and does not honor this,
>> > you must wrap it or else not use slices.
>> The problem may arrases in filter slices too! Imagine rounding errors;)
>
> Huh? Rounding? WTF? You can't render half a pixel. If a filter is
> doing slices+resizing (e.g. scale, subpel translate, etc.) it has to
> deal with the hideous boundary conditions itself...
>

Yep, but you forget that x, y, w, h may have any values. Ordinarily they
will be multiples of 16, but after one resize filter this is no longer
true. And how are you going to manage things if you get odd x, y, w, h in
a YV12 image? :O
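
To make the problem concrete, here is a tiny hypothetical helper: YV12
chroma is subsampled 2x2, so an odd slice rectangle has to be widened to
even coordinates before it can touch U/V at all, and after a resize filter
that padding makes neighbouring slices overlap:

  /* Grow an odd rectangle to the next even-aligned one (hypothetical). */
  static void align_slice_yv12(int *x, int *y, int *w, int *h)
  {
      int x2 = (*x + *w + 1) & ~1;   /* right edge, rounded up  */
      int y2 = (*y + *h + 1) & ~1;   /* bottom edge, rounded up */

      *x &= ~1;                      /* left/top, rounded down  */
      *y &= ~1;
      *w = x2 - *x;
      *h = y2 - *y;
      /* the padding means neighbouring slices may now overlap,
       * breaking the "each pixel exactly once" rule */
  }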

Now I believe that a line counter running from top to bottom is the sanest
alternative.

If VP3 is the only codec that draws from bottom to top, then we simply
won't process slices for it, only full images.
But I doubt that VfW codecs do it too; anyway, they don't have slices.

[snip]

>> Anyway the IN_ORDER doesn't force us to display the frame.
>> There is no need to start displaying frame in the moment they are
>> compleated.
>
> Yes, but it's hard to know when to display unless you're using threads
> (or the cool pageflip-from-slice-callback hack :))
>
>> I agree that there may be some problems for vo with one buffer.
>> So far you have one (good) point.
>
> I never said anything about vo with one buffer. IMO it sucks so much
> it shouldn't even be supported, but then Arpi would get mad.
>

The same thing could be extended to a decoder with n=2 static buffers at
once and a vo with only 2 buffers. Same for n=n+1.

Well, after some brainstorming I take my words back. What the decoder does
is delay one frame by buffering it. As my whole video filter system acts
like a codec, we should take control of the buffer delay. This could be
done by adding a “global” variable LowDelay that contains the number of
frames we need to wait before starting to display. In the MPEG-1/2/4 case
it will be 1 or 0; for H.264 it may be a little bit higher ;).

It has nothing to do with buffering ahead. It is just low_delay ripped out
of the codec and put into the video system.
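
A rough sketch of how the player side could use it (low_delay,
frames_ready and display_next_queued_frame() are only illustrative,
nothing of this exists yet):

  /* Hypothetical display gate driven by the codec's reorder delay. */
  struct vp_mpi;
  static void display_next_queued_frame(void);  /* assumed helper */

  static int low_delay;      /* 0 or 1 for MPEG-1/2/4, more for H.264    */
  static int frames_ready;   /* frames marked IN_ORDER but not yet shown */

  static void frame_in_order(struct vp_mpi *mpi)
  {
      (void)mpi;             /* the frame itself just waits in the queue */
      frames_ready++;
      /* keep exactly low_delay frames queued, display the rest */
      while (frames_ready > low_delay) {
          display_next_queued_frame();
          frames_ready--;
      }
  }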

>> >> As you can see it is very easy for the decoders to set the IN_ORDER
>> >> flag, it could be done om G1's decode() end, when the frames are in
>> >> order.
>> >
>> > Actually, this is totally false. Libavcodec does _not_ export any
>> > information which allows the caller to know if the frames are being
>> > decoded in order or not. :( Yes, this means lavc is horribly broken...
>> >> avcodec always display frames in order, unless you set manually flags
>> >> like _OUT_OF_ORDER or _LOW_DELAY ;)
>
> No. Keep in mind that your chain will be running from the
> draw_horiz_band callback... (in which case, it will be out of order) I
> would expect you to set the LOW_DELAY flag under these circumstances,
> but maybe you wouldn't.
>

Yep.

>> >> If an MPI is freed without setting IN_ORDER then we could guess that
>> >> it
>> >> have been skipped.
>> >
>> > Frame sources cannot be allowed to skip frames. Only the destination
>> > requesting frames can skip them.
>> If this rule is removed then IN_ORDER don't have any meening. Usually
>> filter that makes such frames is broken. If a filter that wants to
>> remove
>> dublicated frames may set flag SKIPPED (well if such flag exists;)
>> SKIPPED/INVALID is requared because there are always 2 mpi's that point
>> to
>> one buffer (vf1->out and vf_2->in )
>

> I misunderstood IN_ORDER. SKIPPED makes sense now, it's just not quite
> the way I would implement it.
>
>> >> Skipping/Rebuilding
>> >
>> > This entire section should be trashed. It's very bad design.
>> did i said somewhere - not finished?
>
> Yes. IMO it just shouldn't exist, though. It's unnecessary complexity
> and part requires sacrificing performance.
>

Not really. When a rebuild request appears, a filter may ignore it and
skip the frame instead of processing it.
I will try to add a new example.

Let's say that I liked your idea for aspect processing: there is only one
SAR (sample aspect ratio). We started decoding and we have displayed some
frames. But the user decides that (s)he doesn't like the resolution and
switches it (vo_sdl, for example, has a key for switching resolution). Now
the DAR has changed (e.g. from 4:3 to 16:9). This means that the SAR of
the image should be changed. In the usual case all images and buffers
should be flushed, including some of the buffers in temporal filters. In
other words we have to “seek” or to start building the video chain again
(e.g. vd_ffmpeg::init_vo).


What I do instead is give the filter the ability to decide whether it can
and wants to continue with the new parameters, or simply to skip frames,
emulating a “seek” flush.

As you may have guessed, the aspect is transferred by get_image and stored
in the MPI.

There is another side effect – at the same time there may be images with
different aspects.
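
As a hypothetical illustration (the struct, field and flag names are
invented for the example): the SAR travels inside each image, and a filter
that still holds already-processed frames with the old aspect either marks
them for rebuild or skips them:

  /* Hypothetical: the aspect is carried per image, set at get_image time. */
  struct mpi_with_sar {
      int sar_num, sar_den;    /* sample aspect ratio of this frame */
      int flags;               /* IN_ORDER, SKIPPED, REBUILD, ...   */
  };

  #define VP_FLAG_SKIPPED  1
  #define VP_FLAG_REBUILD  2

  /* Called on the frames a filter still holds when the SAR changes. */
  static void on_aspect_change(struct mpi_with_sar *pending, int n,
                               int can_reprocess)
  {
      int i;
      for (i = 0; i < n; i++) {
          if (can_reprocess)
              pending[i].flags |= VP_FLAG_REBUILD;   /* redo with new SAR */
          else
              pending[i].flags |= VP_FLAG_SKIPPED;   /* never show a frame
                                                        with the wrong aspect */
      }
  }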

>> >> Now the skipping issue is rising. I propose 2 flags, that should be
>> >> added like IN_ORDER flag, I call them SKIPPED and REBUILD. I thought
>> >> about one common INVALID, but it would have different meaning
>> >> depending from the array it resides (incoming or outgoing)
>> >> SKIPPED is requared when a get_image frame is gotten but the
>> >> processing is not performed. The first filter sets this flag in the
>> >> outgoing mpi, and when next filter process the date, if should free
>> >> the mpi (that is now in the incoming). If the filter had allocated
>> >> another
>> >> frame, where the skipped frame should have been draw, then it can
>> >> free it by setting it as SKIPPED.
>> >
>> > Turn things around in the only direction that works, and you don't
>> >need an image flag for SKIPPED at all. The filter _requesting_ the image
>> > knows if it intends to use the contents or not, so if not, it just
>> > ignores what's there. There IS NO CORRECT WAY to frameskip from the
>> > source side.
>> I'm not talking about skipping of frame to maintain A-V sync.
>
> OK, misunderstood.
>
>> And decoders are from the source side, they DO skip frames. And in this
>> section I use SKIPPED in meaning of INVALID, as you can see from the
>> quote.
>
> I couldn't tell. If the codec skipped a frame at user-request, it
> would also be invalid...
>
>> how many scaling filters are you planing to have? don't you know that
>> scale filter is slow?
>
> Yes, it's slow. vo_x11 sucks. My point is that the player should
> _never_ automatically do stuff that gives incorrect output.
>

Yep. That's what this rebuild/skipped/invalid mumbo-jumbo is about. The
filters know best whether they can recreate a frame or whether it is
better to skip it.

The problem is that I do out-of-order rendering, which in the main case
means that I have some frames that are already processed but not yet
displayed. When something changes, I need to rebuild these frames. If I
can't, I should not display them - skip them.



>> > Bad point 2: your "rebuild" idea is not possible. Suppose the scale
>> > filter has stored its output in video memory, and its input has
>> > already been freed/overwritten. If you don't allow for this,
>> > performance will suck.
>> If you had read carefully you would see that I had pointed that problem
>> too (with solution I don't like very much). That's the main reason this
>> section is not completed.
>
> :((
>
>>
>> >
>> >> [...]
>> >> -vf spp=5,scale=512:384,osd
>> >> [...]
>> >> Now the user turns off OSD that have been already rendered into a
>> >> frame. Then vf_osd set REBUILD for all affected frames in the
>> >> incoming array. The scale filter will draw the frame again, but it
>> >> won't call spp again. And this gives a big win because vf_spp could be
>> >> extremely slow.
>> >
>> > This is stupid. We have a much better design for osd: as it
>> > slice-renders its output, it makes backups (in very efficient form) of
>> > the data that's destroyed by overwriting/alphablending. It can then undo
>> > the process at any time, without ever reading from its old input buffers
>> > or output buffers. In fact, it can handle slices of any shape and size,
>> > too!
>> OSD is only EXAMPLE. not the real case.
>> Well then I had gave bad example. In fact REBUILD is necessary then
>> filter uses a buffer that is requested by the previous filter. Also if vo
>> invalidate the buffer by some reason, this is the only way it could
>> signal the rest of the filters.
>
> Invalidating buffers is a problem...
>
>> Yeh, these issues are raised by he way i handle mpi/buffer, but I have
>> not
>> seen any such system so far. Usually in such situation all filters will
>> get something like reset and will start from next frame. Of course this
>> could be a lot of pain in out-of-order scheme!
>
> It's not really too bad. Although ideally it should be possible to
> make small changes to the filter chain _without_ any discontinuity in
> the output video...
>
>> > This is an insurmountible problem. The buffers will very likely no
>> > longer exist. Forcing them to be kept will destroy performance.
>> You mean will consume a lot of memory?
>> huh?
>
> No. You might have to _copy_ them, which kills performance. Think of
> export-type buffers, which are NOT just for obsolete codecs! Or
> reusable/static-type buffers!
>
Well, I don't think that we have to copy them. If they are no longer
available, then we cannot do anything other than skip the frame instead of
rebuilding it.


>> >> 1. Interlacing - should the second field have its own PTS?
>> >
>> > In principle, definitely yes. IMO the easiest way to handle it is to
>> require codecs that output interlaced video to set the duration field,
>> and then pts of the second field is just pts+duration/2.
>> Why? Just because you like it that way?
>
> Yes. Any other way is fine too. Unfortunately it's impossible to
> detect whether the source video is interlaced or not (stupid flags are
> always wrong), so some other methods such as always treating fields
> independently are troublesome...
>
>> > Then don't send it to public mailing lists... :)
>> The author is never limited by the license, I own full copyright of this
>> document and I may set any rules on it.
>
> Yes but you already published it in a public place. :)
>
I can do it. I'm the author; I own the copyright.
You cannot do it. You are not the author.
Well, I guess that this may prevent you from quoting me on the mailing list.
But I said that it is for MPlayer developers' eyes only, so as long as
there are no users on this list you may quote me ;)
I will write a better license next time :)

>> > So, despite all the flames, I think there _are_ a few really good ideas
>> > here, at least as far as deficiencies in G1 (or even G2 VP) which we
>> > need to resolve. But I don't like Ivan's push-based out-of-order
>> > rendering pipeline at all. It's highly non-intuitive, and maybe even
>> > restrictive.
>> Huh, I'm happy to hear that there are good ideas. You didn't point
>> anything good. I see only critics&flames.
>
> Sorry, I wasn't at all clear. The best ideas from my standpoint were
> the ones that highlighted deficiencies in my design, e.g. the
> buffers-from-multiple-sources thing. Even though I flame them, I also
> sort of line your slice ideas, but basically every way of doing slices
> sucks... :(
>
> Another thing is the rebuild idea. Even though I don't see any way it
> can be done correctly with your proposal, it would be nice to be able
> to regenerate the current frame. Think of a screenshot function, for
> example.
>

There is an easier way to make a screenshot ;)
You just need a split filter that uses DR for one of the chains.
This filter will do no drawing, as it always does DR.
When the user wants a screenshot, the filter copies the current frame and
passes it down the second chain, which ends with vo_png or something like
it. Even a scale filter may be auto-inserted into the second chain ;)
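
In rough pseudo-C it could look like this (vf_split_priv and all the
helper functions are invented names, just to show the flow):

  /* Hypothetical split filter: zero-cost passthrough, copy on demand. */
  struct vp_mpi;
  struct vf_chain;
  struct vp_mpi *chain_get_image(struct vf_chain *c, const struct vp_mpi *like);
  void copy_mpi(struct vp_mpi *dst, const struct vp_mpi *src);
  void chain_put_frame(struct vf_chain *c, struct vp_mpi *mpi);
  void next_put_frame(struct vp_mpi *mpi);

  struct vf_split_priv {
      int want_screenshot;          /* set from a keypress, for example */
      struct vf_chain *png_chain;   /* second chain ending in vo_png    */
  };

  static void split_put_frame(struct vf_split_priv *p, struct vp_mpi *mpi)
  {
      if (p->want_screenshot) {
          struct vp_mpi *copy = chain_get_image(p->png_chain, mpi);
          copy_mpi(copy, mpi);                 /* the only memcpy, ever  */
          chain_put_frame(p->png_chain, copy); /* vo_png writes the file */
          p->want_screenshot = 0;
      }
      /* main chain uses DR, so there is nothing to draw here */
      next_put_frame(mpi);
  }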

>> > Actually, the name (VO3) reflects what I don't like about it: Ivan's
>> > design is an api for the codec to _output_ slices, thus calling it
>> > video
>> > output. (In fact, all filter execution is initiated from within the
>> > codec's slice callback!)
>> This is one of the possible ways. In the vo2 drafts I wanted to
>> implement
>> something called automatic sliceing- forcing filters to use slices even
>> when decoder doesn't support slicing. (I can nearly imagine the flames
>> you are thinking in the moment;)
>
> I understand what you're saying. I'm just strongly opposed to the main
> entry point being at the codec end. In particular, it does not allow
> cpu-saving frame dropping. Only in a pull-based system where you wait
> to decode/process a frame until the next filter wants it can you skip
> (expensive!) processing (or even decoding, for B frames!) based on
> whether the output is destined for the encoder/monitor or the
> bitbucket...
>
> Ultimately, slice _processing_ isn't very friendly to this goal. The
> more we discuss it, the more I'm doubting that slice processing is
> even useful. On the one hand it's very nice, for optimizing cache
> usage, but on the other, it forces you to process frames before you
> even want them. This is a _big_ obstacle to framedropping, and to
> smooth playback, since displaying certain frames might require no
> processing, and displaying others might require processing 2 or 3
> frames first... :((
>
>> Anyway my API makes all filter codecses. That's why the scheme looks so
>> complicated, and that's why simple filter is so necessary. The full
>> beauty
>> of the API will be seen only for people that make temporal filters and
>> adding/removing frames. This mean by you :O
>
> Perhaps you could port vf_pullup to pseudocode using your api and see
> if you could convince me?
>
>> > On the other hand, I'm looking for an API
>> > for _obtaining_ frames to show on a display, which might come from
>> > anywhere -- not just a codec. For instance they might even be
>> > generated by visualization plugins from audio data, or even from
>> /dev/urandom!
>> Oh, Could you explain why mine API cannot be used for these things?
>
> It's _called_ from the codec's draw_slice! Not very good at all for
> multiple video sources, e.g. music + music video + overlaid
> visualization.
>
>> > So, Ivan. I'll try to take the best parts of what you've proposed and
>> > incorporate them into the code for G2. Maybe we'll be able to find
>> > something we're both happy with.
>> Wrong, We need something that we both are equally unhappy with:)))
>
> Yes...
>
>> But as far as you code it is natural you to implement your ideas.
>
> Yes again.
>
> So, now let me make some general remarks (yes, this is long
> already...)
>
> After this email, I understand your proposal a lot better. The big
> difference between our approaches is that I treat buffers (including
> "indirect" buffers) as objects which filters obtain and hold onto
> internally and which they only "pass along" when it's time to display
> them, while you treat buffers as entities which are carefully managed
> in a queue between each pair of filters, which can be processed
> immediately, and which are only "activated" (IN_ORDER flag) when it's
> actually their time.
>
> Here are some things I like better about your approach:
> - It's very easy to cancel buffers when unloading/resetting filters.
> - Buffer management can't be 'hidden' inside the filters, meaning that
> we're less likely to have leaks/crashes from buggy filters.
> - Processing can be done in decoding order even when slices aren't
> supported (dunno whether this actually happens).
> - Slices are fairly restricted, easing implementation.
>
> And here are some things I really don't like about your approach:
> - It's push-based rather than pull-based. Thus:
> - No good way to handle (intentional) frame dropping. This will be a
> problem _whenever_ you do out-of-order processing, so it happens
> with my design too. But the only time I do OOO is for slices...

Actually, there is a way, and it works the very same way.
> - Slices are fairly restricted, limiting their usefulness.
> - Having the chain run from the decoder's callback sucks. :(
> - It doesn't allow "dumb slices" (small reused buffer).
> - It doesn't have a way to handle buffer age/skipped blocks (I know,
> my design doesn't solve this either...)
> - My YV12->NV12 conversion might not be possible with your buffer
> management system...?
>
> Now that my faith in slices has been shaken, I think it would be
> really beneficial to see some _benchmarks_. Particularly, a comparison
> of slice-rendering through scale for colorspace conversion (and
> possibly also scaling) versus non-slice. If slices don't help (or
> hurt) for such complex processing (due to cache thrashing from the
> scale process itself), then I would be inclined to throw away slices
> for everything except copying to write-only DR buffers while
> decoding... On the other hand, if they help, then they're at least
> good for in-order rendering (non-B codecs).
There is a nice cache examination tool in the valgrind debugger.
If I remember right, the last time I ran it I got about 30% cache hits.
Anyway, in the general case slices give a 5-10% speedup, and that is by
using them only for B-frames (in order).
I think you can imagine the speedup ;)))

>
> And Ivan, remember: a (pseudocode?) port of vf_pullup to your layer
> might be very useful in convincing me of its merits or demerits. Also
> feel free to criticize the way pullup.c does all its own internal
> buffer management -- perhaps you'd prefer it obtain buffers from the
> next filter. :) This can be arranged if it will help.
Will do it later (in the next mail :)
I'm first going to add a few simple example filters ;)

>
> As much as I dislike some of your ideas, I'm open to changing things
> to be more like what you propose. I want G2 to be the best possible
> tool for video! And that matters more than ego/flames/eliteness/etc.
> Maybe you'll even get me to write your (modified) design for you, if
> you come up with convincing proposals... :))
>
> Rich
>
As a whole, my current design only makes sense when using slices.

The whole concept is to process the data right after it has been decoded,
while it is still in the cache. For this you need two things: to process
the data in the order it is decoded (out-of-order is in fact decode
order), and to process it in small pieces that fit into the cache, but not
so small that function call overhead dominates.
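
A back-of-the-envelope sketch of that trade-off (the strip size and the
function names are purely illustrative):

  /* Hypothetical: run the whole chain on cache-sized strips of rows. */
  void run_filter_chain_on_rows(unsigned char **planes, int y, int rows);

  #define SLICE_ROWS 16   /* one macroblock row: big enough to amortize the
                             per-call overhead, small enough to stay cached */

  static void decode_slice_cb(unsigned char **planes, int y, int h)
  {
      int row;
      for (row = y; row < y + h; row += SLICE_ROWS) {
          int rows = (row + SLICE_ROWS <= y + h) ? SLICE_ROWS : (y + h - row);
          /* these rows were just written by the decoder, so every filter
           * below reads them while they are still hot in the cache */
          run_filter_chain_on_rows(planes, row, rows);
      }
  }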

Best Regards
   Ivan Kalvachev
  iive



