[MPlayer-G2-dev] vp layer and config

D Richard Felker III dalias at aerifal.cx
Thu Dec 18 01:04:49 CET 2003


Time to address what's been solved and what hasn't:


On Mon, Dec 15, 2003 at 10:13:02AM +0100, Arpi wrote:
> some issues to solve:
> - runtime re-configuration (aspect ratio, size, stride, colorspace(?) changes)

There are two issues here:
1. New cfg-system configuration from the user/calling app.
2. New config() from the previous filter/codec.

Actually, from an implementation standpoint, both are fairly similar. A
given node in the pipeline should be able to support either, both, or
neither type of reconfiguration.

If a node does not support reconfiguration, the vp layer should close
and reopen it with the new configuration.

If a reopen would cause a discontinuity in the video (think temporal
filters), then the vp layer should instead insert conversion filters to
avoid reconfiguration, if possible.
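
To make that concrete, here's a rough sketch of how the capability
flags and dispatch might look. None of these names are the real g2
API; they're purely illustrative:

struct vp_format { int width, height, imgfmt; };  /* placeholder format */

#define VP_RECONF_USER   1   /* new cfg-system config from the user/app */
#define VP_RECONF_CHAIN  2   /* new config() from the previous node     */

struct vp_node {
    int reconf_caps;         /* ORed VP_RECONF_* flags; 0 = none        */
    int (*reconfig)(struct vp_node *n, const struct vp_format *fmt);
};

/* How the vp layer might dispatch a config change coming down the chain: */
static int vp_chain_reconfig(struct vp_node *n, const struct vp_format *fmt)
{
    if (n->reconf_caps & VP_RECONF_CHAIN)
        return n->reconfig(n, fmt);   /* node handles it in place */
    /* Otherwise: close and reopen the node with the new config, or, if a
     * reopen would break temporal continuity, insert conversion filters
     * in front of it so it never sees the change. */
    return -1;
}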

Summary: NOT done. Some of this is outside the realm of the vp layer
and more related to the config layer, so I'm willing to ignore it for
now.

> - aspect ratio negotiation through the vf layer to the vo
>   (pass thru size & aspect to the vo layer, as some vos (directx, xv)
>   don't like all resolutions)

I'm not quite sure what this one means. IMO, passing the sample aspect
ratio all the way through the chain handles all the problems. The vo
driver is free to display the video at the wrong aspect (if it's not
capable of scaling), or to reject the config request and force a
software scaler to be loaded to fix the aspect.
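
A minimal sketch of what I mean, assuming the format carries the
sample (pixel) aspect ratio as a num/den pair (names illustrative):

#include <stdint.h>

/* A vo that can scale derives the display width from the stored width
 * and the sample aspect ratio it got through the chain.  A vo that
 * can't scale either shows the image at the wrong aspect or rejects
 * the config, forcing a software scaler to be loaded in front of it. */
static int display_width(int stored_w, int sar_num, int sar_den)
{
    return (int)((int64_t)stored_w * sar_num / sar_den);
}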

> - window resizing issue (user resizes vo window, then reconfigure scale
>   expand etc filters to produce image in new size)

Addressed in a former post. The basic idea is that the app/interface
should insert the filters and control them directly itself, rather
than trying to pass messages down the chain.
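
Something like the following, purely as a sketch;
pipeline_find_or_insert and scale_set_output_size are made-up names
for whatever the real calls end up being:

struct pipeline;
struct vp_node;

/* hypothetical helpers the interface would use */
struct vp_node *pipeline_find_or_insert(struct pipeline *p, const char *name);
void scale_set_output_size(struct vp_node *scale, int w, int h);

/* Resize handler living in the app/interface, not in the vp layer: it
 * owns its own scale filter and reconfigures it directly on a window
 * resize, instead of pushing a resize message down the chain. */
void ui_on_resize(struct pipeline *p, int new_w, int new_h)
{
    struct vp_node *scale = pipeline_find_or_insert(p, "scale");
    scale_set_output_size(scale, new_w, new_h);
}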

> - better buffer management (get/put_buffer method)

Done, mostly. ;)

There are still some nasty issues to work out regarding how to clean
up when closing a node while another node still holds locks on its
buffers.
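
Roughly the shape of the problem, with made-up names rather than the
real buffer struct:

struct vp_node;

struct vp_buffer {
    unsigned char *planes[4];
    int stride[4];
    int lock_count;            /* how many nodes still hold this buffer */
    struct vp_node *owner;     /* allocating node; NULL once it's closed */
};

void free_orphaned_buffer(struct vp_buffer *b);   /* hypothetical */

/* Releasing a lock: the nasty case is a buffer whose owner closed while
 * someone downstream still held a lock on it -- the buffer has to
 * outlive its node and be freed by the last unlock. */
static void vp_buffer_release(struct vp_buffer *b)
{
    if (--b->lock_count == 0 && !b->owner)
        free_orphaned_buffer(b);
}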

> - split mp_image to colorspace descriptor (see thread on this list)
>   and buffer descriptor (stride, pointers), maybe a 3rd part containing
>   frame descriptor (frame/field flags, timestamp, etc so info related to
>   the visual content of the image, not the phisical buffer itself, so
>   linear converters (colorspace conf, scale, expand etc) could simply
>   passthru this info and change buffer desc only)

Done.
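
Roughly, the split looks like this (member names here are illustrative,
not necessarily the exact fields):

/* Static properties of the pixel format -- shared by all images in it: */
struct colorspace_desc {
    unsigned int imgfmt;
    int bits_per_pixel;
    int num_planes;
    int chroma_x_shift, chroma_y_shift;
};

/* The physical buffer -- the only part linear converters need to change: */
struct buffer_desc {
    unsigned char *planes[4];
    int stride[4];
};

/* The visual content of the frame, independent of how it's stored --
 * passed through untouched by colorspace conversion, scale, expand, etc.: */
struct frame_desc {
    double pts, duration;
    int fields;              /* interlacing / field and frame flags */
};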

> - correct support for slices (note there are 2 kind of strides: one
>   when you call next filter's draw_slice after each slice rendering
>   to next vf's buffer completed, and the other type is when you have
>   own small buffer where one slice overwrites the previous one)

Mostly done. See slices thread regarding various issues on slice
restrictions which we may need to consider.
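
For reference, the two cases sketched against a g1-style draw_slice
signature (the actual g2 signature may differ):

struct vp_node;

/* Case 1: src[] points directly into the next filter's buffer, so
 * stride[] is that buffer's stride and each call just announces a
 * finished region.
 * Case 2: src[] points into a small private buffer that every new
 * slice overwrites, so stride[] belongs to that scratch buffer and the
 * next filter must consume the data before returning. */
void draw_slice(struct vp_node *next, unsigned char *src[4],
                int stride[4], int w, int h, int x, int y);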

> - somehow solve framedropping support
>   (now its near impossible in g2, as you hav eto decode and pass a
>   frame through the vf layer to get its timestamp, to be used to
>   decide if you drop it, but then it's already too late to drop)

If duration is valid, you have an easy solution: the present frame's
duration is the next frame's rel_pts!

Otherwise you're pretty much out of luck. It would be possible to put
a framedropping filter at an arbitrary place in the video pipeline and
have the calling app control it, but I expect bad results.

IMO, the best solution is to use duration when it's valid, and
otherwise, wait to start dropping frames until we're already behind
schedule.
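
In code, roughly (illustrative names, not actual g2 ones):

/* Decide whether to skip decoding the next frame.  If the current frame
 * carries a valid duration, the next frame's rel_pts is already known
 * (it equals that duration), so we can tell in advance whether the next
 * frame would be late.  Without a valid duration we only drop once
 * we're already behind schedule. */
static int drop_next_frame(double now, double cur_display_time,
                           double cur_duration, int duration_valid)
{
    if (duration_valid)
        return now > cur_display_time + cur_duration;  /* next frame late */
    return now > cur_display_time;                     /* already behind  */
}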

So, I consider this issue closed. :)

> i think the new vf layer is the key of near everything.

I still like this line. :))))))))

Rich



