[MPlayer-G2-dev] MID proposal

D Richard Felker III dalias at aerifal.cx
Sat Nov 1 20:31:44 CET 2003


On Sat, Nov 01, 2003 at 08:14:21PM +0100, Alex Beregszaszi wrote:
> Hi,
> 
> > > MID {
> > > imgfmt (?)
> > 
> > Rename to fourcc.
> Why?
> 
> We could create a new list of format fourccs, each indicating only
> one format described by a MID, but imgfmt is the ancient fourcc.
> 
> > > type (rgb or yuv)
> > 
> > Rename to colorspace or something.
> See the mail from Arpi.
>  
> > > bpp
> > > depth
> > 
> > What is the point of depth? How is it defined?
> Depth is the number of bits used to describe a pixel; bpp means the
> bits which are used to store a pixel. RGB24 has depth=24 but can have
> both bpp=24 and bpp=32.

I know that much. But how is depth defined for YUV? Is there even any
point in knowing the depth?
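For the RGB case, the depth/bpp distinction can be sketched like this
(the struct and names are invented for illustration, not part of the
proposal):

```c
#include <assert.h>

/* Hypothetical sketch of the depth/bpp distinction for packed RGB;
 * these names are made up for illustration, not a proposed API.
 *   depth = bits that actually carry color information
 *   bpp   = bits a stored pixel occupies in memory
 */
struct rgb_fmt {
    int depth;
    int bpp;
};

static const struct rgb_fmt rgb24 = { 24, 24 }; /* packed 3-byte pixels */
static const struct rgb_fmt rgb32 = { 24, 32 }; /* 24 color bits + 8 pad */

/* Bytes needed to store one line of w pixels. */
static int line_bytes(const struct rgb_fmt *f, int w)
{
    return w * (f->bpp / 8);
}
```

Both formats have depth=24, but a 640-pixel line takes 1920 bytes with
bpp=24 and 2560 with bpp=32; that is why the two numbers cannot be
collapsed into one field for RGB. How to define depth for YUV is the
open question.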

> > > fields (interlacing)
> > 
> > Hm? Does this just tell if the image is interlaced, or does it contain
> > other flags about the interlacing format?
> If we leave it in MID it could contain more flags. Dunno if it should be
> present or not.

IMO not. Info about interlacing definitely belongs in the video
pipeline, BUT most of the time it wouldn't even be known yet when
you're using MIDs, and as Arpi said, MIDs should come from a table of
constants.
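A table of MID constants might look roughly like this; the struct
layout, entries, and the FOURCC encoding are guesses based on the
fields discussed in this thread, not a real proposal:

```c
#include <stddef.h>

/* Hypothetical MID table; field names follow the thread, the values
 * are illustrative only. Note there is no interlacing field: that
 * info lives in the video pipeline, not in the static table. */
enum mid_type { MID_RGB, MID_YUV };

typedef struct mid {
    unsigned int fourcc;  /* the ancient imgfmt fourcc */
    enum mid_type type;
    int bpp;              /* stored bits per pixel */
    int depth;            /* meaningful bits per pixel */
} mid_t;

#define FOURCC(a,b,c,d) \
    ((unsigned)(a) | ((unsigned)(b)<<8) | ((unsigned)(c)<<16) | ((unsigned)(d)<<24))

static const mid_t mid_table[] = {
    { FOURCC('R','G','B', 24), MID_RGB, 24, 24 },
    { FOURCC('R','G','B', 32), MID_RGB, 32, 24 },
    { FOURCC('Y','V','1','2'), MID_YUV, 12, 12 },
};

/* Look a format up by fourcc; returns NULL if unknown. */
static const mid_t *mid_find(unsigned int fourcc)
{
    size_t i;
    for (i = 0; i < sizeof mid_table / sizeof mid_table[0]; i++)
        if (mid_table[i].fourcc == fourcc)
            return &mid_table[i];
    return NULL;
}
```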

> > The whole width, height, x, y, w, h system is nonsense. x,y are never
> > used and not even supported. They're much better handled by just
> > adjusting the actual pointers. Also lots of filters don't understand
> > the difference between w and width, which is very silly to begin with.
> Imho it makes sense, but a lot of filter/vo writers are dumb and
> can't tell the difference.
> 
> Width/height defines the dimensions of the whole image pointed to by
> the MPI. x,y,w,h describe a rectangle: the part of the image that
> should be drawn.

But that's not how it is right now. As it stands, width is essentially
the same thing as stride/bpp, which filters should not even care
about.
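The "just adjust the actual pointers" approach from above can be
sketched like this (the struct is hypothetical, not the real mpi_t;
packed formats only for brevity):

```c
/* Sketch: instead of carrying x,y in the image struct, a crop can
 * simply offset the plane pointer and keep the original stride.
 * Downstream filters then see an ordinary (smaller) image. */
struct image {
    unsigned char *plane;  /* start of pixel data */
    int stride;            /* bytes per line */
    int w, h;              /* visible dimensions in pixels */
};

/* Return a view of src cropped to (x,y,w,h), for a packed format
 * with bytespp bytes per pixel. No pixels are copied. */
static struct image crop_view(struct image src, int x, int y,
                              int w, int h, int bytespp)
{
    struct image v = src;
    v.plane = src.plane + y * src.stride + x * bytespp;
    v.w = w;
    v.h = h;
    /* stride is unchanged: lines of the view are src.stride apart */
    return v;
}
```

Filters after the crop never need to know about x,y at all; the
rectangle is encoded entirely in the pointer and dimensions.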

> Yes, there are two ways: only pass a MPI with the actual image, but that
> would still need x,y to be defined to place it correctly on the screen.
> The other way is to keep our current scheme and fix the filters. Imho
> this is the better approach.
> 
> Probably you will ask about the sense of this whole partial mpi
> stuff: it doesn't make much sense currently, but we could have some
> filters/decoders which will support partial decoding / filtering.

This is NOT a clean way to do it. It's very limited (it forces you to
have only one rectangle) and it also forces every filter to support
this bloated ugliness of working on partial images explicitly. I have
a much better design for the new vp layer:

Let's say you want to blur a small rectangle of your picture. Here's
how the filter chain works:


vd ----> vf_subfilter ----> vo
            |   ^
            |   |
            V   |
           vf_blur

vf_subfilter would pull an image from vd, then pull an image from
vf_blur, which would in turn pull from vf_subfilter, getting only the
small part of the image it was supposed to work on. The blurred image
would be returned to vf_subfilter, and then vf_subfilter would finish
things up and return the final image to vo. Of course direct rendering
and/or exporting would happen all over the place, so it would be very
fast.
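The control flow above can be sketched as follows; the node struct and
pull callbacks are invented for illustration (this is not the real G2
vp API), and a log string stands in for the actual image data:

```c
#include <string.h>

/* Sketch of the pull model: vo pulls from vf_subfilter, which pulls
 * the full frame from vd, then pulls the processed rectangle through
 * vf_blur; vf_blur in turn pulls from vf_subfilter, which hands it
 * only the small rectangle. We log each step to show the order. */
static char log_buf[256];
static void log_step(const char *s) { strcat(log_buf, s); }

struct node {
    const char *name;
    int (*pull)(struct node *n);
    struct node *source;   /* upstream node */
    struct node *sub;      /* sub-chain (vf_blur), subfilter only */
    int have_frame;        /* subfilter: full frame already pulled? */
};

static int vd_pull(struct node *n)
{
    (void)n;
    log_step("vd ");       /* decode a full frame */
    return 1;
}

static int blur_pull(struct node *n)
{
    n->source->pull(n->source);  /* get the sub-rectangle */
    log_step("blur ");           /* "blur" it */
    return 1;
}

static int subfilter_pull(struct node *n)
{
    if (!n->have_frame) {
        /* entry from vo: grab the full frame, then run the
         * sub-chain on just the rectangle */
        n->have_frame = 1;
        n->source->pull(n->source);  /* full image from vd */
        n->sub->pull(n->sub);        /* blurred rectangle comes back */
        n->have_frame = 0;
        log_step("subfilter ");      /* merge and hand frame to vo */
        return 1;
    }
    /* re-entry from vf_blur: hand out only the sub-rectangle */
    log_step("rect ");
    return 1;
}
```

Wiring the chain and pulling once from the vo side yields the order
vd, rect, blur, subfilter: the blur filter never knows it worked on a
partial image.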

Anyway, IMO the x,y stuff is not acceptable. It's too much of a burden
to ask every single filter to know how to process partial images by
itself, and there's no advantage.

Rich





