[MPlayer-dev-eng] Colorspace conversion and image buffer format

Arpi arpi at thot.banki.hu
Mon Mar 4 22:19:18 CET 2002


Hi,

> > > I'm writing a native decoder for HuffYUV (fourcc HFYU).
> > > Image format can be RGB, RGBA (RGB plus alpha channel) and YUY2 (it
> > > will use new output format selection when it will be available).
> > Do you plan to do it for new libmpcodecs API, or for old one for now?
> I'm writing it for the new API, but I'll do both if needed: new 
> interface and decoding functions are in two files at the moment.
I want to finish moving to the new interface ASAP, but unfortunately I have
too much work nowadays :(
Anyway, I don't see the sense of supporting the old interface in new codecs.

> > Does it really use the alpha channel? For what?
> I don't know. Original source code offers this method. I'll ignore 
> alpha and print a warning.
ok

> > "(every frame is independent from others)."
> > 
> > MP_IMGTYPE_IP is the right format for codecs using prediction.
> Sorry. Bad wording on my side. Frames are independent. I need to read 
> data from the buffer I'm writing to, not from the previous frame. It's 
> something like this:
> 
> Row  n : .........abcdef.........
> Row n+1: .........ghiP
> 
> To decode pixel P I need to access pixels c, d and i of current frame.

Ok, I see.
So you're right: TEMP + the READABLE flag does this.
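
To make the dependency concrete, here is a minimal, self-contained sketch of this kind of in-frame prediction (a median predictor in the style of HuffYUV; the function names are illustrative, not MPlayer API). Each output pixel is predicted from pixels already written to the *same* destination buffer (left = i, above = d, above-left = c in the diagram above), which is exactly why the buffer must be readable:

```c
#include <stdint.h>

/* Median of (left, above, left+above-aboveleft), as used by
 * HuffYUV-style predictors. Names are illustrative. */
static uint8_t median3(int left, int above, int aboveleft)
{
    int mx = left > above ? left : above;
    int mn = left > above ? above : left;
    int g  = left + above - aboveleft;   /* gradient */
    if (g > mx) g = mx;
    if (g < mn) g = mn;
    return (uint8_t)g;
}

/* Decode one plane of residuals in place: every prediction reads
 * pixels previously written to dst, so dst must be readable
 * (MP_IMGFLAG_READABLE in libmpcodecs terms). */
static void decode_plane(uint8_t *dst, const uint8_t *residual,
                         int w, int h, int stride)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int left  = x > 0            ? dst[y*stride + x-1]     : 0;
            int above = y > 0            ? dst[(y-1)*stride + x]   : 0;
            int alft  = (x > 0 && y > 0) ? dst[(y-1)*stride + x-1] : 0;
            dst[y*stride + x] =
                (uint8_t)(median3(left, above, alft) + residual[y*stride + x]);
        }
}
```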

> Can I read them back from image buffer, or should I keep them in a 
> private buffer? If I can read them from image buffer, do I have to 
> request a readable buffer with MP_IMGFLAG_READABLE ? Are MP_TEMP 
> buffers write only (or readable, but very slow)?
It depends on the READABLE flag: if it's set, the buffer won't be placed in
video memory, so it's fast to read from.

Does this in-frame prediction mean you only ever need 2 lines of pixels?
It may be worth keeping these in a small private buffer, for much better cache
usage, and it allows TEMP buffers to be placed in slow write-only memory.
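
A rough sketch of that suggestion (illustrative names, not MPlayer API): keep the current and previous line in small private buffers, so the output buffer only ever sees sequential writes and may live in slow write-only memory:

```c
#include <stdint.h>
#include <string.h>

/* Same median predictor as the codec uses, but all reads go to two small
 * private line buffers; dst is only written, never read.
 * Assumes w <= 4096 for simplicity of the sketch. */
static void decode_plane_private(uint8_t *dst, const uint8_t *res,
                                 int w, int h, int stride)
{
    uint8_t prev[4096] = {0};   /* row y-1, zero for the first row */
    uint8_t cur[4096];          /* row y, filled left to right */

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int left  = x > 0 ? cur[x-1]  : 0;
            int above = prev[x];
            int alft  = x > 0 ? prev[x-1] : 0;
            int mx = left > above ? left : above;
            int mn = left > above ? above : left;
            int g  = left + above - alft;
            if (g > mx) g = mx;
            if (g < mn) g = mn;
            cur[x] = (uint8_t)(g + res[y*stride + x]);
        }
        memcpy(dst + y*stride, cur, w);  /* only sequential writes hit dst */
        memcpy(prev, cur, w);            /* cur becomes the "above" row */
    }
}
```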

> > > Other flags used are MP_IMGFLAG_YUV for YUY2 case and also
> > > MP_IMGFLAG_PLANAR for YV12 case.
> > You don't need these, they are set by mp_get_image()
> Now I'm starting to understand how it works. So, to request a YUV or 
> RGB buffer I'll have to call mpcodecs_config_vo(), it will set 
> sh_video->outfmtidx (I guess; it's not implemented yet) and then 
> mp_image_setfmt() will choose the right buffer format. Is that right?
yes.
As soon as you know the list of available output formats and the image
dimensions, you should call mpcodecs_config_vo(). It will initialize the -vo
driver (and may use vd->control() calls to query/change the outfmt).

Then you can start the actual decoding by calling mp_get_image() once for each
frame. It will return an allocated buffer (except if you requested buffer type
EXPORT) that you can render into. It will try to place this buffer in the vo's
memory, if possible (depending on the buffer type and stride limitations).
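
In a vd driver the sequence described above might look roughly like this (pseudocode; the names follow this mail, and the exact signatures in libmpcodecs may differ or be not implemented yet):

```
/* init: once image size and the list of supported output formats are known */
mpcodecs_config_vo(sh, width, height, preferred_outfmt);
    /* initializes the -vo driver; the outfmt may be negotiated
       back through vd->control() queries */

/* per frame */
mpi = mp_get_image(sh, MP_IMGTYPE_TEMP,
                   MP_IMGFLAG_READABLE | MP_IMGFLAG_ACCEPT_STRIDE,
                   width, height);
/* render decoded pixels into mpi->planes[], honouring mpi->stride[] */
```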

Once we have moved all codecs to the new interface, I can remove the old one
and start implementing stuff like the filter layer (intelligent colorspace
selection/conversion, common postprocessing, sw scaling etc.) and direct
rendering with any codec and vo.

> > But, if possible, make it possible to set the stride for the frame: add the
> > flag MP_IMGFLAG_ACCEPT_STRIDE and use the mpi->stride[] value instead of
> > the width when calculating buffer addresses. Direct rendering usually has
> > limited stride, so this feature is mostly required. It is also required for
> > tricks like crop/expand frame; I think it's useful for mencoder.
> Ok. I'll do.
thx
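
For reference, the stride-aware addressing amounts to something like this self-contained sketch (illustrative names; rows of a plane are `stride` bytes apart, which may be larger than the visible width, e.g. in padded vo memory):

```c
#include <stdint.h>
#include <string.h>

/* Fill a w x h plane inside a surface whose rows are `stride` bytes
 * apart. Row y starts at y*stride, NOT y*w -- this is what honouring
 * mpi->stride[] under MP_IMGFLAG_ACCEPT_STRIDE means. */
static void fill_plane(uint8_t *plane, int w, int h, int stride, uint8_t val)
{
    for (int y = 0; y < h; y++)
        memset(plane + y*stride, val, w);
}
```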

A'rpi / Astral & ESP-team

--
Developer of MPlayer, the Movie Player for Linux - http://www.MPlayerHQ.hu


