[MPlayer-G2-dev] Re: OSD, again :)

Alban Bedel albeu at free.fr
Mon Jul 21 12:00:41 CEST 2003


Hi Arpi,

On Mon, 21 Jul 2003 03:06:50 +0200 you wrote:

> Hi,
> 
> > > Hmm, interesting code, but far from perfect... :)
> > Sure :)
> > 
> > > Actually a better description of how the objects are related and
> > > what their purpose is, maybe telling me/us how rendering subtitles
> > > and osd symbols will be done using this API, would explain it
> > > better than RTFS.
> > hmm, RTF vo_osd.c :) A display is created/exported by vf/vo/whatever
> > (if needed it adds some areas). Finally the clients (main app, subtitle
> > renderer, etc) create the objects they need and attach them to the
> > display. The clients should only deal with the objects. Settings
> > specific to an object type (like setting the current string to
> > display) will probably go through control.
> > 
> > > > The system is based on what I called an "osd display". The display's
> > > > role is to manage draw/clear of the various objects currently
> > > > attached to the display. Once a display is configured you
> > > > just feed it with some mp_image and it will draw the objects.
> > > > The "osd display" doesn't do any drawing/clearing itself. It just
> > > > takes care of what has to be done; draws are handled by the objects
> > > > themselves, and clears by a callback.
> > > > The system makes the drawing/clearing decisions based on the "buffer
> > > > type". The display itself has a buffer type but it's also
> > > > possible to define some areas with a different buffer type. There
> > > > are 3 buffer types:
> > > 
> > > I like the basic idea, but:
> > > - it shouldn't mess with mpi. NEVER!
> > >   it will be used on non-mpi surfaces too, like vo buffer, lcd
> > >   device or spu encoder, or vo_dvb's osd buffer.
> > This is debatable :) mpi is nice atm because it can handle pretty much
> > any kind of data, and we already have some useful helpers for it :)
> > Anyway if we drastically reduce the number of colorspaces then, yes,
> > mpi doesn't make that much sense. However if drawing might be done in
> > planar YUV I still vote for mpi because it's nice for the object
> > writer.
> 
> i want to keep those layers independent, so not using vf/mpi functions
> in osd and vo etc.
> actually i put vf to the top, nothing outside of video/ should depend on
> it. vf interfaces (via wrappers) to vd, vo, osd.
Nothing in the current osd code depends on vf, besides the use of 2 headers,
imagefmt.h and mp_image.h. Anyway I don't plan to keep on using mpi,
don't worry :)
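To make the current design a bit more concrete, here is a rough sketch of
what the core structs could look like, without mpi (names and fields are
only illustrative, not the ones from the patch):

  /* Rough sketch of the core structs, only for illustration.
   * The display keeps a list of objects; objects draw themselves,
   * clearing goes through a callback given by the display creator. */
  #include <stdint.h>

  typedef struct osd_object  osd_object_t;
  typedef struct osd_display osd_display_t;

  struct osd_object {
      int x, y, w, h;                 /* where the object is drawn        */
      void (*draw)(osd_object_t *o,   /* the object draws itself          */
                   uint8_t *dst, int stride);
      osd_object_t *next;             /* next object attached to display  */
  };

  struct osd_display {
      int w, h;
      int buffer_type;                /* one of the 3 buffer types        */
      void (*clear)(void *ctx,        /* clear callback from vf/vo/...    */
                    int x, int y, int w, int h);
      void *clear_ctx;
      osd_object_t *objects;          /* attached objects (clients own them) */
  };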

> as you saw, i even want to handle vo-vo connection (parent+child) in
> vf_vo2 instead of the vo drivers, to keep the rest simple.
> 
> ok maybe i move too much code to vf, but i have some (arguable) reasons:
> i know the vf layer very well, and if we can keep all the rest (vd, vo,
> osd etc) simple (atomic) then we don't have to change APIs, fix bugs,
> add features etc in many places.
> 
> if you remember :), in g1 we couldn't do many API changes (or it was
> painful and buggy) due to complex code everywhere, including the atomic
> modules/plugins.
I do remember that pain ;)

> my design "idea" for g2 is to keep every layer simple and atomic, and to
> add complex wrappers to handle the mess.
I know, and I want the same.

> > > - there should be per-area (and not per-display) draw & clear
> > > functions
> > >   (the purpose of multiple areas in a display to handle areas with
> > >   different behaviour - so allowing different funcs is trivial)
> > Draw is currently done directly by the object, clear is per display.
> 
> I saw, but i don't like it.
> for example, the DVB card (and the Hauppauge PVR 250/350 too) has an
> object-oriented OSD, so you can define small areas with 1/2/4/8 bpp
> resolution (paletted rgb+alpha) and put osd there. the total memory
> usage of these areas is very limited :(
Ok, I wasn't aware of that. On the other hand most of the other stuff
on which the osd will be drawn doesn't behave like this. Also this implies
a completely different way of managing the objects. I need to think
more about this :)
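If we go the per-area way, I imagine it could look roughly like this
(just a sketch, all names are made up), with the bpp/palette fields there
only for that kind of hardware osd:

  /* Sketch: an area with its own draw/clear hooks and an optional
   * low-bpp palette, so hardware OSDs (dvb, pvr) could fit in too.
   * Purely hypothetical names. */
  #include <stdint.h>

  typedef struct osd_area osd_area_t;

  struct osd_area {
      int x, y, w, h;          /* position inside the display             */
      int buffer_type;         /* how objects here are cleared/redrawn    */
      int bpp;                 /* 1/2/4/8 for paletted hw areas, 0 = n/a  */
      const uint32_t *palette; /* paletted rgb+alpha, or NULL             */
      /* per-area hooks, instead of the per-display clear callback        */
      void (*draw) (osd_area_t *a, const uint8_t *src, int src_stride,
                    int x, int y, int w, int h);
      void (*clear)(osd_area_t *a, int x, int y, int w, int h);
      osd_area_t *next;
  };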

> > > - instead of TEMP/STATIC/LOCKED types (btw why is buffer_type in
> > > both
> > >   display and area structs? i guess the display one is
> > >   obsolete/unused?)
> > It's the type of the "root", ie of objects which don't fall in any area.
> 
> imho it's nonsense.
> the 'root' (space uncovered by areas) should be forbidden, ie never draw
> there. or do you want to allow overlapping areas, ie. a root drawable with
> a hole in the middle defined by a 'locked' type area? it isn't worth it.
imho forbidding the root as a whole is nonsense. Take the common case
where you have some black space around the movie. An area defines the space
where the movie is drawn. Unless you are in the rare case where the movie
needs a temp buffer, you have to handle stuff drawn on the movie differently
from stuff drawn on the black parts.
But if you forbid drawing on the root then what can you do? Define 4 areas
around the movie? That doesn't make any more sense.

The idea behind areas is just to define (if needed at all) how objects
(or parts of objects) found in a particular part of the display must be
cleared/drawn.

> imho handling root specially just makes osd code a bit more complex,
> with no real value.
I think you didn't understand that these "areas" are only hints, so to
speak. These hints are used by the display to properly clear/draw the
objects. Such a thing is imho required if the system must handle more
complex cases than just a movie which takes up the whole screen.
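To make the hint idea concrete, the movie-with-black-borders case would be
set up more or less like this (sketch; the type names, functions and the
exact TEMP/STATIC semantics here are only my reading, nothing is final):

  /* Hypothetical setup for a window with black borders around the movie. */
  enum { OSD_BUF_TEMP, OSD_BUF_STATIC, OSD_BUF_LOCKED };

  typedef struct osd_display osd_display_t;
  osd_display_t *osd_display_new(int w, int h, int buffer_type);
  void osd_display_add_area(osd_display_t *d, int x, int y,
                            int w, int h, int buffer_type);

  void setup_osd(int win_w, int win_h,
                 int mov_x, int mov_y, int mov_w, int mov_h)
  {
      /* the root covers the whole window; the black borders are drawn
       * only once, so objects there need an explicit clear (fill with
       * black) when they change */
      osd_display_t *d = osd_display_new(win_w, win_h, OSD_BUF_STATIC);

      /* the movie overwrites its rectangle every frame, so objects
       * there just have to be redrawn, nothing to clear */
      osd_display_add_area(d, mov_x, mov_y, mov_w, mov_h, OSD_BUF_TEMP);
  }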

> > >   there should be save and restore modes, describing how to do
> > >   these:
> > >    save modes:
> > >     - just draw (it's TEMP)
> > >     - save original pixels before draw
> > >       (if the osd object doesn't support this, we should save the
> > >       whole area)
> > >    restore modes:
> > >     - no need to restore
> > >     - fill with given color
> > >     - call external clear function
> > >     - restore saved original pixels
> > >    maybe it's better to combine these into a single mode parameter... so:
> > yes
> > >     - just draw (TEMP)
> > >     - save & restore original pixels
> > >     - fill with given color
> > >     - call external save & restore functions
> > ok
> > 
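Turned into code, the combined parameter could look something like this
(quick sketch, the names are just placeholders):

  /* The combined save/restore mode from the list above, as a sketch. */
  #include <stdint.h>

  typedef enum {
      OSD_MODE_TEMP,     /* just draw, nothing to save or restore          */
      OSD_MODE_SAVE,     /* save original pixels before draw, restore them */
      OSD_MODE_FILL,     /* restore by filling with a given color          */
      OSD_MODE_EXTERNAL  /* call external save & restore functions         */
  } osd_mode_t;

  typedef struct {
      osd_mode_t mode;
      uint32_t   fill_color;                        /* OSD_MODE_FILL      */
      void (*save)   (void *ctx, int x, int y, int w, int h); /* EXTERNAL */
      void (*restore)(void *ctx, int x, int y, int w, int h); /* EXTERNAL */
      void *ctx;
  } osd_mode_conf_t;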
> > > - rendering: we should add a generic luma+alpha osd renderer (like
> > > in g1), so
> > >   simple osd objects could use that. supporting many pixel formats
> > >   in every osd obj is nonsense and a lot of work. 
> > I think you're right on this. And anyway we need to cache the drawing
> > so
> 
> yes
> 
> > it doesn't really make sense to draw directly on the target buffer.
> > 
> > >   although this renderer should
> > >   optionally support colors. this renderer could do optimized orig.
> > >   pixel saving/restoring too (based on alpha value).
> > So we must define which colorspaces we support. Obviously 8bit gray +
> > 8bit alpha like in g1, and 8bit char for ASCII. For color, dunno: 24bit
> > RGB + 8bit alpha, or do we go for YUV + alpha too?
> 
> i agree with gray+alpha bitmap, with global color parameter to renderer
> (so you can render colored text as many ppl wanted earlier)
Good idea.
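A minimal sketch of what I have in mind for such a renderer, luma path only
and ignoring chroma, assuming 8bit gray + 8bit alpha bitmaps (names made up):

  /* Blend a gray+alpha bitmap into an 8-bit plane, modulated by a
   * global color. Only the luma path, chroma is left out. Sketch only. */
  #include <stdint.h>

  typedef struct {
      int w, h, stride;
      const uint8_t *gray;    /* 8-bit intensity */
      const uint8_t *alpha;   /* 8-bit opacity   */
  } osd_bitmap_t;

  static void osd_render_luma(uint8_t *dst, int dst_stride,
                              const osd_bitmap_t *b, uint8_t color_y)
  {
      int x, y;
      for (y = 0; y < b->h; y++) {
          for (x = 0; x < b->w; x++) {
              int a   = b->alpha[y * b->stride + x];
              int g   = b->gray [y * b->stride + x];
              /* modulate the glyph by the global color, then blend */
              int src = (g * color_y + 127) / 255;
              dst[y * dst_stride + x] =
                  (dst[y * dst_stride + x] * (255 - a) + src * a + 127) / 255;
          }
      }
  }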

> for text, i vote for 16bit or 32bit unicode.
> there are chinese and japanese mplayer users too :)
Ok. But afaik text displays only use 8-bit chars, dunno how it is for
chinese. It would be a bit stupid if 90% of the users end up
with c char -> unicode -> c char.
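At least for the latin1/ASCII case that conversion is nearly free;
something like this (sketch):

  /* latin1 maps 1:1 onto the first 256 unicode codepoints, so the
   * "c char -> unicode" step costs almost nothing for western text. */
  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  static uint32_t *latin1_to_ucs4(const char *s)
  {
      size_t i, len = strlen(s);
      uint32_t *out = malloc((len + 1) * sizeof(*out));
      if (!out)
          return NULL;
      for (i = 0; i < len; i++)
          out[i] = (unsigned char)s[i];
      out[len] = 0;
      return out;
  }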
 
> and we have to pass unicode to text->bitmap renderer, so unless you want
> to do codepage conversion inside the osd object, it's better to passthru
> as-is.
> 
> for color bitmaps, probably rgba32 (with alpha) and an 8bpp palettized
> format should be used. we only need yuv overlay for SPU subtitles, and
> they are already palettized things. also a palette allows several
> optimizations in the renderer, and assists hardware osd (spu, dvb) better.
> 
> the palette could come in yuv or rgb, doesn't really matter (easy to
> convert).
> 
> it's already 4 -> many converters...
yes :)
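For the palettized format I picture something like this (just a sketch,
made-up names):

  /* 8bpp (or less) palettized bitmap with an rgba palette.  The palette
   * could just as well hold yuv+alpha, it's easy to convert. */
  #include <stdint.h>

  typedef struct {
      int w, h, stride;
      uint8_t  *idx;           /* one palette index per pixel              */
      uint32_t  palette[256];  /* packed r,g,b,a (or y,u,v,a) entries      */
      int       colors;        /* spu only uses 4, dvb up to 256           */
  } osd_pal_bitmap_t;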

> > >   Maybe a generic font rendering engine could help a lot, so it
> > >   could cache font bitmaps, share them between osd objects, etc etc
> > That would be nice. I have to look, but I think FreeType has some
> > caching stuff. Perhaps that's enough?
> 
> probably it's better to cache the alphamapped chars (remember the
> runtime generated outlining+blur alphamaps)
> 
> imho it would be nice to have some char (and/or string) rendering
> function, and also some draw_box style thing in the osd core to help the
> osd objects.
Yes, would be nice :)
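For the caching, even a very naive codepoint -> alphamap table in the osd
core would already help. Roughly (sketch, everything here is hypothetical):

  /* Naive direct-mapped glyph cache: rendered (outlined/blurred)
   * alphamaps are kept per codepoint and shared between osd objects.
   * render() is whatever rasterizer we end up with (freetype + blur),
   * assumed here to malloc the glyph and its bitmaps. */
  #include <stdint.h>
  #include <stdlib.h>

  typedef struct {
      uint32_t code;           /* unicode codepoint                        */
      int w, h, stride;
      uint8_t *gray, *alpha;   /* rendered bitmap + alphamap (malloc'ed)   */
  } osd_glyph_t;

  typedef struct {
      osd_glyph_t **slot;      /* one bucket per hash value                */
      unsigned size;
      osd_glyph_t *(*render)(void *font, uint32_t code);
      void *font;
  } osd_glyph_cache_t;

  static osd_glyph_t *osd_glyph_get(osd_glyph_cache_t *c, uint32_t code)
  {
      unsigned i = code % c->size;
      if (c->slot[i] && c->slot[i]->code != code) {
          /* collision: drop the old glyph */
          free(c->slot[i]->gray);
          free(c->slot[i]->alpha);
          free(c->slot[i]);
          c->slot[i] = NULL;
      }
      if (!c->slot[i])
          c->slot[i] = c->render(c->font, code);
      return c->slot[i];
  }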

> > > - what about the spu and text (ascii) output/passthru ?
> > Text output only makes sense when the object's drawing function
> > supports it imho. For the rest we can try libaa conversion, but I doubt
> > it will really work :)
> > 
> > For spu I don't really know. We must output the osd to an encoder
> > and then pass the encoded stuff to the vo. So we probably need yet
> > another interface for these encoders.
> 
> No real need for a new interface, we'll define a way to pass osd data
> (remember the external draw function) in ascii or gray+alpha format to
> the vo (via DRAW_OSD-like controls/functions) so SPU-capable vo
> driver(s) can call the encoder and pass the data. Afaik only dxr3 can
> handle SPU. DVB and PVR have their own OSD pixel formats. AAlib handles
> text.
Imho the vo drivers shouldn't have to use the spu encoder themselves. That's
why I talked about another interface.

> In this case the vo driver will create the osd display, set format to
> what it can accept.
For SPU it's probably better and simpler to pass the encoded data
to the vo directly.
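i.e. something along these lines, so the vo only ever sees ready-made SPU
packets and never the encoder itself (sketch, all names are hypothetical):

  /* The osd/vo glue calls the spu encoder and only hands the resulting
   * packet to the vo.  Prototypes only, everything is hypothetical. */
  #include <stdint.h>
  #include <stddef.h>

  typedef struct {
      int w, h, stride;
      const uint8_t *gray, *alpha;   /* gray+alpha bitmap from the renderer */
  } osd_spu_bitmap_t;

  typedef struct {
      uint8_t *data;                 /* encoded SPU packet                  */
      size_t   size;
      int64_t  pts;
  } spu_packet_t;

  /* provided by the spu encoder module */
  spu_packet_t *spu_encode(const osd_spu_bitmap_t *bmp,
                           int x, int y, int64_t pts);

  /* provided by SPU-capable vo drivers (dxr3) through a control */
  int vo_spu_packet(void *vo, const spu_packet_t *pkt);

  static int osd_send_spu(void *vo, const osd_spu_bitmap_t *bmp,
                          int x, int y, int64_t pts)
  {
      spu_packet_t *pkt = spu_encode(bmp, x, y, pts);
      return pkt ? vo_spu_packet(vo, pkt) : -1;
  }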

> > For real passthru (ie only dvd subs "as is") we just get rid of all
> > this crap and pass the packets from the demuxer right to the vo :)
> > Ok, we need to process these packets a bit before passing them to the
> > vo iirc. But imho I don't think that we should bother with that here.
> 
> i tend to agree...
> maybe we could allow passthru (not using osd layer at all) or decoding,
> osd layer stuff, and finally encoding. this way users could add osd
> symbols and other stuff to the spu with relatively low cpu usage.
> > > - how to pass text to be displayed to the osd objects from outside
> > > (eg. UI) ?
> > control :) A more important question is how it will interact nicely
> > with the subtitle chain.
> 
> it's the job of the UI and the a-v sync playback loop/core.
> UI can define where (which osd object) to render the subtitles, and the
> A-V sync stuff will look up the right subtitle (from the demuxer) and
> pass it to the osd.
As long as there is no subtitle filter, that's ok.
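i.e. the A-V sync loop (or the UI) would end up doing little more than this
(sketch, hypothetical names):

  /* Pushing a subtitle line to an osd text object through control().
   * The command id and names are made up for illustration. */
  #include <stdint.h>

  typedef struct osd_object osd_object_t;

  #define OSD_OBJ_SET_TEXT 1

  /* generic per-object control, like in the other g2 layers */
  int osd_obj_control(osd_object_t *o, int cmd, void *arg);

  static int show_subtitle(osd_object_t *sub, uint32_t *ucs4_text)
  {
      /* the object itself decides how to render and cache the string */
      return osd_obj_control(sub, OSD_OBJ_SET_TEXT, ucs4_text);
  }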

> > Well the subtitle chain is still to be defined ;)
> 
> i'm not sure if we need subtitle decoders/filters now.
> esp. that the spu/ascii/bitmap conversion will be handled by osd/vo
> stuff.
Atm there is no need for subtitle filtering but we have to keep it in
mind ;)
	Albeu

PS: I'm going to .nl and .fr for holidays today, so my next reply
    is in 3 weeks :)
-- 

Everything is controlled by a small evil group
to which, unfortunately, no one we know belongs.



