[MPlayer-G2-dev] Re: Subtitles demuxer
Arpi
arpi at thot.banki.hu
Thu Jul 10 18:20:09 CEST 2003
Hi,
> > > I started to do some g2 hacking with $SUBJ. I added a subtitles stream
> > > to the demux_find_streams function and wrote a simple mpsub demuxer.
> > >
> > > To test that I added a quick hack to test-play to show the subs on the
> > > terminal. Sadly I have atm no good movie / subtitle combination
> > > to do a proper sync test, but it seems to be ok.
> > >
> > > The demuxer doesn't do autodetection atm, so you have to force it:
> > > test-play -sub subfile -subdemuxer txtsub
> >
> > the patch looks ok, i'll apply
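(side note: the mpsub TIME format is dead simple -- each block is a
"<delay> <duration>" pair in seconds, the delay being relative to the end of
the previous sub, followed by the text lines and a blank line. a minimal
standalone parsing sketch, nothing to do with the actual patch:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    FILE *f;
    char line[512];
    float now = 0.0f, delta, dur;
    if (argc < 2 || !(f = fopen(argv[1], "r"))) return 1;
    while (fgets(line, sizeof(line), f)) {
        if (!strncmp(line, "FORMAT=", 7)) continue;   /* header line */
        if (sscanf(line, "%f %f", &delta, &dur) != 2) continue;
        now += delta;                                 /* start of this sub */
        printf("[%.2f -> %.2f]\n", now, now + dur);
        while (fgets(line, sizeof(line), f) && line[0] != '\n' && line[0] != '\r')
            fputs(line, stdout);                      /* subtitle text */
        now += dur;                                   /* next delay is relative to this end */
    }
    fclose(f);
    return 0;
}
)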
> >
> > > Now this is only a quick hack to have something coming out of the
> > > demuxer level. I'd like to know what the plans are for subtitle
> > > decoding/filtering/output. If there are any ;)
> >
> > plans: not really, i have only ideas.
> > someone really should sit down and design the API for subtitle/osd
> > stuff. maybe these 2 (osd and subs) shouldn't be mixed (keep them as 2
> > independent layers).
> I think so too.
>
> > what do we need:
> > - define subtitle stream formats (like ascii, html?, spu etc)
> Yes. Most text stuff won't need much info, but stuff like spu needs
> resolution, colorspace, extra header and so on. While looking at
> sh_video i think 90% of the fields present there might be useful
> for subtitles too.
imho it's not a good idea to share that struct, we should define sh_subtitle
then
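something in this direction maybe (just a rough sketch to make the idea
concrete, none of these fields/names exist anywhere yet):

typedef struct sh_subtitle {
    int sid;                   /* stream/track id from the demuxer */
    unsigned int format;       /* fourcc-like tag: plain text, spu, ... */
    /* mostly for bitmap formats like spu: */
    int width, height;         /* resolution the subs were authored for */
    unsigned int *palette;     /* colorspace/palette info, if any */
    unsigned char *extradata;  /* extra header from the container */
    int extradata_len;
    /* mostly for text formats: */
    char *charset;             /* source codepage, for conversion filters */
    float fps;                 /* for frame-number based timing formats */
} sh_subtitle_t;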
> > - maybe define subtitles decoder/filter/encoder layer (like vd/vf/ve in
> > video/)
> > sub filters may do things like subdelay/fps change, codepage
> > conversion, word-wrapping etc. en/decoders may do spu<->bitmap
> > conversion, text->bitmap->spu, spu->ocr->text :)
> Ok glad to see that we are thinking about the same thing :)
:)
> Being able to use standard vf would be nice, alpha would be nice too.
disagree
mixing the vf layer into subtitles is a very bad idea imho
and anyway most vf filters are useless for subtitles, while it would add
extra complexity to the vf layer
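a dedicated subtitle filter entry could stay much smaller than vf anyway,
something like this (again only a sketch, all names invented):

typedef struct sub_packet {
    double start, end;         /* presentation interval in seconds */
    unsigned int format;       /* text, raw spu, decoded bitmap, ... */
    void *data;
    int len;
} sub_packet_t;

typedef struct sub_filter {
    const char *name;          /* e.g. "subdelay", "codepage", "wordwrap" */
    int  (*config)(struct sub_filter *sf, const char *args);
    int  (*process)(struct sub_filter *sf, sub_packet_t *pkt); /* may emit 0..n packets downstream */
    void (*uninit)(struct sub_filter *sf);
    struct sub_filter *next;   /* simple linked chain, like the vf chain */
    void *priv;
} sub_filter_t;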
> > we also need to define the OSD api. it is harder.
> > wanted features, ideas:
> > - allow several independent osd "zones".
> > for example it could allow putting osd (including subs) below & above
> > the video image, but not over it, thus allowing full DR.
> > or it could allow OSD on an external LCD device or a second display (for
> > example play the video to TVout but display the OSD on a monitor/gui/lcd)
> > or it could define one osd zone over the image and another below the
> > image, so the over-image one would be scaled by the BES (it could be used
> > to display progress bar/volume symbols etc) while the below one (unscaled)
> > could be used for nice subtitles.
> With LCD do you mean graphic or text devices?
i meant graphic, but now that you mention it, we should think of text ones too
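the zones idea could look roughly like this (pure sketch, names made up,
nothing decided):

typedef enum osd_zone_placement {
    OSD_ZONE_OVER_VIDEO,       /* composited onto the frame, scaled with it */
    OSD_ZONE_ABOVE_VIDEO,      /* outside the image, so full DR stays possible */
    OSD_ZONE_BELOW_VIDEO,
    OSD_ZONE_EXTERNAL          /* second display, LCD, gui statusbar, ... */
} osd_zone_placement_t;

typedef struct osd_zone {
    osd_zone_placement_t place;
    int w, h;                  /* pixels, or chars for text-only devices */
    int text_only;             /* character LCDs, aalib-style targets */
    struct osd_zone *next;
} osd_zone_t;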
> > - it should support differential remove/draw, like in g1, so devices with
> > hardware osd but slow osd-ram access (like DVB or PVR-350) could
> > benefit.
> > - it should give the UI full control over the osd, so the UI can put its own
> > messages there. it would be better if the UI moves the subtitle osd to the
> > osd engine, maybe via setting a callback.
> > - support for direct rendering compressed subs, ie. direct dvd->dxr3 spu
> > transfer without going through spu decoding, the osd engine and re-encoding.
> > maybe it isn't worth the extra complexity?
> Plain text is also usable for aalib :) So this would mean that the
> subtitle chain must be able to output to the osd engine or to the vo.
or we should allow the osd layer to export in multiple formats, not only bitmap
(btw it should support colors too, not only Y+A)
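so every osd element could carry its own format tag, roughly (again just an
idea, not a proposal for the real struct):

typedef enum osd_format {
    OSD_FMT_TEXT,              /* plain string, usable by aalib/LCD style vo's */
    OSD_FMT_BITMAP_YA,         /* luma + alpha, like g1's draw_alpha path */
    OSD_FMT_BITMAP_RGBA,       /* full color, for vo's/hardware that can take it */
    OSD_FMT_SPU                /* still-compressed spu, for dvd->dxr3 passthrough */
} osd_format_t;

typedef struct osd_element {
    osd_format_t fmt;
    int x, y, w, h;            /* position inside its zone */
    void *data;                /* string or pixel/spu buffer, depending on fmt */
    int stride;
    struct osd_element *next;
} osd_element_t;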
A'rpi / Astral & ESP-team
--
Developer of MPlayer G2, the Movie Framework for all - http://www.MPlayerHQ.hu