[MPlayer-dev-eng] libvo2

Arpi arpi at thot.banki.hu
Thu Nov 15 09:56:49 CET 2001


Hi,

> as some of you know my cvs updates were unappreciated (mainly due to 
> your idiotic mailservers taking 3 hours to deliver mail from the cvs 
it was your servers, not ours.

> mailinglist), so I got cut off from cvs access.
i can't see the connection.
it's written down, and you should know how to make patches. the only
difference is that instead of diff you type cvs commit.
you shouldn't make cosmetic changes, commit files with zero changes, or
commit non-working patches. it has nothing to do with your subscription
to cvslog.
if you're a newbie or unsure, do a cvs diff -u first, check that it's ok
and only commit then. even i usually do that before committing, to avoid
committing debug code or false comments.
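The review-before-commit workflow described above is roughly the following (the file name and commit message are made-up examples, not from this thread):

```shell
# Review pending local changes before committing them.
cvs diff -u > /tmp/pending.diff   # unified diff of everything you changed
less /tmp/pending.diff            # read it: look for leftover debug printfs,
                                  # stale comments, whitespace-only hunks
# Only when the diff looks right, commit the specific files:
cvs commit -m "dxr3: fix subpic timing" libvo/vo_dxr3.c
```

The point is that `cvs diff -u` shows exactly what `cvs commit` would send, so accidental cosmetic or debug changes are caught before they reach the repository.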

> I'm not going to stop development of it because of this, but I have 
> reprioritized.
you know.

> The dxr3 users are pleased with my work, in contrast to you others. So 
> I'll put all effort into getting the current libvo dxr3 plugin working 
> correctly (meaning win32 codec playback, audio and subpic playback 
> without locking video) before I go back to libvo2.
> I have created a draft of the features of libvo2 and I'll post it now to 
> get some feedback before I return to developing it. Most, if not all, of 
> the things in the spec are from several developers and users on the list 
> and I don't think that many additions have been made by me; I just wanted 
> to point that out so people don't start sending fan-mail to me.
> Libvo2 Core is _NOT_ a concern for device developers, the libvo2 core is 
> the glue between a device and the hard stuff (like pixel conversion, 
> scaling etc)
> I don't think it fully lists all the features since I had time to 
> implement some more before Arpi shit in his pants and closed off my cvs 
> access.
don't be so upset... *you* messed up cvs. it's a fact. i suggested (and
forced) you to send patches to the list instead of committing to cvs
without checking, until you read and understand cvs-howto.txt.

> I'll post some headers later on for commenting, but they are a bit far 
> from being in "feedback" mode right now. Comments ppl...
ok

> libvo2 specification draft
> 
>         Device Capabilities
>    
>     * Direct Rendering
>     * (better) Hardware decoder support, including giving hw decoders 
> access to the pts
>     * Hardware scaling, flipping, cropping etc
>     * control() interface mimicking ioctl() making it easy
>       for developers to add functionality to the device interface
>       (My personal opinion is to use this interface for as many
>        functions as possible)
>     * Simpler device development by letting libvo2 core handle
>       most of the general work (like pixel conversion, scaling, etc)
> 
>         Core Capabilities
> 
>     * Checking for the availability of a vo_* device (requires a
>       prototype implementation in all vo_ devices)
>     * Finding the fastest way of scaling, processing and format
>       conversion by scanning for available vo_ devices and their
>       capabilities
it's partially the job of the mplayer core, because it needs to interface
with the video codecs for colorspace and buffering (and should be mixed
with codec selection)

>       during startup. The user might want to output video using device
>       vo_X, but device vo_Y is capable of hardware scaling which vo_X
>       isn't, then vo2core will try to utilize vo_Y's scaling capabilities
>       but still rendering graphics through vo_X.
>       Say for instance you have a device in your computer containing a
>       programmable DSP (Digital Signal Processor) but this device has
>       nothing at all to do with video. Assuming someone would be
>       interested in writing microcode for the DSP to, say, do hardware
>       pixel conversion, this device would be a very useful vo_ device
>       although it's completely incapable of video output.
ehh

>     * Converting codec output to the vo_ device's desired input
>       format, and if possible directly rendering it to the device's
>       buffer.
direct rendering means direct rendering :)
so the codec renders into the driver's buffer, skipping the libvo2 core.
this is why i designed the libvo2 driver api to provide surfaces, and to
leave the rendering, doublebuffer handling etc. to the libvo2 core or the
codecs.

> Additional notes
> 
> Well, I'm not that good at writing development drafts, but I hope you 
> understand what we are moving towards. I will supply some libvo2 headers 
> for you to get a deeper understanding of what is to come.
> I need a lot of feedback, good, bad, whatever.
> 
> Someone on the mplayer-users list suggested a benchmark option which 
> would try different combos for a specified -vo device and write a 
> translation table (I'm suggesting something like 
> ~/.mplayer/vo_<device>.translation or something like that).
> This is a great idea, especially if I can get you to see the point in 
> providing vo_ devices which have nothing to do with video output but 
> still can be utilized for doing calculations in hardware!
what device are you thinking of? i can't imagine such a device.


A'rpi / Astral & ESP-team

--
mailto:arpi at thot.banki.hu
http://esp-team.scene.hu


