[MPlayer-dev-eng] DECODING AHEAD - (Initially Michael's idea)

Arpi arpi at thot.banki.hu
Tue Feb 26 20:08:32 CET 2002


Hi,

> 	Rewritten? What parts should be rewritten?
> 	I don't have a single idea about direct rendering, but I think
> you're trying to avoid a memcpy somewhere. In that case it would be very
> difficult with the current libavcodec, and maybe the speed gain would not
> be worth the work. I don't know DR buffers, but I suppose they are
> completely different on every card, so I suppose you should pass the
> buffers, strides and offsets, and libavcodec would have to worry about
> putting the data into this probably scattered buffer.

Ok. It seems none of you really knows what direct rendering means...
I'll try to explain now! :)

First of all, there are two different techniques, both called direct
rendering. The main point is the same, but they work differently.

method 1: decoding directly to externally provided buffers.
So, the codec decodes macroblocks directly into the buffer provided by the
caller. As this buffer will be read back later (for the MC of the next
frame), it's not a good idea to place such buffers in slow video ram. But:
there are many video out drivers that use buffers in system ram, and use
some kind of memcpy or DMA to blit them to video ram at display time.
For example, Xv and X11 (both normal and Shm) work this way: the XImage is
a buffer in system ram (!) and X*PutImage copies it to video ram. Only the
nvidia and ati rage128 Xv drivers use DMA, the others just memcpy it. Some
opengl drivers (including the Matrox one) also use DMA to copy from the
subteximage to video ram.
The current mplayer way: the codec allocates some buffer, and decodes the
image into that buffer. Then this buffer is copied to X11's buffer, and
then the Xserver copies that to video ram. So, one more memcpy than
required...
Direct rendering can remove this extra memcpy by using the Xserver's
memory buffer as the decoding buffer. Note again: it helps only if the
external buffer is in fast system ram.

method 2: decoding to internal buffers, but blitting after each macroblock,
including optional colorspace conversion.
Advantages: it can blit into video ram, as it keeps the copy in its
internal buffers for the next frame's MC. Skipped macroblocks won't be
copied to video ram again (except if the video buffer address changes
between frames -> hw double/triple buffering).
Just avoiding the blitting of skipped MBs means about 100% speedup (2
times faster) for low bitrate (<700kbit) divxes. It even makes it possible
to watch VCD resolution divx on a p200mmx with DGA.
How does it work? The codec works as usual, decoding macroblocks into its
internal buffer. But after each decoded macroblock, it immediately copies
that macroblock to video ram. It's still in the L1 cache, so this is fast.
Skipped macroblocks can simply be left out -> more speedup.
But, as it copies directly to video ram, it must do colorspace conversion
if needed (for example divx -> rgb for DGA).
Another interesting question with such direct rendering is the planar
formats. Eugene K. of Divx4 told me that he experienced worse performance
blitting yv12 blocks (copying 3 blocks to different (Y,U,V) buffers) than
doing the (really unneeded) yv12->yuy2 conversion on-the-fly.
So, the divx4 codec (with the -vc divx4 api) converts from its internal
yv12 buffer to the external yuy2 buffer.

libmpeg2 already uses a simplified variant of this: when it finishes
decoding a slice (a horizontal row of MBs), it copies it to the external
(video ram) buffer using a callback to libvo, so at least it copies from
L2 cache instead of slow ram. It took VOB decoding from 23% down to 20%
CPU on a p3 for me.

So, again, the main difference between method 1 and method 2:
method 1 stores the decoded data only once: in the external buffer.
method 2 stores the decoded data twice: in its internal buffer (for later
reading) and in the write-only slow video ram.

Both methods can give a big speedup, depending on codec behaviour and the
libvo driver. For example, IPB mpegs could combine them: use method 2 for
I/P frames and method 1 for B frames. mpeg2dec already does this.
For I-only video (like mjpeg), method 1 is better. For I/P video with MC
(like divx, h263 etc.), method 2 is the best choice.
I/P video without MC (like FLI, CVID) could use method 1 with a static
buffer, or method 2 with double/triple buffering.

I hope it is clear now.
And I hope even Nick understands what we are talking about...

Ah, and finally, the abilities of the codecs:
libmpeg2: can do methods 1 and 2 (but slice-level copy, not MB-level)
vfw: can do method 2, with a static external buffer
dshow: can do method 2, with a static or variable-address external buffer
odivx, and most opensource codecs like fli, cvid: can do method 1
divx4: can do method 2 (with the old odivx api it does method 1)
libavcodec, xanim: they currently can't do DR, but they export their
internal buffers. It's very easy to implement method 1 support, and a bit
harder, but possible without any rewrite, to do method 2.

So, dshow and divx4 already implement all the requirements of method 2.
libmpeg2 implements method 1, and it's easy to add to libavcodec.

Anyway, in an ideal world, all codecs would support both methods.
Anyway 2: in an ideal world, there would be no libvo drivers that keep
their buffer in system ram and memcpy it to video ram...
Anyway 3: in our really ideal world, every libvo driver would have its
buffers in fast system ram and do the blitting with DMA... :)


A'rpi / Astral & ESP-team

--
Developer of MPlayer, the Movie Player for Linux - http://www.MPlayerHQ.hu
