[MPlayer-dev-eng] DxVA in MPlayer

Reimar Döffinger Reimar.Doeffinger at gmx.de
Sat Nov 20 14:08:59 CET 2010


On Sat, Nov 20, 2010 at 02:37:31PM +0200, Georgi Petrov wrote:
> > Um, in that case the GPU and CPU memory are likely to be shared,
> > and there shouldn't be much if any performance issue with
> > choosing the "readback" method.
> 
> This sounds logical. However, Laurent told me that using VLC
> (readback) results in a slow frame rate and poor decoding (maybe only
> on his hardware), while MPC-HC is okay. Laurent, which h/w did you use
> to perform the comparison?

Be careful with driver versions: my experience with ATI is that the
driver version can make a 4x or greater difference in both readback
and upload speed.

> The problem that was mentioned is subtitles. If we don't do
> readback, how are we supposed to render subtitles? I'm pretty sure
> that EVR supports, how should I put it, "surface mixing". Can't we
> just render subtitles on top of the video surface?

Yes, but I warn you, I will not agree to
1) yet another approach for rendering subtitles
2) duplicating code from existing VOs

We intend to change the subtitle rendering to support more features
(like RGBA subtitles), and the 4 different implementations we already
have are at least 2 too many; I will not accept yet another one (of
course some parts must be reimplemented for each VO, but those parts
must be minimal).

> > Yes, but don't underestimate that if you want it to be good-quality.
> > The direct3d one did take some time to become stable, and this one
> > seems more difficult.
> > Since I do not really know anything about EVR, subtitles/OSD might
> > even be a lot more difficult...
> 
> This is true. However, I have gained some skills and now have some
> experience. I read the EVR documentation and it doesn't seem much
> different or more complicated, but that is only at first sight. In
> short - I will succeed, eventually.

Experience does not make testing magically faster, and performance
issues are something you will only notice through testing; API
descriptions generally fail to specify which functions must be fast
and which need not be.

> > Advanced features like hardware deinterlacing may be or may become
> > available through EVR at some point and are rather unlikely to be
> > available via Direct3D. But I don't know what exactly the status
> > is there.
> 
> Yes, hardware deinterlacing is available and we can expose it via a
> command-line switch, for example.

There is _no_ need for anything additional; enabling/disabling this
is already a solved problem thanks to VDPAU.

> The Atom h/w in question has GMA500 GPU (codename Poulsbo). It is
> capable of VLD+iMDCT+MC+LF (i.e. everything).
> 
> See: http://www.netbookmarket.net/intel-gma950-vs-gma500/

Ok, maybe I misremembered or adding it was a driver-level fix.

> About the implementation details discussed in the previous e-mail -
> I think I can't grasp the big picture before starting the
> implementation, so it is more important to focus on how to start
> doing it!

My opinion: start with read-back.
Add an AVHWAccel that provides get_buffer and draw_slice functions,
and call them in FFmpeg at the places where it would otherwise hook
into the application-provided versions.
Then add a readback at the end of that function.
At that point you hopefully have something working and just need to
decide on how to get it integrated into FFmpeg.
Possibly this can even be done in a way those functions can be reused
when not doing readback.
Of course that still leaves the need for DXVA1. It might be easier
to start with that, but it possibly is also less useful to have.

