[MPlayer-dev-eng] DxVA in MPlayer

Uoti Urpala uoti.urpala at pp1.inet.fi
Fri Nov 19 13:15:38 CET 2010


On Fri, 2010-11-19 at 12:55 +0200, Georgi Petrov wrote:
> I'm Georgi Petrov and a year ago I developed the Direct3D renderer.
> Now I'm interested in implementing DxVA support in MPlayer. Yesterday

I don't know about the Windows/DxVA-specific issues, but I've worked
on VDPAU and can give some comments about the general infrastructure
(I haven't checked whether the API FFmpeg provides for DxVA differs
significantly).

> 2. Once the EVR custom presenter is ready, FFmpeg already has the
> needed infrastructure for offloading computations for H.264, VC-1,
> MPEG-2 and MPEG-4 ASP. Excuse me if I don't get it, but each one of
> these standards should have an "accelerated" implementation, which
> from MPlayer's point of view is a new codec.

They have a new codec type, but that's mainly to identify the decoder
used ("DxVA h264 decoding"); there should be no need to implement a new
codec module. See below.
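To make that concrete, here is a minimal sketch (not MPlayer's actual
code) of the format negotiation involved, assuming the current FFmpeg
names PIX_FMT_DXVA2_VLD and avcodec_default_get_format: vd_ffmpeg
installs a get_format callback on the AVCodecContext, and returning
the hardware pixel format from it is what switches the existing
decoder over to the accelerated path.

#include <libavcodec/avcodec.h>

static enum PixelFormat get_format(struct AVCodecContext *avctx,
                                   const enum PixelFormat *fmt)
{
    int i;
    /* fmt is a PIX_FMT_NONE-terminated list of candidate formats
       offered by libavcodec, hardware formats first. */
    for (i = 0; fmt[i] != PIX_FMT_NONE; i++)
        if (fmt[i] == PIX_FMT_DXVA2_VLD)
            return fmt[i];                      /* take the DXVA2 path */
    return avcodec_default_get_format(avctx, fmt); /* software fallback */
}

The DXVA2 hwaccel additionally expects avctx->hwaccel_context to point
at a filled-in struct dxva_context (libavcodec/dxva2.h), which is where
the decoder/surface objects created on the VO side come in.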

>  Then inside the codec
> there are places where one can hook and transfer the decoding part to
> DxVA/VA API/VDPAU at any given stage (IDCT, MC and so on). Is this
> right?

Most of the new code/functionality will be in the VO, which interfaces
with the outside APIs, not in a codec module. A high-level view of the
architecture is as follows:

The "decoder" used is vd_ffmpeg, but when used with a
hardware-accelerated format its output is not raw pixels but rather
packets suitable to be fed to the actual hardware decoding
implementation. The VO implementation will then feed these to the
hardware decoder to create the displayed picture.
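Using VDPAU as the example I know (a DXVA2 VO would make the analogous
IDirectXVideoDecoder calls instead), the "image" that reaches the VO is
a struct vdpau_render_state from libavcodec/vdpau.h. A rough sketch of
what the VO does with it, with the helper name invented for the
example:

#include <vdpau/vdpau.h>
#include <libavcodec/vdpau.h>

/* Fetched once at VO init time via the vdp_get_proc_address entry
   point. */
static VdpDecoderRender *vdp_decoder_render;

static void draw_hw_image(VdpDecoder decoder,
                          struct vdpau_render_state *rndr)
{
    VdpStatus st;
    /* libavcodec has filled rndr with the picture parameters and the
       compressed bitstream buffers; the actual decoding happens only
       now, inside the VO. */
    st = vdp_decoder_render(decoder, rndr->surface,
                            (const void *)&rndr->info,
                            rndr->bitstream_buffers_used,
                            rndr->bitstream_buffers);
    if (st != VDP_STATUS_OK) {
        /* error handling elided */
    }
}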

Note that the above implies that normal video filters will not work
when using hardware decoding, as the image travels through the filter
chain from codec to VO in encoded form. It would be possible to
implement hardware-accelerated decoding as a separate step before the
VO, but no such implementation exists yet, and it would be more
complicated and likely slower: after hardware decoding the picture
typically resides in video RAM, so making it available to normal
filters would mean moving it to system RAM and then back.
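To illustrate the cost, here is a hypothetical sketch of the round
trip such a pre-VO filter stage would force, again in VDPAU terms (the
two surface calls are real VDPAU entry points; the helper and its
buffer handling are invented for the example):

#include <stdint.h>
#include <vdpau/vdpau.h>

/* Fetched via the vdp_get_proc_address entry point at init. */
static VdpVideoSurfaceGetBitsYCbCr *vdp_surface_get_bits;
static VdpVideoSurfacePutBitsYCbCr *vdp_surface_put_bits;

static VdpStatus filter_roundtrip(VdpVideoSurface surf,
                                  uint8_t *planes[3],
                                  uint32_t pitches[3])
{
    /* 1. Download: video RAM -> system RAM (the expensive part). */
    VdpStatus st = vdp_surface_get_bits(surf, VDP_YCBCR_FORMAT_YV12,
                                        (void *const *)planes, pitches);
    if (st != VDP_STATUS_OK)
        return st;

    /* 2. Run ordinary CPU filters on planes[] here. */

    /* 3. Upload the filtered result back for display. */
    return vdp_surface_put_bits(surf, VDP_YCBCR_FORMAT_YV12,
                                (const void *const *)planes, pitches);
}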

I added some features to improve buffering and timing for VDPAU; if
you're looking at the svn code, those won't be available there.

> 3. As far as I understand, this approach is used by VA API and VDPAU.
> Can I reuse their infrastructure as well?

There isn't that much infrastructure outside the VOs, and you can't use
the same VO code on Windows.

>  On this page I see VA API
> patches: http://www.splitted-desktop.com/~gbeauchesne/mplayer-vaapi/
> 
> Are they part of MPlayer already or not?

They're not. They'd need at least some changes; as they stand, they
change codec selection in a way that would make things work worse in
practice (the separate "-va" switch and the way it's implemented), and
there are some issues with the newly added VO (I haven't gone over it
in detail).

> 4. Can somebody who has already worked on VA API/VDPAU
> implementations give me some clue? How much work is involved in
> getting the whole chain to work? Are most of the puzzle pieces in
> place, or will I have to dig deeper into understanding how H.264 /
> VC-1 and so on work at a pure codec level?

That shouldn't be needed unless DxVA does something weird to require it.



