[MPlayer-dev-eng] [PATCH] vda support for mplayer

Reimar Döffinger Reimar.Doeffinger at gmx.de
Sat Aug 18 11:48:31 CEST 2012


Hello,
I might be missing/misunderstanding some things, but I want
to avoid a solution that makes things unnecessarily complex for
everyone.

On Fri, Aug 17, 2012 at 03:45:32PM +0800, Xidorn Quan wrote:
> On Fri, Aug 17, 2012 at 2:22 PM, Reimar Döffinger
> <Reimar.Doeffinger at gmx.de>wrote:
> 
> > Why can't FFmpeg then just provide the data as a completely normal image?
> > Or more specifically, why is the code that is in vd_ffmpeg not in FFmpeg
> > instead?
> >
> 
> FFmpeg just passes the data from VDA directly to the player.
> All hardware accelerations implemented as HWAccel in FFmpeg
> work this way.

None of them directly provide decoded data though, and none of them
decode into system memory; they all decode into video memory only.
(Note there is one exception, CrystalHD, and that one does not use the
HWACCEL framework, though also for other reasons.)

> To put this code in FFmpeg, we would need to implement another
> decoder in FFmpeg, and that might cause additional data copying.

Since you already get the data in a proper YUV format in system memory,
I can't see why it would ever cause an additional copy.
(Note: you would _not_ support DR1 in that FFmpeg code; no user-provided
buffers, only direct export of the VDA buffers.)

> In fact I did implement the VDA decoder directly in FFmpeg several
> weeks ago, but the FFmpeg developers thought that FFmpeg already
> provided a proper interface for players to use, and that implementing
> a decoder independently caused code bloat. I think they might be
> right, because there are many other things to do when decoding H.264
> video besides decoding frames, which is all that VDA provides.

I didn't follow the discussion, so I don't know for sure.
It might be that they missed how VDA differs from the other supported
hardware accelerators.
Or maybe both things are true: the proper hwaccel infrastructure should
be used/supported, but there could/should still be a decoder that
decodes directly to raw frames.

> > Might be just too early in the morning for me to understand it, but at the
> > moment I don't see why adding that code should add a memcpy.
> > Also I'd think that -vo gl should not require a memcpy any more (and is a
> > whole lot faster and thus the preferred vo for me on OS X, though it
> > probably won't be when the raw data is in yuyv instead of yv12, I guess).
> >
> 
> Sorry but I'm not familiar with vo_gl. I found that vo_corevideo
> always copies data from the provided mp_image to a buffer created by
> itself.

That is just a bug / bad implementation. I have tried fixing it, but the
shared-buffer support makes it messy, and for me personally -vo gl works
so much better that I have a hard time caring about the corevideo vo any
longer.

> > Have a log file? I'd need an idea where it loops first.
> 
> I know why it loops. The initialization of VDA is currently in init_vo,
> so by the time it is called, it is too late to report the error, since
> the codec has already been initialized successfully. I think I should
> move the initialization code from init_vo to init in vd_ffmpeg and let
> it abort the initialization process if it fails.

Great that you figured it out; I suspected something like that. MPlayer
currently can't handle switching codecs after it has already decided on
one.

