[FFmpeg-devel] hwaccel infrastructure in libavcodec
Reimar Döffinger
Reimar.Doeffinger
Wed Mar 16 19:31:44 CET 2011
On Wed, Mar 16, 2011 at 04:40:30PM +0100, Gregor Riepl wrote:
> >> Well, for one, I don't think there is any reason why VDPAU deinterlacing
> >> should not be offered by ffmpeg.
> >
> > I do not think this would work (the way you seem to expect it works):
>
> Looks like I'm being too naive here.
> That's why I'm asking for help from all experienced developers here,
> after all. :)
> Can you elaborate where the problem lies in my suggestion?
If I remember right, deinterlacing in VDPAU is not a decoder option,
but something you apply separately after decoding.
This means that you should actually be able to write a libavfilter
that does it. The naive and currently supported approach would, however,
mean uploading the interlaced frame and downloading the deinterlaced frame
again. Particularly if you do not leave additional time for processing
but instead wait for the deinterlaced frame right away, performance is
going to be horrible.
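As a rough illustration, the naive flow would look something like the
sketch below. This is conceptual only: the gpu_* helpers are hypothetical
placeholders standing in for the VDPAU surface upload, VdpVideoMixerRender
and surface read-back calls, not existing FFmpeg or VDPAU entry points.

#include <libavcodec/avcodec.h>

typedef struct GpuSurface GpuSurface;          /* opaque GPU-side handle */

/* hypothetical helpers, see above */
int gpu_upload(const AVFrame *src, GpuSurface **dst);
int gpu_deinterlace(const GpuSurface *src, GpuSurface **dst);
int gpu_download(const GpuSurface *src, AVFrame *dst);      /* blocks */

static int deinterlace_frame_naive(const AVFrame *in, AVFrame *out)
{
    GpuSurface *s_in, *s_out;
    int ret;

    if ((ret = gpu_upload(in, &s_in)) < 0)        /* RAM -> video memory */
        return ret;
    if ((ret = gpu_deinterlace(s_in, &s_out)) < 0)
        return ret;
    /* Waiting right here for the result and copying it straight back to
     * system RAM is the part that makes this approach so slow. */
    return gpu_download(s_out, out);              /* video memory -> RAM */
}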
So a good implementation would need:
1) a special AVFrame type where the actual data resides in video memory;
2) possibly auto-inserted filters that convert to and from this special
AVFrame (i.e. upload to/download from GPU memory);
3) "asynchronous" filters where frames are pumped in as fast as possible and
frames come out as fast as they are processed, with a limit on the queue size.
As with codecs, I expect this last one is a bit of an issue, since we currently
have no way to say "I can't accept this data, but please try again (and possibly
accept this output frame)".
> > Of course, "playing" 1080 at 30 works fine with Ion, but did you test *decoding*?
> > (I didn't, so it may well work fine.)
>
> Ok, it looks like I used the wrong wording... It may very well be that
> transferring decoded frames from video memory back to RAM would kill all
> the performance benefits.
To a degree, yes. Also because that transfer causes cache issues.
There is also the issue that hardware decoders tend not to speed up with
simple input as much as software decoders do, so hardware decoding might
e.g. be 2x faster than CPU decoding for H.264 level 5 CABAC but 5x slower
for low-bitrate H.264 CAVLC.
> But is hwaccel really designed to do decoding, instead of playback? The
> device specific (virtual) pixfmts exist exactly for this reason, I believe.
Designed or not, I don't think there is a complete implementation that
actually uses it that way yet?
So I'd expect it not to quite work anyway.
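For reference, the usual way an application opts into a hwaccel is via the
get_format() callback: the decoder offers a list of formats that includes the
device-specific virtual pixfmt, and the application picks it. A minimal
sketch (2011-era API; exact enum names depend on the libavcodec version):

#include <libavcodec/avcodec.h>

/* Pick the VDPAU "virtual" pixfmt if the decoder offers it, otherwise fall
 * back to the first format in the list.
 * Installed with: avctx->get_format = choose_pixfmt; */
static enum PixelFormat choose_pixfmt(AVCodecContext *avctx,
                                      const enum PixelFormat *fmt)
{
    int i;
    for (i = 0; fmt[i] != PIX_FMT_NONE; i++)
        if (fmt[i] == PIX_FMT_VDPAU_H264)
            return fmt[i];
    return fmt[0];
}

But as far as I remember the application still has to provide the surfaces
itself and call VdpDecoderRender on the bitstream buffers libavcodec collects
in the render state, so there is no self-contained "decode straight to RAM"
path you could benchmark in isolation.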