[FFmpeg-devel] [RFC] 10-bit H.264 decoding support

Jason Garrett-Glaser darkshikari
Sun Oct 31 19:06:29 CET 2010


Over at x264, we're about to commit a few thousand lines of asm for
x264's 10-bit encoding, which will make it about 4.4 times faster
(fast enough to be usable).

10-bit (en|de)coding gives the following advantages:

1.  ~15% better compression *of typical 8-bit content*.  In terms of
"where this comes from", this gain is split roughly evenly (at typical
bitrates) between:
a.  Higher precision output (in reality, this would be dithered down,
but you'd still get the visual benefit).
b.  Higher precision intermediate values in (en|de)coding a frame.
c.  Higher precision reference frames.

2.  Almost complete elimination of banding artifacts.  See:
http://kuukunen.net/misc/banddenoise_8bit.png vs
http://kuukunen.net/misc/banddenoise_10bit.png

And that's without a proper 10-bit dithering function (a toy sketch of
dithering down to 8 bits follows this list).

3.  AVC Intra support (a rather nice thing to be able to put on the
features list).
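
For the curious, here's a toy illustration of what "dithered down"
means in practice.  This is purely illustrative (the function name and
the 2x2 ordered-dither matrix are my own, not x264's or lavc's actual
dithering): plain rounding of a 10-bit value v would be (v + 2) >> 2,
and swapping the constant for a position-dependent offset covering the
two dropped bits is the simplest form of ordered dither:

#include <stdint.h>

/* Convert one row of 10-bit samples to 8-bit with a 2x2 ordered
 * dither.  The offsets 0..3 span the two bits being dropped, so
 * truncation doesn't reintroduce banding. */
static void dither_row_10to8(uint8_t *dst, const uint16_t *src,
                             int width, int y)
{
    static const uint8_t d[2][2] = { { 0, 2 }, { 3, 1 } };
    for (int x = 0; x < width; x++) {
        unsigned v = src[x] + d[y & 1][x & 1];  /* at most 1023 + 3 */
        dst[x] = v > 1023 ? 255 : v >> 2;       /* clip the overflow */
    }
}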

Now we need libavcodec to support this; having only the JM reference
decoder is not very useful!  Here are some possible options:

1.  Localize all pixel modifications to functions accessed through
function pointers.  Add a "pixel size" integer to the context, either
1 or 2, used wherever pixel addresses are calculated (partitions).
Make "linesize" refer to the number of bytes, not the number of pixels
(I think this is already true).  Then template every single pixel
function accordingly, and load the right set at runtime based on the
bit depth (a rough sketch of this follows the list).

2.  Just template the whole decoder.  Slightly faster, but a far
larger binary and potentially a big symbol hassle.

3.  Something else?
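
To make option 1 concrete, here's a rough mock-up of the sort of thing
I mean.  All file and symbol names below (PixelDSP, pixeldsp_init,
etc.) are made up for the sake of the example; the point is the
template-included-twice-plus-runtime-init pattern:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* pixeldsp.h: depth-independent pointer table.  Prototypes take
 * uint8_t* regardless of depth; linesize is always in BYTES. */
typedef struct PixelDSP {
    void (*put_pixels8)(uint8_t *dst, const uint8_t *src,
                        ptrdiff_t linesize, int h);
    /* ...one pointer per pixel-touching function... */
} PixelDSP;

/* pixeldsp_template.c: compiled once per bit depth via #include */
#undef pixel
#undef FUNC
#if BIT_DEPTH > 8
#   define pixel      uint16_t
#   define FUNC(name) name ## _10
#else
#   define pixel      uint8_t
#   define FUNC(name) name ## _8
#endif

static void FUNC(put_pixels8)(uint8_t *dst_, const uint8_t *src_,
                              ptrdiff_t linesize, int h)
{
    pixel       *dst = (pixel *)dst_;
    const pixel *src = (const pixel *)src_;
    for (; h > 0; h--) {
        memcpy(dst, src, 8 * sizeof(pixel));
        /* byte-based linesize makes row stepping depth-agnostic */
        dst += linesize / (ptrdiff_t)sizeof(pixel);
        src += linesize / (ptrdiff_t)sizeof(pixel);
    }
}

/* pixeldsp.c: instantiate both depths, pick one at runtime */
#define BIT_DEPTH 8
#include "pixeldsp_template.c"
#undef  BIT_DEPTH
#define BIT_DEPTH 10
#include "pixeldsp_template.c"

void pixeldsp_init(PixelDSP *dsp, int bit_depth, int *pixel_size)
{
    if (bit_depth > 8) {
        dsp->put_pixels8 = put_pixels8_10;
        *pixel_size      = 2;  /* for manual pixel address arithmetic */
    } else {
        dsp->put_pixels8 = put_pixels8_8;
        *pixel_size      = 1;
    }
}

The decoder core then never touches pixels directly; it computes
addresses with pixel_size and byte-based linesize and calls through
the table, so only the template gets compiled twice.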

Thoughts, comments?  Bikesheds not welcome.

Dark Shikari


