[Ffmpeg-devel] Parallelizing the h.264 Decoder
Mon Nov 13 02:35:34 CET 2006
I'm currently trying to port the h.264 decoder to multicore
systems, with special emphasis on the Cell processor.
Since the Cell's cores are limited in local memory, I want to parallelize
the decoding of a single frame, not just decode multiple frames at once.
To make the port easier I decided to go with a fixed distribution of the
decoding steps to the different cores, rather than with a dynamic model.
Therefore I would have one or more cores do the CABAC decoding and one or
more cores do the transformation, quantization and filtering.
As I understand h264.c, the best place to do the partitioning would be
in the decode_slice function, where decode_mb_cabac and
hl_decode_mb are called.
At the moment, however, those functions are called directly after each
other for each macroblock. This would be suitable for a pipelined
approach, but I'm afraid that one core won't suffice for the CABAC decoding.
My question is:
With the current sources, would it be possible to first decode the CABAC
for all macroblocks (preferably split over a few cores) and then do the
rest of the decoding process?
I realize that this will require some changes, but I'm not certain how
big they are.