[FFmpeg-devel] a64 encoder
Sat Jan 17 16:52:00 CET 2009
> reducing the colors beforehand seems quite suboptimal; i'd rather convert each
> full-color "char" with dithering to what is available afterwards, or even
> during the elbg code.
I have already tried converting to a charset beforehand some time ago;
it can lead to other problems, depending on the area over which I
calculate the error. If I do a pixelwise compare, I have to stick to the
dither patterns or the error goes up.
Imagine the following two chars: visually they look the same, but
compared pixelwise they differ quite a lot.
If I compare on my meta basis, or in RGB or whatsoever, the algorithm
finds that both chars are identical.
That is why I decided to first find the best blocks and then render
the (dithered) charset. If I really happen to get two identical chars in
the end, it doesn't matter too much, since all the other replacements up
to that point don't seem to introduce much error. Practically, from what
I have seen of both my attempts, I'd say I can't see any difference
between the two methods :-) In that case I should be happy if I can use
the elbg code as is; furthermore, it is enough doing so only on the
But well, stay tuned until I've tested and got first results (speed, and
a picture to see), and we'll see :-)