[FFmpeg-devel] donation for snow

Jason Garrett-Glaser darkshikari
Fri Nov 21 04:47:15 CET 2008


On Thu, Nov 20, 2008 at 5:38 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
> On Thu, Nov 20, 2008 at 04:23:45PM -0800, Jason Garrett-Glaser wrote:
>> > Anyway, I am not trying to defend j2k or wavelets, but you are hand-picking
>> > images and parameters to make j2k look like crap. This surely does confirm
>> > that j2k can be very significantly worse than jpeg, but it says little about
>> > the average behavior, which may or may not be better than h264, though I am
>> > pretty sure jpeg2k will perform vastly better than its predecessor jpeg on
>> > the average natural image.
>>
>> I would have tweaked JPEG2K much harder than I did, but the software
>> didn't allow any significant changes to parameters.  I'm still looking
>> for a good JPEG2K encoder that is both good quality-wise and as
>> versatile as x264, so that it can be appropriately tweaked.
>>
>
>> As you hinted, I also strongly suspect that x264 is much better at
>> encoding H.264 than that JPEG2K encoder is at encoding JPEG2K, so it's
>> still not an entirely fair test.  But it's hard when there don't seem
>> to be any good ones... and before you say "write one," if I were going
>> to write one, I would write a better image format ;)
>
> yes, I of course agree with you on all these points ...
>
> one thing that could be done would be post-processing with overcomplete
> wavelets or some cyclically shifted wavelet; mplayer contains a filter for
> that. Similarly, -vf spp=XY should help jpeg. It also might be interesting
> to try the "wrong" pp, like spp for j2k or h264 ...
> not that I think pp would change the difference between j2k & jpeg in this
> comparison in j2k's favor ...
>
> and last, I am curious how intra-only snow does against j2k, though I don't
> expect it to do much better for the kind of test image you used.

One general problem is that I think any truly good intra coding
method needs to be capable of being completely spatially localized;
that is, small features need to be coded locally only.  Wavelets have
the problem that they're forced to code small features at a large
scale, which is why they do particularly badly on my test image.  In
particular, I can imagine the following situation, drawing on x264's
AQ logic:

1.  The edge of a sharp, high-detail area needs a high quantizer.
2.  The flat, subtle texture of a low-detail area needs a low quantizer.

In H.264, as long as these are in separate macroblocks, you can use
separate quantizers for them.  However, in a wavelet format, it might
be difficult to do that, since a single frequency coefficient can
cover both areas, and you can't quantize one half of a coefficient
with one quantizer and the other half with another...
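
To make that concrete, here is a rough sketch in the spirit of x264's
variance-based AQ (the constants and the helper name mb_qp_offset are
made up for illustration, not taken from x264's actual code): a
block-based coder can derive a quantizer offset for each 16x16
macroblock from purely local statistics.

#include <math.h>
#include <stdint.h>

/* Illustrative only: derive a per-macroblock QP offset from the local
 * pixel variance of a 16x16 block, so that flat areas get a finer
 * quantizer and busy areas a coarser one.  The strength and offset
 * constants are arbitrary. */
static int mb_qp_offset(const uint8_t *plane, int stride, int mb_x, int mb_y)
{
    const uint8_t *p = plane + 16 * mb_y * stride + 16 * mb_x;
    int64_t sum = 0, sqsum = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++) {
            int v = p[y * stride + x];
            sum   += v;
            sqsum += v * v;
        }
    double mean = sum / 256.0;
    double var  = sqsum / 256.0 - mean * mean;
    /* Low variance -> negative offset (finer quantization),
     * high variance -> positive offset (coarser quantization). */
    return (int)lrint(1.0 * (log2(var + 1.0) - 7.0));
}

Each macroblock's quantizer depends only on its own 256 pixels; a
wavelet coefficient at a coarse scale, by contrast, has support
covering many such blocks at once, so no single quantizer choice for
it can be right for both the sharp edge and the flat texture.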

So any truly good format would need the ability to be as spatially
localized as possible when it is optimal to be so.

(In-frame motion vectors would be useful as well...)
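
A rough sketch of what the decoder side of that could look like
(entirely hypothetical, not an existing FFmpeg or x264 API): the
predictor for a block is copied from already-decoded samples of the
same frame at an offset given by an in-frame vector, and only the
residual then needs to be coded.

#include <stdint.h>

/* Hypothetical intra "block copy": predict a bw x bh block at (x, y)
 * from already-reconstructed samples of the same frame, displaced by
 * (mvx, mvy).  The caller must ensure the source region has already
 * been decoded, e.g. it lies above or to the left in raster order. */
static void intra_copy_pred(uint8_t *frame, int stride,
                            int x, int y, int bw, int bh,
                            int mvx, int mvy)
{
    uint8_t       *dst = frame +  y        * stride +  x;
    const uint8_t *src = frame + (y + mvy) * stride + (x + mvx);
    for (int j = 0; j < bh; j++)
        for (int i = 0; i < bw; i++)
            dst[j * stride + i] = src[j * stride + i];
}

This would let repeated features such as text or regular patterns be
coded once and then referenced, much like inter motion compensation
but within a single frame.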

Dark Shikari



