# [FFmpeg-devel] inverse weighting tables for DV seem to be wrong

Roman Shaposhnik rvs
Sat Feb 28 06:02:42 CET 2009

```
On Feb 27, 2009, at 8:10 PM, Michael Niedermayer wrote:
>> True. But as far as weighting tables are concerned, the only thing
>> that quantization really does is change the probability distribution
>> of the levels, right? In the sense that the probability of some
>> levels goes to 0 while that of others increases, because that's
>> where the missing ones map now.
>>
>
>> To restate -- the goal of designing optimal weighting/unweighting
>> tables seems to be to minimize the error on the most probable
>> levels after the quantization.
>
> no
>
> let me give you a hypothetical example
> you can store values 0-127 in 8 bits
> value 128 will need 10 bit

I could sort of see how 0-127 would require 8 bits
(if there's a need to store a sign bit), but why would 128
require 10? Anyway, I don't think this invalidates your
later point, so it doesn't really matter.

> you have 100 values to store, all are 128, and you have 800 bits of
> space
> you can store 100 127-values in 800 bits, giving you a distortion of
> 1*100 = 100
> if you store 128-values until you run out of the bit budget, you store
> 80 128-values, the remaining 20 will be 0, and the distortion will be
> 100*100*20 = 200000
> your result is worse by a factor of 2000 in terms of sum of squared
> errors

Got it! But, bear with me at least one more time: the decision to
store 127 instead of 128 is really a quantization decision. To
build on your hypothetical example: you might have a weighting
table that divides everything by 2, so that you start with
a set containing a hundred values of 256. You then apply the
weighting and get your set of 128s (and then the rest of your
example applies verbatim).
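To make sure I follow, here is that example spelled out in throwaway
Python (all numbers come from the hypothetical, not from any real DV
table; I use the exact 128^2 where your mail rounds to 100^2, so the
second distortion comes out larger, but the conclusion is identical):

# a hundred source coefficients of 256, and a weight-table entry of 2
source = [256] * 100
weighted = [v // 2 for v in source]             # -> a hundred 128s
budget_bits = 800

# option A: clamp every 128 to 127 so each value fits in 8 bits
dist_a = sum((v - 127) ** 2 for v in weighted)  # 1 * 100

# option B: store 128s at 10 bits each until the budget runs out,
# the remaining values become 0
stored = budget_bits // 10                      # 80 values
dist_b = sum(v ** 2 for v in weighted[stored:]) # 128^2 * 20

print(dist_a, dist_b)                           # 100 327680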

Thus it seems that the only help that a fine-tuned weighting
matrix can provide is that *some* of the values could get
closer to 0 (and thus require less quantization). But other
than that -- quantization decisions will NOT be affected.
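What I mean by "some values get closer to 0": with a finer table the
high-frequency entries shrink or vanish, but the kind of decision in
your example stays untouched. (Both the coefficients and the tables
below are made up for illustration.)

coeffs = [256, 40, 12, 6]   # made-up coefficient magnitudes
coarse = [2, 2, 2, 2]       # divide everything by 2
fine   = [2, 4, 8, 16]      # squeeze the tail harder toward 0

w_coarse = [c // q for c, q in zip(coeffs, coarse)]  # [128, 20, 6, 3]
w_fine   = [c // q for c, q in zip(coeffs, fine)]    # [128, 10, 1, 0]

# the tail moved toward 0 (one value vanished outright), but whether
# to spend the extra bits on the leading 128 or clamp it to 127 is
# still a quantization decision, unaffected by the table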

> also i would suggest that you read some paper (any paper actually)
> about rate distortion and/or quantization.

I would love to. Any particular suggestions? Google turns up
mostly specialized works, the kind that doesn't provide much
background.

> repeating this in a blog post would be pointless, there's plenty of
> existing literature (note though wikipedia is NOT)

Again, any pointers would be greatly appreciated!

>>> and instead of tuning tables, near optimal quantization is harder
>>> but possible too, and this will lead to significant gains (it does
>>> for other codecs ...)
>>> to do it, the most obvious way would be to first apply the common
>>> RD trellis quantization to a group of 5mbs
>>> (there is IIRC no bit sharing possible across these 5mb groups)
>>
>> There is. In DV all 5mbs share the common bit-space of a single DIF
>> block.
>
> i think you misunderstood what i said
> what i remember about dv is that each block had its own space, when
> that overflowed there was space for each MB, and when that overflowed
> there was space for each 5mb group? but there is no space for 10mb,
> 100mb or such groups
> is my understanding correct?

It is. Now I see what you meant.
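So the spill order is block -> MB -> 5mb group, and nothing past the
group. A toy model of that (the capacities and the 6-blocks-per-MB
grouping here are illustrative numbers, not the real DIF layout):

def fill_segment(block_bits, block_cap, mb_free, seg_free, mb_size=6):
    """Place each block's bits: first in the block's own area, then
    overflow into its macroblock's leftover space, then into the
    5-MB segment's leftover space.  Bits that still don't fit are
    dropped -- there is no sharing beyond the segment."""
    n_mbs = len(block_bits) // mb_size
    mb_left = [mb_free] * n_mbs
    seg_left = seg_free
    dropped = 0
    for i, want in enumerate(block_bits):
        over = max(0, want - block_cap)   # level 1: the block's own area
        mb = i // mb_size
        take = min(over, mb_left[mb])     # level 2: the MB's leftover
        mb_left[mb] -= take
        over -= take
        take = min(over, seg_left)        # level 3: the segment's leftover
        seg_left -= take
        over -= take
        dropped += over                   # nowhere left to go
    return dropped

# 5 MBs x 6 blocks, each block 10 bits over its own area:
# fill_segment([110] * 30, 100, 10, 0)   -> 250 bits dropped
# fill_segment([110] * 30, 100, 10, 250) -> 0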

Thanks,
Roman.

```