[FFmpeg-devel] [PATCH] FFV1 improvements

Jason Garrett-Glaser darkshikari
Sun Oct 24 21:36:04 CEST 2010


On Sun, Oct 24, 2010 at 12:32 PM, Jason Garrett-Glaser
<darkshikari at gmail.com> wrote:
> On Sun, Oct 24, 2010 at 12:28 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
>> On Mon, Oct 11, 2010 at 01:25:35PM +0200, Michael Niedermayer wrote:
>>> On Mon, Oct 11, 2010 at 09:59:16AM +0000, Loren Merritt wrote:
>>> > On Sun, 10 Oct 2010, Jason Garrett-Glaser wrote:
>>> >
>>> >> On Sun, Oct 10, 2010 at 2:05 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
>>> >>> On Sun, Oct 10, 2010 at 01:19:57PM -0700, Jason Garrett-Glaser wrote:
>>> >>>> On Sun, Oct 10, 2010 at 12:25 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
>>> >>>>> Hi
>>> >>>>>
>>> >>>>> Will apply patch-set below soon.
>>> >>>>> bikesheds will be politely ignored
>>> >>>>
>>> >>>> Looks good;
>>> >>>
>>> >>> committed
>>> >>>
>>> >>>
>>> >>>> what are the plans for version 2?
>>> >>>
>>> >>> the immediate plan for ffv1.2 is to add rectangular slice-based
>>> >>> multithreading; this should make the code much faster without extra delay
>>> >>
>>> >> Another thing you might want to try (since you're changing things...)
>>> >> is Loren's median5 predictor; testing in FFV2 showed that it beat the
>>> >> standard median predictor by a few % in terms of order-0 entropy. I
>>> >> don't have it, but I assume he can post it here.
>>> >
>>> > median5(l, tr, l+t-tl, l*2-ll, t*2-tt)
>>> > also, median3(l, tr, l+t-tl) is better than median3(l, t, l+t-tl)
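
(For reference, a C sketch of that median-of-5; FFSWAP() and mid_pred()
are the existing FFmpeg helpers, everything else here is illustrative.
After partially sorting the first four arguments, their overall min and
max cannot be the median of five, so the answer is the median of the two
middle values and the fifth argument:

    static inline int median5(int a, int b, int c, int d, int e)
    {
        if (a > b) FFSWAP(int, a, b);
        if (c > d) FFSWAP(int, c, d);
        if (a > c) FFSWAP(int, a, c);   /* a = min of the four */
        if (b > d) FFSWAP(int, b, d);   /* d = max of the four */
        return mid_pred(b, c, e);       /* median of what remains */
    }

with the predictor then being median5(l, tr, l + t - tl, 2*l - ll, 2*t - tt).)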
>>>
>>> Thanks, I'll look into trying/adding them to ffv1
>>
>> Tried on foreman:
>> 18726558 pred_lor_median3.avi
>> 18655092 pred_old_median.avi
>> 18649580 pred_lor_median5.avi
>> 18609236 pred_T_median5.avi
>>
>> Patches for the test attached.
>> I am somewhat uncertain whether the added complexity is worth the 0.25% gain;
>> maybe we can find a better predictor where the gain is bigger.
>
> How about LPC-based prediction?
>
> Also, here's an interesting idea for you -- do the prediction in
> higher precision, then weight probabilities accordingly. Example:
> Suppose you have a complex prediction function that returns "15.7".
> Well, you can't compress a residual of "0.3", but you *do* know that
> 16 is far more likely than 15. So there must be a way to "normalize"
> the probability distribution to take this into account.
>
> This will likely make fancier prediction functions much more useful.
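
(A toy illustration of the idea, assuming a Q3 fixed-point predictor;
predict_q3() and the neighbour names are made up for the example, not
anything in ffv1.c:

    int pred_q3  = predict_q3(l, t, tl);   /* hypothetical; ~15.7 stored as 126 */
    int pred_int = (pred_q3 + 4) >> 3;     /* nearest integer: 16 */
    int frac     = pred_q3 & 7;            /* position between 15 and 16 */

The residual is still coded against pred_int, but frac = 6 out of 8 says
the true value is much more likely to land on the 16 side than on the 15
side, so the residual's sign/magnitude model can be biased accordingly.)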
>
> Dark Shikari
>

A really trivial way to do this (but not necessarily the best) would
be to just double the contexts for every extra bit of precision, i.e.
make the context [index][low bits of prediction].
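
(A minimal sketch of that layout, continuing the Q3 example above; the
state declaration and the put_symbol() call are from memory rather than
lifted from ffv1.c, with one extra bit of predictor precision so each
existing context index gets two probability models instead of one:

    #define FRAC_BITS 1

    uint8_t (*state)[1 << FRAC_BITS][CONTEXT_SIZE]; /* [index][low bits] */

    int frac = pred_fp & ((1 << FRAC_BITS) - 1);    /* low bits of prediction */
    put_symbol(c, state[context][frac], sample - pred_int, 1);

The table grows by a factor of 2 per fractional bit, so the contexts take
longer to adapt; that is the cost to weigh against the better prediction.)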

Dark Shikari
