[MPlayer-dev-eng] design for motion-adaptive deinterlacer

Michael Niedermayer michaelni at gmx.at
Sun Apr 25 01:47:30 CEST 2004


Hi

On Saturday 24 April 2004 22:09, D Richard Felker III wrote:
> I've been thinking about writing a good motion-adaptive deinterlacer,
> since they all seem to suck. So I'm sending a few ideas to the list
> for feedback (or in case someone wants to implement it since I'm
> lazy... :)
I've also thought about a similar filter a long time ago, but so far I've
written 0 lines of code for it :)

>
> Basic procedure is:
>
> [Note: all operations are applied to the older field]
>
> 1. Identify areas of motion.
>
> Using a simple difference threshold pixel-by-pixel is no good. If the
> threshold is too low you'll get tons of motion from temporal noise,
> while if it's too high, you'll miss slow changes in the low-frequency
> components, which cause very ugly, noticeable combing (think of a
> gradual change in light level).
>
> So the search for motion needs to be done in a small windowed
> frequency space, with low-frequency coefficients weighted much higher
> than high-frequency ones.
>
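A minimal sketch of what such a test could look like (my own illustration,
not code from this thread): collapse the windowed frequency space to just
two bands, using the windowed mean difference as the low-frequency term and
the residual energy as the high-frequency term. The function name, window
size and weights are made-up tuning values.

#include <stdlib.h>

#define MWIN      8  /* assumed window size */
#define DC_WEIGHT 8  /* low frequencies count much more ... */
#define AC_WEIGHT 1  /* ... than high frequencies (mostly noise) */

/* a and b point at the same window position in the old and new field;
 * a score above some threshold marks the window as moving */
static int motion_score(const unsigned char *a, const unsigned char *b,
                        int stride)
{
    int x, y, dc = 0, ac = 0, mean;

    for (y = 0; y < MWIN; y++)
        for (x = 0; x < MWIN; x++)
            dc += a[y*stride + x] - b[y*stride + x];
    mean = dc / (MWIN*MWIN);      /* DC coefficient of the difference */

    for (y = 0; y < MWIN; y++)
        for (x = 0; x < MWIN; x++)
            ac += abs(a[y*stride + x] - b[y*stride + x] - mean);

    /* slow low-frequency changes still score high, per-pixel noise not */
    return DC_WEIGHT*abs(mean) + AC_WEIGHT*ac/(MWIN*MWIN);
}

A real implementation would use a small DCT or Haar transform and weight
each coefficient individually; the two-band version only shows the idea.
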
> 2. Smooth the motion map.
>
> Tiny components should be eliminated as false positives. The remaining
> components should be expanded at their boundaries in case we missed
> some pixels.
>
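A sketch of one way to implement this (my own construction, assuming a
binary per-pixel map): one erosion pass removes tiny components, two
dilation passes then expand the survivors past their original boundaries.

#include <string.h>

/* one 4-neighbour morphological pass: op==0 erode, op==1 dilate */
static void morph(const unsigned char *src, unsigned char *dst,
                  int w, int h, int op)
{
    int x, y;
    memset(dst, 0, w*h);
    for (y = 1; y < h-1; y++)
        for (x = 1; x < w-1; x++) {
            int c = src[y*w+x],     l = src[y*w+x-1], r = src[y*w+x+1];
            int u = src[(y-1)*w+x], d = src[(y+1)*w+x];
            dst[y*w+x] = op ? (c || l || r || u || d)
                            : (c && l && r && u && d);
        }
}

static void smooth_map(unsigned char *map, unsigned char *tmp, int w, int h)
{
    morph(map, tmp, w, h, 0);   /* erode: kill tiny false positives  */
    morph(tmp, map, w, h, 1);   /* dilate: restore surviving regions */
    morph(map, tmp, w, h, 1);   /* dilate again: expand the boundary */
    memcpy(map, tmp, w*h);
}
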
> 3. Within the motion map, look for combing.
>
> All local extrema in the vertical direction are potential points of
> combing.
>
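For reference, the per-pixel extremum test can be as small as this; the
threshold value is an assumption of mine (without one, noise would make
nearly every pixel an "extremum"):

#define COMB_THRESH 12  /* assumed noise threshold */

/* p points at a pixel with valid lines above and below; it is a potential
 * combing point if both vertical neighbours differ from it in the same
 * direction by more than the threshold */
static int is_comb_candidate(const unsigned char *p, int stride)
{
    int up   = p[-stride] - p[0];
    int down = p[ stride] - p[0];
    return (up >  COMB_THRESH && down >  COMB_THRESH) ||
           (up < -COMB_THRESH && down < -COMB_THRESH);
}
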
> 4. Smooth the combing map.
>
> Same procedure as for the motion map.
>
> 5. Identify pairs of similar size/shape components in the combing map.
>
> If we find such components, generate a conformal map from one to the
> other, mark them as a common region, and use the map between them to
> initialize a map of motion transformations.
>
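The mail leaves the pairing heuristic open; a naive sketch (all names and
the ~25% area tolerance are mine, and the component statistics are assumed
to come from a labelling pass not shown) pairs components of similar area
and takes the centroid offset as the initial motion guess. A real conformal
map would need shape matching too; this only recovers a translation.

#include <stdlib.h>

typedef struct {
    int area;        /* pixel count of the component      */
    double cx, cy;   /* centroid, from the labelling pass */
} Component;

/* pair[i] gets the index of i's partner or -1; returns the number of
 * pairs, with centroid offsets stored as initial motion guesses */
static int pair_components(const Component *c, int n, int *pair,
                           double *mvx, double *mvy)
{
    int i, j, npairs = 0;

    for (i = 0; i < n; i++)
        pair[i] = -1;
    for (i = 0; i < n; i++) {
        if (pair[i] >= 0)
            continue;
        for (j = i + 1; j < n; j++) {
            /* areas within ~25% of each other -> assume a moved copy */
            if (pair[j] < 0 &&
                4*abs(c[i].area - c[j].area) < c[i].area + c[j].area) {
                pair[i] = j;
                pair[j] = i;
                mvx[npairs] = c[j].cx - c[i].cx;
                mvy[npairs] = c[j].cy - c[i].cy;
                npairs++;
                break;
            }
        }
    }
    return npairs;
}
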
> 6. Perform per-pixel motion estimation.
>
> Comparison function should use a small window with smooth falloff.
> Test within a neighborhood of the region in the combing map. Use
> guesses from step 5 as a starting point, if present. Optionally also
> use motion vectors from the decoding phase as a guide, if they are
> available.
>
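A sketch of the windowed comparison with smooth falloff (the window size
and the binomial weights are my choice; lower score means better match):

/* a and b point at the centre pixels of the two windows being compared */
static const int falloff[5] = { 1, 4, 6, 4, 1 };  /* ~Gaussian weights */

static int match_score(const unsigned char *a, const unsigned char *b,
                       int stride)
{
    int x, y, score = 0;
    for (y = -2; y <= 2; y++)
        for (x = -2; x <= 2; x++) {
            int d = a[y*stride + x] - b[y*stride + x];
            score += falloff[y+2] * falloff[x+2] * (d < 0 ? -d : d);
        }
    return score;
}

The per-pixel search would evaluate this over the neighbourhood from the
combing map, seeded with the step-5 guesses (or decoder MVs when
available), and keep the minimum.
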
> 7. Apply motion compensation to bring the fields in line.
>
> Deform the older field so that it mostly matches with the newer field,
> using the motion vectors/transformations from step 6.
>
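Assuming step 6 produced a dense per-pixel vector field, the compensation
itself is a simple warp (nearest-neighbour fetch here for brevity; a real
filter would interpolate):

/* dst/src are fields, mvx/mvy are per-pixel vectors, both w*h */
static void compensate(unsigned char *dst, const unsigned char *src,
                       const signed char *mvx, const signed char *mvy,
                       int w, int h, int stride)
{
    int x, y;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            int sx = x + mvx[y*w + x];
            int sy = y + mvy[y*w + x];
            if (sx < 0) sx = 0; else if (sx > w-1) sx = w-1;
            if (sy < 0) sy = 0; else if (sy > h-1) sy = h-1;
            dst[y*stride + x] = src[sy*stride + sx];
        }
}
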
> 8. Perform a final combing test.
>
> If any regions of combing remain, blend them away.
>
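The blend can be a plain (1,2,1)/4 vertical lowpass applied only where
combing survives, e.g.:

/* p points at a pixel still marked as combed, away from frame borders */
static unsigned char blend_pixel(const unsigned char *p, int stride)
{
    return (p[-stride] + 2*p[0] + p[stride] + 2) >> 2;
}
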
>
>
> I've performed tests for steps 3 and 4, and the results look promising.
> The resulting data also suggested a procedure like step 5. I suspect the
> results would be much better with steps 1 and 2 to help us throw away
> false positives (by only checking for combing where there's motion).
>
> I also tested some motion estimation/compensation stuff, but I have no
> experience with that so it came out really bad... :)
Yes, ME is IMO the most difficult part; that's also why I never wrote such a
filter ...
One problem with finding the true motion, which is pretty much the goal here
and in similar filters, is that motion estimation tends to fail in areas
without enough detail, simply because there are many well-matching vectors
and the correct one is rarely the best match. Think of nearly constant
color areas or a simple edge: locally, an edge often looks quite similar at
many points.
One way to "solve" this is to define a score function which takes into
account not only the matching error but also the difference between
adjacent motion vectors, and then "just" find the global minimum ...
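In code, that score could look like this (lambda is an assumed tuning
constant trading match quality against vector-field smoothness; finding the
true global minimum of the summed score is the hard part):

#include <stdlib.h>

/* sad: matching error of candidate (mx,my);
 * left/top: already-decided neighbouring vectors */
static int mv_score(int sad, int mx, int my,
                    int left_mx, int left_my, int top_mx, int top_my,
                    int lambda)
{
    int smooth = abs(mx - left_mx) + abs(my - left_my) +
                 abs(mx - top_mx)  + abs(my - top_my);
    return sad + lambda*smooth;
}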

[...]
-- 
Michael
level[i]= get_vlc(); i+=get_vlc();		(violates patent EP0266049)
median(mv[y-1][x], mv[y][x-1], mv[y+1][x+1]);	(violates patent #5,905,535)
buf[i]= qp - buf[i-1];				(violates patent #?)
for more examples, see http://mplayerhq.hu/~michael/patent.html
stop it, see http://petition.eurolinux.org & http://petition.ffii.org/eubsa/en



