[FFmpeg-user] minterpolate performance & alternative

Mark Filipak (ffmpeg) markfilipak at bog.us
Fri Jan 29 00:21:56 EET 2021


On 01/28/2021 04:34 PM, Paul B Mahol wrote:
> On Thu, Jan 28, 2021 at 10:23 PM Mark Filipak (ffmpeg) <markfilipak at bog.us>
> wrote:
> 
>> Synopsis:
>>
>> I seek to use minterpolate to take advantage of its superior output.
>> I present some performance issues followed by an alternative
>> filter_complex. So, this presentation necessarily addresses 2 subjects.
>>
>> Problem:
>>
>> I'm currently transcoding a 2:43:05, 1920x1080, 24FPS progressive video
>> to 60FPS via the minterpolate filter. Apparently, the transcode will
>> take a little more than 3 days.
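>>
>> For reference, the command is of this general shape (filenames, codec,
>> and exact minterpolate options here are illustrative, not my exact
>> invocation):
>>
>>    ffmpeg -i in.mkv \
>>           -vf "minterpolate=fps=60:mi_mode=mci" \
>>           -c:v libx264 out.mkv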
>>
>> Hardware:
>>
>> There are 4 CPU cores (2 threads each) that run at 3.6 GHz. There is
>> also an NVIDIA GTX 980M GPU with 1536 CUDA cores and a driver that
>> implements the Optimus, CUDA-as-coprocessor architecture.
>>
>> Performance:
>>
>> During the transcode, ffmpeg is consuming only between 10% & 20% of the
>> CPU. It appears to be single-threaded, and it appears not to be using
>> Optimus at all.
>>
>> Is there a way to coax minterpolate to expand its hardware usage?
>>
>> Alternative filter_complex:
>>
>> minterpolate converts 24FPS to 60FPS by interpolating every frame via
>> motion vectors to produce a 60 picture/second stream in a 60FPS
>> transport. It does a truly amazing job, but without expanded hardware
>> usage, it takes too long to do it.
>>
>> A viable alternative is to 55 telecine the source (which simply
>> duplicates the n%5!=2 frames) while interpolating solely the n%5==2
>> frames. That should take much less time and would produce a 24
>> picture/second stream in a 60FPS transport -- totally acceptable.
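>>
>> The telecine half of that looks, I believe, like this (pattern=55 and
>> the filenames are my assumptions, untested as written):
>>
>>    ffmpeg -i in.mkv -vf "telecine=pattern=55" -c:v libx264 out.mkv
>>
>> i.e., in each 5-frame output group, frames 0 & 1 repeat frame A,
>> frames 3 & 4 repeat frame B, and frame 2 (n%5==2) mixes A's top field
>> with B's bottom field.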
>>
>> The problem is that motion vector interpolation requires that
>> minterpolate be 'split' out and run in parallel with the main path in
>> the filter_complex, so that the interpolated frames can be plucked out
>> (n%5==2) and interleaved at the end of the filter_complex. That doesn't
>> make much sense because it doesn't decrease processing (or processing
>> time), and if the fully motion-interpolated stream is produced anyway,
>> then output it directly instead of interleaving. What's needed is an
>> interpolation alternative to minterpolate.
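>>
>> For concreteness, the parallel arrangement I mean is something like
>> this (untested; the labels and the select/interleave details are only
>> a sketch of the structure, not a working command):
>>
>>    ffmpeg -i in.mkv -filter_complex \
>>      "[0:v]split[a][b]; \
>>       [a]telecine=pattern=55,select='not(eq(mod(n,5),2))'[dup]; \
>>       [b]minterpolate=fps=60:mi_mode=mci,select='eq(mod(n,5),2)'[mci]; \
>>       [dup][mci]interleave" \
>>      -c:v libx264 out.mkv
>>
>> which still runs minterpolate over every frame anyway, hence no saving.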
>>
>> Alternative Interpolation:
>>
>> 55 telecine with no interpolation or smoothing works well even though
>> the n%5==2 frames are combed, but decombing is desired. The problem is
>> that I can't find a deinterlace filter that does pixel interpolation
>> without reintroducing some telecine judder. The issue involves the
>> spatial alignment of the odd & even lines in the existing filters.
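>>
>> For example, one can follow the telecine with an ordinary deinterlacer,
>> something like this (bwdif is just one candidate; I'm also assuming the
>> telecine filter marks the mixed frames as interlaced so that
>> deint=interlaced leaves the duplicates alone):
>>
>>    ffmpeg -i in.mkv \
>>           -vf "telecine=pattern=55,bwdif=mode=send_frame:deint=interlaced" \
>>           -c:v libx264 out.mkv
>>
>> However, bwdif (like yadif) rebuilds each combed frame around one
>> field, which is exactly the alignment problem described next.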
>>
>> Some existing filters align the decombed lines with the input's top
>> field; some align the decombed lines with the input's bottom field.
>> What's desired is a filter that aligns the decombed lines with the
>> spatial mean. I suggest that the Sobel operator might be appropriate
>> for the decombing (or at least, that the Sobel can be employed to
>> visualize what's desired).
>>
>> Sobel of line y:   ______/\_____________/\_________ (edges)
>> Sobel of line y+1: __________/\_____________/\_____
>> Desired output:
>>          line y:   ________/\_____________/\_______ (aligned to mean)
>>          line y+1: ________/\_____________/\_______ (aligned to mean)
>> I could find this:
>>          line y:   ______/\_____________/\_________
>>          line y+1: ______/\_____________/\_________ (aligned to top line edges)
>> and I could find this:
>>          line y:   __________/\_____________/\_____ (aligned to bottom line edges)
>>          line y+1: __________/\_____________/\_____
>>
>
> Sorry, but I cannot decipher the above stuff. Can anybody else?

I assume you refer to the "Alternative Interpolation" section.

Suppose I explain it like this: Take any of the various edge-detecting deinterlacing filters and, 
for each line-pair (y & y+1), align both output lines (y & y+1) to the mean of the positions of the 
input's line(y).Y-edge & line(y+1).Y-edge. To do that, only single line-pairs are processed (nothing 
between line-pairs), and no motion vector interpolation is needed.
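
To make that concrete with the diagram above: if, say, a Y-edge falls at column 6 in line y and at 
column 10 in line y+1, both output lines would place that edge at column 8, halfway between the two 
field positions.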

