[FFmpeg-user] decomb versus deinterlace

pdr0 pdr0 at shaw.ca
Sun Apr 19 09:56:54 EEST 2020

Mark Filipak wrote
> I would love to use motion compensation but I can't, at least not with
> ffmpeg. Now, if there was 
> such a thing as smart telecine...
> A A A+B B B -- input
> A A     B B -- pass 4 frames directly to output
>    A A+B B   -- pass 3 frames to filter
>       X      -- motion compensated filter to output
> Unfortunately, ffmpeg can't do that because ffmpeg does not recurse the
> filter complex. That is, 
> ffmpeg can pass the 2nd & 4th frames to the output, or to the filter, but
> not both.

What kind of motion compensation did you have in mind? 

"recurse" works if your timestamps are correct.  Interleave works with
timestamps. If "X" has the timestamp of position 3 (A+B or the original
combed frame)
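For example, with split plus interleave in a single filter graph (an
untested sketch; the file names and the exact cadence offset are my
assumptions - here the combed frame is every 5th frame, zero-based index 2):

  ffmpeg -i in.mkv -filter_complex \
  "split[a][b]; \
   [a]select='not(eq(mod(n\,5)\,2))'[clean]; \
   [b]select='eq(mod(n\,5)\,2)',pp=linblenddeint[fixed]; \
   [clean][fixed]interleave" out.mkv

select drops frames but leaves the survivors' original timestamps intact, so
interleave can merge the two branches back into source order.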

>> Double rate deinterlacing keeps all the temporal information. Recall what
>> "interlace content" really means. It's 59.94 distinct moments in time
>> captured per second . In motion you have 59.94 different images.
> That may be true in general, but the result of 55-telecine is
> A A A+B B B ... repeat to end-of-stream
> So there aren't 59.94 distinct moments. There're only 24 distinct moments
> (the same as the input).

Exactly!

That statement refers to interlaced content. It does not refer to your
specific case. You don't have interlaced content. Do you see the
distinction? You have 23.976 distinct moments/sec, just "packaged"
differently with repeat fields. That's progressive content. Interlaced
content means 59.94 distinct moments per second.
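For reference, that repeat-field "packaging" is what ffmpeg's telecine
filter produces. If I'm reading the pattern option right (each digit is the
number of fields emitted per input frame), pattern=55 should turn each
source pair A,B into A A A+B B B - a sketch, with placeholder file names:

  ffmpeg -i p24.mkv -vf "telecine=pattern=55" t60.mkv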

>> Single rate deinterlacing drops 1/2 the temporal information (either
>> even,
>> or odd fields are retained)
>> single rate deinterlace: 29.97i interlaced content => 29.97p output
>> double rate deinterlace: 29.97i interlaced content => 59.94p output
> There is no 29.97i interlaced content. There's p24 content (the source)
> and p60 content that combs 
> frames 2 7 12 17 etc. (the transcode).

I thought it was obvious, but those comments do not refer to your specific
case. You don't have 29.97i interlaced content.

You apply deinterlacing to interlaced content. You don't deinterlace
progressive content (in general).

29.97i interlaced content is things like soap operas, sports, some types of
home video, and documentaries.

> Once again, this is a terminology problem. You are one person who
> acknowledges that terminology 
> problems exist, and that I find heartening. Here's how I have resolved it:
> I call a 30fps stream that's 23-telecined from p24, "t30" -- "t" for
> "telecine".
> I call a 30fps stream that's 1/fieldrate interlaced, "s30" -- "s" for
> "scan" (see Note).
> I call a 60fps stream that's 23-telecined, then frame doubled, "t30x2".
> I call a 60fps stream that's 55-telecined from p24, "t60".
> Note: I would have called this "i30", but "i30" is already taken.
> Now, the reason I write "p24" instead of "24p" -- I'm not the only person
> who does this -- is so it 
> fits an overall scheme that's compact, but that an ordinary human being
> can pretty much understand:
> 16:9-480p24 -- this is soft telecine
> 4:3-480t30 -- this is hard telecine
> 16:9-1080p24
> I'm not listing all the possible combinations of aspect ratio & line count
> & frame rate, but you 
> probably get the idea.

It's descriptive, but the problem is that not very many people use this
terminology. NLEs, professional programs, broadcast stations, and post
production houses do not use this notation. E.g. "480p24" anywhere else
would mean native progressive, such as web video. Some places use pN to
denote native progressive. So you're going to have problems with
communication... I would write out the full sentence.

> Erm... I'm not analyzing a mystery video. I'm transcoding from a known
> source. I know what the 
> content actually is.

I thought it was obvious; those comments refer to "in general", not your
specific case. More communication issues...

>> There are dozens of processing algorithms (not just talking about
>> ffmpeg).
>> There are many ways to "decomb"  something . The one you ended up using
>> is
>> categorized as  a blend deinterlacer because the top and bottom field are
>> blended with each other. If you examine the separated fields , the fields
>> are co-mingled, no longer distinct. You needed to retain both fields for
>> your purpose
> No, I don't. I don't want to retain both fields. I want to blend them.
> That's what 
> 'pp=linblenddeint' does, and that's why I'm happy with it.

Yes, that should have said "retain both fields, blended". The alternative is
dropping a field, like a standard deinterlacer.

>> There is no distinction in terms of distribution of application for this
>> type of filter.  You put the distinction on filtering specific frames by
>> using select.  You could apply blend deinterlace to every frame too (for
>> interlaced content) - how is that any different visually in terms of any
>> single frame there vs. your every 5th frame ?
> I honestly don't know. What I do know is if I pass
> select='not(eq(mod(n+1\,5)\,3))' to the output unaltered, but I filter
> select='eq(mod(n+1\,5)\,3)',pp=linblenddeint before the output, the video
> looks better on a 60Hz TV. I don't want to pass the progressive frames
> through 'pp=linblenddeint'.

It was meant as a rhetorical question... which failed...

Your combed frame looks exactly like every frame of an interlaced video. If
I took a 59.94p video and converted it to 29.97i, every frame would look
like your combed frame. The point is: visually, when looking at individual
frames, combing looks the same, and the mechanism is the same. The
underlying video can be different in terms of content (it might be
interlaced, it might be progressive), but you can't determine that from a
single frame.

>> You could have used a different filter, maybe a box blur , applied on the
>> same frame selection . Would you still call that "decomb" ?
> Yes, but only because I call the original frame "combed".

Same here - anything that works to reduce the "combing" can be categorized
as a decomb filter, including deinterlacing. Including blurring. Including
whatever x, y, z filter. It's non-specific.

>>> Regarding inverse telecine (aka INVT), I've never seen INVT that didn't
>>> yield back uncombed, purely
>>> progressive pictures (as "picture" is defined in the MPEG spec). Can
>>> you/will you enlighten me
>>> because it's simply outside my experience.
>> It happens all the time. This is simply reversing the process. Inverse
>> telecine is the same thing as "removing pulldown". You get back the
>> original
>> progressive frames you started with.
> Okay, that's what I thought. Since INVT produces the original p24, it's
> not combed. I thought you 
> said that inverse telecine can produce combing. My bad. :)


But there is something else called "residual combing". It's not the same
thing; it's when fields are slightly misaligned, such as with some old
low-quality DVDs. It's not as distinct as combing; the lines are very fine.

Also, sometimes there are things like cadence breaks - such as when edits
were made before IVTCing. Again, with lower quality productions. So the IVTC
process in software often includes adaptive field matching, additional post
processing, and comb detection. It's not just a fixed pattern.
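In ffmpeg terms that adaptive approach is the fieldmatch chain rather than a
fixed-pattern pulldown reversal - a sketch, with a placeholder input name:

  ffmpeg -i hard_telecined.vob \
    -vf "fieldmatch,yadif=deint=interlaced,decimate" out.mkv

fieldmatch does the adaptive field matching and flags frames it could not
match as interlaced, yadif=deint=interlaced deinterlaces only those flagged
frames (the residual-combing / cadence-break cases), and decimate drops the
duplicate frames to get back to 23.976.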

Sent from: http://www.ffmpeg-archive.org/
