[FFmpeg-user] (de)-interlacing question
Andy Furniss
adf.lists at gmail.com
Thu Nov 24 15:01:43 EET 2016
Toerless Eckert wrote:
> On Tue, Nov 22, 2016 at 07:30:48PM +0000, Andy Furniss wrote:
>> Toerless Eckert wrote:
>>
>>> Well, but what I am claiming is that they were interlacing
>>> progressive HD to create interlaced SD.
>>
>> Maybe, if the master is 720p50. I don't think they would
>> interpolate from 720p25 to do that though.
>
> Yep. Checked a new recording: the H264 HD TS from Astra is 720p50
> with 50 real frames per second, the MediaPortal H264 MP4 file is
> flagged 720p50 but is actually 720p25 with duplicated frames, and the
> MPEG2 SD TS from Astra is 720i50.
>
> So the good news is that the SD picture is built from original source
> frames rather than interpolated ones, but that still leaves me
> wondering how best to deal with deinterlacing. The threads I can find
> comparing different deinterlacing options are quite inconclusive to me.
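(As an aside, if you ever want to double-check what a stream really
contains, ffmpeg's idet and mpdecimate filters give a quick read. A
rough sketch - the input names here are just placeholders:

ffmpeg -i hd_from_astra.ts -vf idet -an -frames:v 1000 -f null -
ffmpeg -i mediaportal.mp4 -vf mpdecimate -an -f null -

idet prints counts of frames it thinks are TFF/BFF/progressive, and
with mpdecimate the output frame count drops to roughly half if every
other frame is a duplicate.)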
Depends what you want/need to do. Personally I wouldn't de-interlace
anything I wanted to keep, but then I wouldn't recode either - I mean
gigabytes are far cheaper than they used to be.
If you must recode then it's possible to encode H.264 as MBAFF - though
care is needed WRT field order so you don't end up trashing the fields.
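Untested off the top of my head, but with libx264 the interlaced
(MBAFF) route looks something like this, assuming a TFF source - check
the field order of your source first:

ffmpeg -i input.ts -c:v libx264 -crf 18 -flags +ilme+ildct -top 1 \
  -c:a copy output.mkv

The +ilme+ildct flags turn on interlaced encoding (x264 does MBAFF)
and -top 1 marks it top-field-first.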
yadif=1 for field-rate output seems mostly good enough. mcdeint can be
better, but takes ages. Some of the others, I find, look a bit crap on
diagonals with SD that's going to get scaled up on playback.
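For reference, the sort of thing I mean (filter options from memory,
so double-check against the docs):

ffmpeg -i input.ts -vf yadif=1 -c:v libx264 -crf 18 -c:a copy out_yadif.mkv
ffmpeg -i input.ts -vf "yadif=1,mcdeint=mode=extra_slow:parity=tff:qp=10" \
  -c:v libx264 -crf 18 -c:a copy out_mcdeint.mkv

yadif=1 gives you 50 frames out for 50 fields in; mcdeint needs that
one-field-per-frame input in front of it, and extra_slow really is slow.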
Depending on what GPU/TV you have you could in theory get a nice
de-interlace on playback. Intel's motion-adaptive VAAPI deinterlacer
looked OK when I tested it some time ago. It's even possible, though
tricky, to get some TVs to deinterlace for you, if they automagically
deinterlace when fed an interlaced mode.
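If you'd rather bake the Intel deinterlace in with ffmpeg than rely on
the player, ffmpeg has a deinterlace_vaapi filter; if your build has
it, the path is roughly this (assuming an Intel iGPU at
/dev/dri/renderD128 - untested here, so treat it as a sketch):

ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi \
  -hwaccel_output_format vaapi -i input.ts \
  -vf deinterlace_vaapi=mode=motion_adaptive:rate=field \
  -c:v h264_vaapi -qp 20 -c:a copy out.mkv

rate=field keeps the full 50Hz motion rather than dropping to 25fps.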
> *sigh*
>
>>> I thought it might have gotten a lot easier through all the
>>> experience collected with motion estimation. Aka: work in the
>>> DCT domain, interpolate motion vectors and residual error - or
>>> something like that.
>>
>> AIUI encoders get it easy in comparison to interpolation. An
>> encoder has the ground truth for reference, so even if it can't
>> find good motion vectors it can correct the difference with the
>> residual or intra code a block.
>
> Use ground truth from 50p recordings to create 25p reference streams
> to train a neural network. Nnedi already seems to use a neural
> network for deinterlacing. Would guess it's using a similar
> approach.
IIRC it just scales up fields - albeit nicely.
I've never seen a deinterlacing paper that uses neural networks that
way - which doesn't mean there isn't one.
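If you want to play with it: ffmpeg has an nnedi filter, though it
needs the weights file from the original nnedi3 project (options from
memory, so check the docs):

ffmpeg -i input.ts -vf nnedi=weights=nnedi3_weights.bin \
  -c:v libx264 -crf 18 -c:a copy out_nnedi.mkv

It interpolates the missing lines within each field - a neural-net
spatial scaler with no motion compensation - which fits with it being
"scaling up fields".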