[MEncoder-users] compression: 1080i versus 720p

Raimund Berger raimund.berger at gmail.com
Wed Dec 23 22:53:40 CET 2009


Andrew Berg <bahamutzero8825 at gmail.com> writes:

> On 12/21/2009 7:28 PM, Raimund Berger wrote:
>> It does, not in terms of pixel value data if you will, but in terms of
>> compressibility.
> I meant raw video. I know that deinterlacing improves compressibility.
> The comment
>> When you deinterlace the 1080i video, you're reducing it to 30fps.
> states that deinterlacing reduces the frame rate, which directly affects
> how much raw video data there is. There is no mention of compressibility
> here. Deinterlacing does not reduce the frame rate, therefore the
> statement is false. Also, I doubt highly that deinterlacing reduces the
> required bitrate (at a given constant quality) nearly as much as the
> reduced frame rate (or technically 60 fields for 1080i vs. 60 full
> frames for 720p).

It's already been said that interlaced material, strictly speaking, has
no frame rate, only a field rate. So when somebody says "reducing the
frame rate", you're supposed to apply exactly the line of reasoning
which has been laid out in detail already: turning 60 fields per second
into 30 progressive frames per second is a reduction in that sense.
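For concreteness, the raw-data arithmetic behind the 60-fields-versus-
60-frames comparison can be written out like this (my numbers, assuming
1080i60 with 1920x540 fields and 720p60 with full 1280x720 frames):

```python
# Illustrative raw pixel rates (my arithmetic, not from the thread):
# 1080i60 delivers 60 half-height fields per second,
# 720p60 delivers 60 full progressive frames per second.
rate_1080i = 1920 * 540 * 60   # pixels/s carried by the interlaced fields
rate_720p  = 1280 * 720 * 60   # pixels/s carried by the progressive frames
print(rate_1080i, rate_720p)   # 1080i still carries more raw pixels/s
```

So even counted as fields, 1080i carries somewhat more raw pixel data
per second than 720p, which is why the comparison can't be settled by
frame counts alone.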

Then, back to data amounts and bitrate. Deinterlacing interpolates, or
otherwise calculates, some of an image's pixels from others. There you
already have the reduction: the data input is reduced, and parts of the
image are constructed algorithmically.
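To make "constructed algorithmically" concrete, here is a minimal
sketch of a simple "bob"-style interpolation (my illustration, not any
particular filter's algorithm; real deinterlacers are far more
sophisticated):

```python
# Minimal "bob" deinterlacing sketch: the lines missing from one field
# are interpolated from their vertical neighbours, so half of the output
# pixels are computed rather than taken from the source.
def bob_deinterlace(field):
    """Expand one field (a list of pixel rows) to a full frame by
    averaging each missing row from the field rows around it."""
    frame = []
    for i, row in enumerate(field):
        frame.append(row)
        nxt = field[i + 1] if i + 1 < len(field) else row
        # interpolated row: average of the two surrounding field rows
        frame.append([(a + b) // 2 for a, b in zip(row, nxt)])
    return frame

# two field lines in, four frame lines out; the 50s are interpolated
print(bob_deinterlace([[0, 0, 0], [100, 100, 100]]))
```

Half the output frame never existed in the input; it was predicted from
the other half, which is exactly the kind of regularity a codec likes.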

As to the bit rate, the above process obviously plays into the hands of
the various intra- and inter-prediction algorithms applied in
compression. Consider for example the infamous "combing" you get if
you just weave two fields together. Besides being visually unpleasant,
it just doesn't compress well, because every other line can't easily be
predicted from the previous one (which carries down to the block level,
of course).

Look, it's not about being right or wrong. It's also not about you
doubting this or that. The tech just works as it does.
