[FFmpeg-devel] [RFC/PATCH]Interlaced lossless
Dave Rice
dave at dericed.com
Tue Feb 12 02:24:22 CET 2013
On Jan 28, 2013, at 10:50 AM, Peter B. <pb at das-werkstatt.com> wrote:
> Quoting Tim Nicholson <nichot20 at yahoo.com>:
>
>> On 26/01/13 12:05, Peter B. wrote:
>>> Which commandline argument is used to tell lossless codecs like FFv1 or Huffyuv whether the material is to be considered progressive or interlaced - and where is it stored: in the container or the codec - or both?
>>
>> '-top' can be used as both an input and an output option to set interlace
>> flags. Some codecs embed this information in the stream. For others,
>> e.g. rawvideo, there is no place for it in the stream, so it has to be in
>> the container. This is why the QuickTime spec mandates providing this
>> information in a moov atom.
>
> I'll try using "-top". Thanks for the hint.
> It is indeed a good question whether, for example, FFv1 stores interlacing flags in the stream itself or relies on the container to do so.
>
>
>> For lossless codecs I'm not sure that the codec will work any
>> differently between interlaced and progressive; you just need to ensure
>> that the final output is flagged correctly.
>
> The good thing about lossless is that the bits will be preserved. However, I do suspect that, depending on the encoding algorithm, different compression ratios might be achieved based on whether the material is encoded as interlaced or as progressive.
>
> For example, usually codecs looove adjacent areas with similar colors, right? Even lossless ones (I hope I'm not talking bullshit here).
> So, if the encoder encodes fields (for interlaced material) rather than full frames, there would be more similar-colored pixels adjacent to each other than if it encoded progressively.
> The same goes for progressive material: if encoded field-wise, you might lose compression that would otherwise be gained from adjacent similar-colored pixels/areas.
>
> But that's just my personal knowhow of encoding. So if I'm completely wrong, please tell me so I can get a better understanding.
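To make the '-top' suggestion above concrete, something along these lines should flag a top-field-first source on the output (just a sketch; "input.mov" and "output.mov" are placeholder names, and I have not checked how each container actually records the flag):
ffmpeg -i input.mov -top 1 -c:v ffv1 -c:a copy output.mov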
To test the field-versus-frame compression question, I encoded the bottom field of an interlaced video with FFV1, then the top field into a second FFV1 file, and then the full frames.
I made my sample with this:
ffmpeg -i http://archive.org/download/dn2008-1231_vid/dn2008-1231.mpeg -t 60 -c copy interlace_test.mpeg
Then ran:
ffmpeg -y -i interlace_test.mpeg -vf field=bottom -c:v ffv1 -f rawvideo bottom
ffmpeg -y -i interlace_test.mpeg -vf field=top -c:v ffv1 -f rawvideo top
ffmpeg -y -i interlace_test.mpeg -c:v ffv1 -f rawvideo fullframe
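and then compared the byte counts of the three outputs with a plain directory listing:
ls -l bottom top fullframe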
The resulting filesizes were:
bottom = 169,463,250 bytes
top = 170,381,647 bytes
fullframe = 297,632,691 bytes
bottom+top = 339,844,897 bytes
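For reference, the arithmetic: (169,463,250 + 170,381,647) / 297,632,691 ≈ 1.14.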
Thus encoding the bottom and top fields separately used about 14% more bytes than encoding the full frames. I would have expected field-based encoding of an interlaced source to be slightly smaller than full-frame encoding, but 14% is a substantial difference. Is there an error in my test?
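One further comparison might be worth running (just a sketch; it assumes a build that has the il filter, and "fields_grouped" is a placeholder name): deinterleave the field lines into the top and bottom halves of each frame, so both fields stay in one file but each field's lines are contiguous, and see whether that encode lands closer to fullframe or to bottom+top:
ffmpeg -y -i interlace_test.mpeg -vf il=l=d:c=d -c:v ffv1 -f rawvideo fields_grouped
If the adjacent-similar-lines reasoning above holds, this file should come out smaller than fullframe.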
Dave Rice