[FFmpeg-user] Deinterlace and change framerate with -vcodec copy

Peter Bašista pbasista at gmail.com
Sun May 22 14:38:49 CEST 2011

Hello once again!

>> That would mean that in 25 fps interlaced video some of the adjacent
>> fields would be 40 ms apart and some would be 0 ms apart. I just
>> wanted to merge the ones that are 0 ms apart into single frames. And I
>> thought it might be possible without (much) reencoding.
> That would be called Segmented Field and has a shorthand of 25PsF. Most
> films are broadcast as 25PsF in 50Hz territories although the MPEG
> frames will indicate that they are interlaced.
> http://en.wikipedia.org/wiki/Interlaced
> http://en.wikipedia.org/wiki/Progressive_segmented_frame

Thank you for explaining this to me.

>> Now I know it can not be done like this. There are not any adjacent
>> fields which are 0 ms apart. That's why it would be inappropriate to
>> simply merge them.
>>> It is not something I would want to do, but I do have a vague
>>> recollection of some bit of software being able to twiddle with the
>>> flags on an mpeg2 video stream to alter the interlaced/progressive
>>> flags. But since, in general, doing so would be horrible, I've forgotten
>>> what it is.
>> Now I am a little bit confused. Just to make it clear: are you
>> talking about the container format or the video codec here? The
>> MPEG-2 transport stream (mpegts) or the MPEG-2 video codec (mpeg2video)?
> Each MPEG2/4 frame includes a flag to indicate if that frame is
> interlaced and whether its top-field-first or bottom field first.

I did not know that either. Thank you again!

>> But either way, I thought that you have just been trying to point out
>> that what makes a video interlaced or deinterlaced is the whole nature
>> of encoding the frames, fields, etc. And now you say something about a
>> flag that could change a video from interlaced to deinterlaced and
>> vice versa? Just like that? By altering flag? How is that possible?
> It is possible to have a piece of software that reads the transport
> stream and alters the interlaced/progressive/TFF/BFF flags on each
> frame. I do not know of software that can do that.

All right. But let's say there is software which alters these
flags. What exactly would software of this kind do?

Take, for example, a frame with flags: interlaced (I), TFF. How could
software of this kind make a progressive (P) frame out of this
interlaced frame just by altering the flags? Because that's my point:
I don't see a way to merge the two fields of a frame into one full
progressive frame without reencoding.

I mean: the fields are interlaced, right? The first one contains
the odd lines of a frame, the second one the even lines.
So, if you just put the data they contain one after the other in a
single frame, you get a pretty messed up picture, which is not the
same as the original full frame, am I correct?
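
To illustrate the point (a hypothetical sketch, not real codec code): reconstructing a frame from its two fields means *interleaving* their lines by parity, commonly called weaving. Simply concatenating the field data, as described above, scrambles the spatial line order:

```python
# Illustrative sketch: a "frame" is just a list of rows.
# Weaving interleaves the two fields line by line; plain
# concatenation of the field data puts the rows in the wrong order.

def weave(top_field, bottom_field):
    """Rebuild a frame by alternating top-field and bottom-field rows."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

# A 4-line progressive frame split into its two fields:
top = ["row0", "row2"]      # even-numbered lines -> top field
bottom = ["row1", "row3"]   # odd-numbered lines  -> bottom field

print(weave(top, bottom))   # ['row0', 'row1', 'row2', 'row3']
print(top + bottom)         # ['row0', 'row2', 'row1', 'row3'] - scrambled
```

So a flag change alone cannot recover the progressive frame; something has to physically reorder the lines.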

>>> Ok. Think about how h.264 is encoded. It contains few 'I' frames
>>> (effectively a full frame - only compressed) but it also contains 'P'
>>> frames and 'B' frames that both just encode the differences between
>>> other frames and this frame. If you just simply throw away frames, as
>>> your opinion suggests should be possible, then you are just throwing
>>> away frames that OTHER frames REQUIRE in order to be able to create them.
>> All right, here we completely misunderstand each other. I do not
>> suggest that it is possible to throw away frames. Far from it!
>> I just want the frames to "remain longer on the screen" when playing
>> :) ... For example, a frame will not be displayed for 1/25th of a
>> second but for 1/24th of a second. And that, in my opinion, should be
>> pretty simple to achieve, ... I would expect that changing some video
>> codec flag would do the trick.
>>> So, no, you cannot just throw away frames in a h.264 stream (unless your
>>> h.264 stream is 'I' frame only, which yours will not be).
>> All right, we made that clear and I know I can not.
> Each frame of video and audio in your transport stream includes a
> Presentation Time Stamp. Audio frames are a different size to video
> frames. The PTS is used to (a) present the video/audio in sync and (b)
> to present the video/audio at the correct speed.

That's another thing I did not know. Thank you!

Do I understand correctly that the PTS are present in an MPEG transport
stream? What happens to them if I change the container format to mkv or
avi, for example? Are they still present and providing video/audio sync,
or do these formats provide different ways of syncing video and audio?

> It is not inconceivable for a piece of software to read the transport
> stream and alter the PTS of every frame to make the video run slow.

Now that would be really awesome! I think it is exactly what I am looking for.

> Now, what happens to the audio.
> The audio stream is encoded at (say) 48,000
> samples per second. If you have altered the PTS of the video and audio
> frames so that they are presented at 96% of full speed then the audio
> sample rate needs to be altered to 46,080 samples per second - which is
> not (as far as I know) something that can be done.
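
The 96% figure follows from slowing 25 fps down to 24 fps; a quick check of the arithmetic:

```python
# Slowing 25 fps video to 24 fps stretches playback by 25/24,
# i.e. the content runs at 24/25 = 96% of its original speed.
orig_fps, new_fps = 25, 24
speed = new_fps / orig_fps      # 0.96

orig_rate = 48000               # audio samples per second
new_rate = orig_rate * speed    # 46080.0
print(new_rate)
```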

Well, as far as I know, it is no problem to resample an audio file to
an arbitrary sample rate. For example, using sox:

sox input.wav output.wav rate 46080

The resulting file has the same audio length but its sample rate is
46,080. And it can be played just fine using mplayer.

You can also just alter the header of an input file and force the
different sample rate like this:

sox -r 46080 input.wav output.wav

In this case, provided that the input file has a sample rate of
48,000, the resulting file will take more time to play, but its size
will be exactly the same. And it can still be played just fine using
mplayer.
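
The different behaviour of the two sox invocations follows directly from duration = samples / rate (illustrative numbers, assuming one minute of 48 kHz audio):

```python
# One minute of audio at 48 kHz:
n_samples = 48000 * 60

# 'sox input.wav output.wav rate 46080' resamples: the sample count
# changes, the duration does not.
resampled_samples = 46080 * 60
print(resampled_samples / 46080)    # 60.0 seconds, smaller file

# 'sox -r 46080 input.wav output.wav' keeps every sample but relabels
# the rate in the header: same size, longer playback.
relabelled_duration = n_samples / 46080
print(relabelled_duration)          # 62.5 seconds, same file size
```

Note that 62.5 / 60 = 25/24, exactly the stretch factor needed for a 25 fps to 24 fps slowdown.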

> I think your idea of messing with the file is doomed.

:) I still think there is a way to do it correctly.

Okay. Let's not worry about the audio for now. Let's suppose it can
have an arbitrary sample rate. Now, if I want to change the PTS of
the video and audio frames, how can I do that? Is it possible to do
it with ffmpeg?

Even if I had to do some audio reencoding afterwards, that is still much
less time-consuming than video reencoding. That's my point, after all:
I want to avoid video reencoding whenever possible.
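
Whatever tool ends up doing it, the PTS rescaling itself is simple arithmetic. MPEG transport streams express PTS in a 90 kHz clock, so a 25 fps to 24 fps slowdown multiplies every timestamp by 25/24; a hypothetical sketch:

```python
# MPEG-TS presentation timestamps tick at 90 kHz.
PTS_CLOCK = 90000

def rescale_pts(pts_values, orig_fps=25, new_fps=24):
    """Stretch timestamps so each frame stays on screen longer."""
    return [pts * orig_fps // new_fps for pts in pts_values]

# Five frames at 25 fps: one frame every 90000/25 = 3600 ticks.
frames = [i * PTS_CLOCK // 25 for i in range(5)]
print(frames)               # [0, 3600, 7200, 10800, 14400]

# After rescaling, frames are 90000/24 = 3750 ticks apart: 24 fps.
print(rescale_pts(frames))  # [0, 3750, 7500, 11250, 15000]
```

The video bitstream itself is untouched; only the timing metadata changes, which is why no video reencoding would be needed.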

Peter Basista
