[FFmpeg-devel] Politics

Soft Works softworkz at hotmail.com
Wed Dec 22 12:23:00 EET 2021



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Michael
> Niedermayer
> Sent: Tuesday, December 21, 2021 7:39 PM
> To: FFmpeg development discussions and patches <ffmpeg-devel at ffmpeg.org>
> Subject: Re: [FFmpeg-devel] Politics
> 
> On Mon, Dec 20, 2021 at 11:20:38PM +0000, Soft Works wrote:
> [...]
> > > Enlightened by this, let's go back to your example. The EU frame has a
> > > duration of 40ms, the US frame 33.3ms. The US frame starts 0.02ms later
> > > and is fully included in the duration of the EU frame.
> >
> > Seems I accidentally deleted a paragraph:
> >
> > The two frames are almost congruent in start and, to a large percentage,
> > in duration, and as they are meant to present the same picture, it cannot
> > happen at all that only one of them would have a hard change (like from
> > white to black, or something appearing or disappearing within a short
> 
> video frames often are not sampled across the whole period representing the
> frame. Just look at some video with fast-moving things.
> 
> Now, what can give you a sub-ms change:
> a scene change, an instantaneous cut from one scene to another,
> a flipped light switch, an explosion, an electric arc striking something,
> a camera flash, a spinning wheel with holes or a black/white pattern,
> a laser pointer just gently waving over your camera or something seen by
> your camera, a fast-moving object between the camera and light source,
> and many more
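
The frame-grid relationship in the quoted example can be sketched with exact
rational arithmetic (illustrative only, not code from this thread; the 25 fps
and 30 fps rates are the assumed "EU" and "US" rates from the example):

```python
# Sketch (illustrative, not from the thread): comparing the frame grids of
# a 25 fps "EU" stream (40 ms frames) and a 30 fps "US" stream (33 1/3 ms
# frames) with exact rational arithmetic.
from fractions import Fraction

eu_dur = Fraction(1, 25)   # 40 ms per frame
us_dur = Fraction(1, 30)   # 33 1/3 ms per frame

def frame_start(n, dur):
    """Start time (in seconds) of frame n on a fixed-rate grid."""
    return n * dur

# The two grids re-align every 200 ms: 5 EU frames == 6 US frames,
# so most frame pairs overlap heavily but start slightly apart.
assert frame_start(5, eu_dur) == frame_start(6, us_dur) == Fraction(1, 5)

# Offset between the second frame of each grid: 1/30 - 1/25 = -1/150 s,
# i.e. the US frame starts about 6.7 ms earlier than the EU frame.
offset = frame_start(1, us_dur) - frame_start(1, eu_dur)
assert offset == Fraction(-1, 150)
```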

Yes, but that invalidates your claim that videos created from
computer-generated sources would always be "exact". And in fact,
those things are done in a very different way.

I have been familiar with 3D animation since the initial release of
3D Studio (the DOS-based one, not Max), and some friends of mine do
this professionally, so I'm quite familiar with the procedures.

It is important to understand that 3D animation, with its tooling
and all the techniques involved, does not simulate reality.
Instead, it is only about creating visuals that look as close to
reality as possible, and the methods involved differ in many ways
from the actual physics that drives our reality.

So, one important thing to note is that it doesn't work the way you
described above, where an animation is created once, independent
of presentation framerate, and videos could then be rendered from
that definition at arbitrary framerates. There is no way to do it
like that, given how animations are produced nowadays.

Just the opposite is true: I don't know of any other video-creation
discipline where the individual output frame "raster" is more important
and receives more attention and individual work than computer
animation.
After the creative part (and other steps) is done, the whole
animation undergoes another stage that is all about output
frames. Every single frame of the animation is rendered for the
given output framerate, and all key-frame timings and
values are specifically adjusted for that output framerate.

There are many reasons why this needs to be done; mostly,
it boils down to avoiding visual artifacts, imperfections and
"ghost" visuals, and to having "clean cuts" (intra-scene,
not the cutting that is done even later between scenes).

This is a tedious and time-consuming process which can take
many weeks and a large team, depending on the length of the
output.
When output at a different framerate is needed, that whole
process needs to be repeated, and in the end you will
have a different animation definition for the new framerate,
with hundreds or thousands of tiny differences.

Another point one might find interesting is that none of
those applications works with time bases at precisions
like the ones we are talking about here.
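
As an aside, that sort of precision can be sketched with exact rational
arithmetic, the way FFmpeg counts timestamps in AVRational time bases
(a minimal sketch; the 1/90000 time base below is only an illustrative
choice, common for MPEG-TS):

```python
# A minimal sketch, assuming FFmpeg-style rational time bases (AVRational):
# timestamps are integers counted in units of a per-stream time base, so
# frame timing stays exact instead of drifting in floating point.
from fractions import Fraction

time_base = Fraction(1, 90000)            # illustrative 90 kHz time base
frame_dur = Fraction(1, 25) / time_base   # 3600 ticks per 25 fps frame
assert frame_dur == 3600

# Integer PTS values for the first few frames
pts = [n * 3600 for n in range(4)]

# Converting ticks back to seconds is exact rational arithmetic
assert pts[1] * time_base == Fraction(1, 25)
```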

I'm still wondering whether somebody will get the twist...

softworkz
