[FFmpeg-user] ffmpeg architecture question
Paul B Mahol
onemda at gmail.com
Tue Apr 21 11:45:50 EEST 2020
On 4/20/20, Mark Filipak <markfilipak.windows+ffmpeg at gmail.com> wrote:
> Hi, Ted,
>
> On 04/20/2020 06:20 AM, Edward Park wrote:
>> Hey,
>>
>>>> I don't understand what you mean by "recursively".
>>>
>>> Haven't you heard? There's no recursion. There's no problem. The 'blend'
>>> filter just has some fun undocumented features. Hours and hours, days and
>>> days of fun. So much fun I just can't stand it. Too much fun.
>>
>> There's no recursion because a filtergraph is typically supposed to be a
>> directed acyclic graph; there is no hierarchy to traverse.
>
> Thank you. Yes, I see that now. I thought that filtergraphs recursed
> (and failed in this case) because when I placed 'datagraph' filters
> into the filtergraph, I saw only the frames that succeeded (i.e., made
> their way to the output), not all frames -- 'datagraph' doesn't work
> like an oscilloscope -- so the behavior appeared to be failure to
> recurse. I did not try splitting out a separate stream and 'map'ping
> it to a 2nd output video as you suggested -- thank you -- but I trust
> that technique works and I will use it in the future.
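Tapping off a copy of the stream for inspection is a one-line job; a minimal
sketch, assuming a hypothetical input in.mkv, writing the untouched copy to
tap.mkv alongside the filtered main output:

    ffmpeg -i in.mkv -filter_complex "[0:v]split[main][tap]" \
        -map "[main]" filtered.mkv -map "[tap]" tap.mkv

In a real graph the filters under test go on the [main] branch; the [tap]
branch keeps every input frame so it can be stepped through afterwards.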
>
>> It's true that blend doesn't specify which of its two input frames it
>> takes the timestamps from; the only reason that poses a problem is that
>> a later filter ends up getting two frames with the exact same timestamp,
>> because they were split earlier on in the digraph. And it's not obvious
>> by any means, but you can sort of deduce that blend takes the timestamps
>> from the first input stream: blend having a "top" and "bottom" stream
>> (on the z-axis, lest this cause any more confusion) implies an operation
>> similar to the overlay filter applied to two inputs that each go through
>> some other filter, with an added alpha channel, and the description of
>> the overlay filter says the first input is the "main" that the second
>> "overlay" is composited on.
>
> I now appreciate that 'blend' has a "preferred" input similar to
> 'overlay', but that behavior is not documented. In the case of
> 'overlay', the name "main" doesn't convey that meaning, and in the
> case of 'blend', the behavior isn't documented at all. Both filters'
> documentation should explain how timestamps control the output and
> that the 1st filter-input's timestamp determines the filter-output's
> timestamp.
The blend filter has not had a preferred input for a long time.
If I received a coin for every piece of misinformation you and others
posted in this thread, I would already be a very rich person.
Sometimes it is simply best to just leave **** at the top.
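Which input's timestamps a given build propagates can be checked empirically
rather than argued about; a minimal sketch, assuming a hypothetical input
in.mkv, with a deliberate half-second offset on one branch so the two sets of
timestamps are distinguishable in the log:

    ffmpeg -i in.mkv -filter_complex \
        "[0:v]split[a][b];[a]showinfo[a1];[b]setpts=PTS+0.5/TB,showinfo[b1];[a1][b1]blend=all_mode=average,showinfo" \
        -frames:v 20 -f null -

Comparing the pts_time values printed by the three showinfo instances shows
which input's timestamps survive the blend.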
>
>> On a different note, in the interest of making the flow of frames within
>> the filtergraph something simple enough to picture using my rather simple
>> brain ...
>
> You are far too modest, Ted. ;-)
>
>> ... this is my attempt at simplifying a filtergraph you posted a while
>> ago. I'm not sure if it's accurate, and I can't tell whether I'm
>> reproducing the same result even when frame stepping (to compare frame
>> by frame, I had to compare it against another telecine, and the only
>> one I'd seen is the 3-2 pulldown; I really cannot tell the difference
>> when playing at speed, and while I can tell them apart stepping frame
>> by frame, I couldn't identify which was which without drawing a label
>> on them).
>
> I single-step through frames to see the effect of the filter (which is
> targeted solely at (n+1)%5==3 frames, so it is easy to distinguish by
> simply counting: 0... (step) 1... (step) 2...). MPV permits such
> single-stepping. I don't know whether ffplay does.
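For what it's worth, ffplay does: the 's' key pauses and steps to the next
video frame. Another way to inspect exactly the frames in question is to dump
them to images for unhurried comparison; a rough sketch, assuming a
hypothetical input named in.mkv:

    ffmpeg -i in.mkv -vf "select='eq(mod(n+1\,5)\,3)'" -vsync vfr check_%04d.png

-vsync vfr keeps only the selected frames instead of duplicating frames to
fill the dropped positions.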
>
>> Could you see if it actually does do the same thing?
>> telecine=pattern=5,select='n=2:e=ifnot(mod(mod(n,5)+1,3),1,2)'[C],split[AB_DE],select='not(mod(n+3,4))'[B],[C][B]blend[B/C],[AB_DE][B/C]interleave
>
> "do the same thing"? Do you mean: Do the same thing as 23 pull-down?
> 23 pull-down: A B B+C C+D D E F F+G G+H H ... 30fps
> 55 pull-down: A A A+B B B C C C+D D D ... 60fps
>
> You see, it's for situations like this that I portray frames like this:
>
> |<--------------------------1/6s-------------------------->|
> [A/a__________][B/b__________][C/c__________][D/d__________]
> [A/a_______][B/b_______][B/c_______][C/d_______][D/d_______] 23-telecine
> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_] 55-telecine
>
> I find such timing diagrams to be simple to understand and unambiguous.
> They clearly show what happens to top/bottom half-pictures.
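If a reference stream for each cadence is needed for side-by-side comparison,
the telecine filter can generate both directly; a rough sketch, assuming a
hypothetical 24 fps progressive input in.mkv:

    ffmpeg -i in.mkv -vf telecine=pattern=23 out_2-3.mkv
    ffmpeg -i in.mkv -vf telecine=pattern=5  out_55.mkv

pattern=23 gives the familiar 30 fps 2-3 cadence, while pattern=5 holds each
frame for five fields, which is the 60 fps cadence shown in the 55-telecine
line of the diagram above.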
>
>> The pads are labeled according to an ABCDE pattern at the telecine, I
>> don't know if that makes sense or is correct at all.
>
> I believe that the names of pads are arbitrary. I use [A][B][C]...
> because they are easy to see and because they are compact.
>
>> It does make it possible to 4-up 1920x1080 streams with different
>> filters and compare them in real time without falling below ~60fps. I
>> think the fact that "split" actually copies a stream, while "select"
>> splits a stream, is kind of confusing now. "Select" also adds another
>> stream of video, but I think splitting and then using select with
>> boolean expressions to discard the frames that aren't selected has to
>> be wasteful.
>
> Is there any alternative? I think not. I seek to filter solely frames
> 2, 7, 12, 17, etc., so I use (n+1)%5==3 (i.e.,
> select='eq(mod(n+1\,5)\,3)').
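As far as I know there isn't a built-in way to run a filter on only a subset
of frames in place, so the split/select/process/interleave pattern is the
usual workaround; a rough sketch, assuming a hypothetical input in.mkv and
using negate purely as a stand-in for the real per-frame processing:

    ffmpeg -i in.mkv -filter_complex \
        "[0:v]split[x][y];[x]select='not(eq(mod(n+1\,5)\,3))'[keep];[y]select='eq(mod(n+1\,5)\,3)',negate[mod];[keep][mod]interleave" \
        out.mkv

Both select branches keep the original timestamps, so interleave merges them
back into the original frame order, with only the (n+1)%5==3 frames having
passed through the extra filter.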