[Libav-user] Video and audio timing / syncing
Brad O'Hearne
brado at bighillsoftware.com
Mon Apr 1 06:57:24 CEST 2013
On Mar 31, 2013, at 9:20 PM, Kalileo <kalileo at universalx.net> wrote:
>> In my specific use case, I had configured a minimum frame rate of 24 fps on my QTCaptureDecompressedVideoOutput, and so expecting that frame rate, I configured my codec context time_base.den to be 24 as well. What happened, however, is that despite being configured to output 24 fps,
>> it actually output fewer fps, and when that happened, even though the pts and dts values were the exact ones delivered on the sample buffers,
>
> Did you ever check the resulting video pts and dts values? When you say "the pts and dts values were the exact ones delivered on the sample buffers" you talk about the input, before encoding, and not the output, after encoding, right?
I checked them both. They are accurate when the time_base.den is hardcoded to the actual fps being received from capture.
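For reference, this is what that hardcoding looks like -- a minimal sketch in libav C, assuming a capture rate of 24 fps; it is the case in which the output timestamps come out correct:

    #include <libavcodec/avcodec.h>

    /* A minimal sketch, assuming the capture source really delivers 24 fps:
     * the codec context time base is hardcoded to match that rate. */
    static void configure_time_base(AVCodecContext *enc_ctx)
    {
        enc_ctx->time_base.num = 1;
        enc_ctx->time_base.den = 24; /* correct only while capture actually runs at 24 fps */
    }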
> How can the encoder possibly know that it gets 15 fps instead of the promised 25? It needs the correct info to calculate the next DTS/PTS.
Therein lies the problem -- the reality is that the actual fps is not known prior to encoding. So your question can be answered another way: the capture frame rate is not known, and cannot be accurately known, at the point when time_base.den needs to be set. Each frame, however, is delivered with an accurate presentation time, decode time, and duration, all expressed in ticks of the capture time scale rather than derived from a frame rate.
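To illustrate (a sketch only -- the helper name is hypothetical, and the capture-side parameters stand in for QTKit's QTTime fields): because the capture timestamps are expressed against a time scale, they can be rescaled into whatever time base the encoder ends up using, via av_rescale_q(), with no frame-rate assumption anywhere:

    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    /* Hypothetical helper: convert a capture timestamp (ticks of the capture
     * time scale, e.g. QTTime.timeValue against QTTime.timeScale) into the
     * encoder's time base. */
    static int64_t capture_ts_to_codec_ts(int64_t capture_ticks,
                                          int32_t capture_time_scale,
                                          AVRational codec_time_base)
    {
        AVRational capture_tb = { 1, capture_time_scale };
        return av_rescale_q(capture_ticks, capture_tb, codec_time_base);
    }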
I suppose my thought is this -- why does the encoder need the frame rate to calculate the next DTS/PTS at all? Given each frame's DTS/PTS and its duration, the frame rate seems unnecessary: doesn't DTS/PTS + duration yield the next DTS/PTS, essentially realizing whatever frame rate is actually taking place?
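Spelled out, the arithmetic I have in mind is just this (a sketch, assuming pts and duration are in the same time base):

    #include <stdint.h>

    /* Sketch of the claim above: with pts and duration in the same time base,
     * the next pts follows by addition, so a variable capture rate simply
     * falls out of the sums. */
    static int64_t next_pts(int64_t pts, int64_t duration)
    {
        return pts + duration; /* holds whether frames arrive at 24, 15, or varying fps */
    }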
Brad