[FFmpeg-user] Sync troubles with input pipes from raw -> AAC/Scaling/H.264 -> live RTMP
Heinrich Fink
heinrich.fink at hfink.eu
Sat Mar 23 16:02:43 CET 2013
Hi,
I am having trouble with A/V sync in a live RTMP output stream generated by ffmpeg. After playing around with the parameters for a few days, I still haven’t been able to get rid of the sync issues. I hope someone on the list can spot a problem in my command or advise me on how to avoid the sync trouble. I would greatly appreciate any help here.
Here’s the story: to publish a live RTMP stream, I use two named input pipes carrying raw video and raw audio, fed by an application that can only output raw formats (at “realtime” pace). ffmpeg encodes the audio to AAC, and de-interlaces, resizes, and encodes the video to H.264. Finally, both streams are muxed into flv and published via RTMP. In general this works very well and the results look great. Unfortunately, however, A/V seems to drift after the stream has been running for a couple of hours.
Here’s the full command:
ffmpeg \
-vsync cfr \
-c:v rawvideo \
-top:v 1 \
-video_size 1920x1080 \
-pixel_format yuv420p \
-f rawvideo \
-fflags nobuffer \
-i /tmp/videopipe \
-c:a pcm_s16le \
-ac 2 \
-ar 48000 \
-f s16le \
-fflags nobuffer \
-analyzeduration 0 \
-i /tmp/audiopipe \
-c:v libx264 \
-thread_type:v slice+frame \
-map 0:0,1:0 \
-map 1:0 \
-loglevel info \
-vf "yadif=0:-1:0, scale=640:360" \
-af "aresample=async=1" \
-profile:v main \
-preset:v faster \
-tune zerolatency \
-g 100 \
-x264opts level=3.1:bitrate=1000:vbv-bufsize=2000:vbv-maxrate=1000 \
-strict -2 \
-acodec aac \
-ab 192000 \
-ac 2 \
-ar 48000 \
-r 25 \
-fflags nobuffer \
-f flv rtmp://[URL]
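(For completeness: the two named pipes are ordinary FIFOs created beforehand, e.g.

mkfifo /tmp/videopipe /tmp/audiopipe

which the source application then opens for writing while ffmpeg reads from them.)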
I have tried several vsync settings and “map” variations. I believe the above command should sync the video stream to the audio stream (via the sync-stream part of the -map parameter, so that video frames are duped or dropped as needed), which seems most sensible for our scenario. I have tried it the other way around as well, with no different result, i.e. A/V still drifts.
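For reference, the “other way around” variant simply swaps the sync stream in the -map options (everything else in the command unchanged), so that audio would be adjusted against the video timestamps instead:

-map 0:0 \
-map 1:0,0:0 \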
This makes me believe that maybe feeding two different named pipes as input is not as reliable as I thought. I was working under the assumption that the timestamps of each input stream are generated simply by counting the video frames or audio samples read from the pipe. Since I am able to confirm that the source application feeds exactly the right amount of audio into the audio pipe for each video frame it puts into the video pipe, I would expect ffmpeg’s view of the input pipes’ timestamps to match the source application’s, and therefore A/V should stay in sync. And if ffmpeg sees those timestamps drift later in the process, e.g. because of “congestion” at the output or hiccups of the encoder, I would expect it to correct this according to the vsync/map/aresample policy. Am I misunderstanding ffmpeg’s timing here?
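To put numbers on this: at 25 fps and 48 kHz stereo s16le, each video frame corresponds to exactly 48000/25 = 1920 audio samples, i.e. 7680 bytes on the audio pipe per frame. As a sanity check of what ffmpeg actually derives from the pipes, a stripped-down command along these lines (a debugging sketch, not my production command; it publishes nothing) prints per-frame/per-buffer timestamps via the showinfo and ashowinfo filters:

ffmpeg \
-f rawvideo -pixel_format yuv420p -video_size 1920x1080 -i /tmp/videopipe \
-f s16le -ac 2 -ar 48000 -i /tmp/audiopipe \
-vf showinfo -af ashowinfo \
-f null -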
Also, I am a bit unsure whether I need the “-re” (read input at native rate) flag for the input pipes or not. I am currently leaving it out, because the source application is “live” already, i.e. it cannot feed the pipes faster than realtime, so “-re” shouldn’t be necessary.
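If it were needed, my understanding is that -re is a per-input option and would have to be given once before each -i, roughly:

ffmpeg \
-re -f rawvideo ... -i /tmp/videopipe \
-re -f s16le ... -i /tmp/audiopipe \
...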
Finally, I was wondering whether I could avoid the trouble of using two separate input pipes altogether if there were something like a “raw” interleaved A/V format that would let me feed audio and video through a single pipe. I don’t think there is such a thing in ffmpeg, is there? Would it make sense to contribute something like that?
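One thing I did notice is ffmpeg’s own nut container, which can carry rawvideo and pcm_s16le interleaved over a single pipe; if the source application could be taught to write nut (a big if), the input side would collapse to something like:

ffmpeg -f nut -i /tmp/avpipe \
... (same filters, encoders and flv/RTMP output as above)

But that is no longer a purely “raw” format, hence the question.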
Does anyone have advice on the sync issues, or on whether using two input pipes is a valid approach at all?
thanks,
Heinrich