[Libav-user] Documentation on libav buffering behaviour when encoding live video

rob at meades.org
Tue Dec 3 13:15:44 EET 2024


I am capturing YUV420 images with libcamera, have them arriving in a
DMA buffer in a callback, and would like to pass those frames to
libav* to be encoded as H.264 in an HLS stream.  I have the basics
working, in that I can get encoded video to land in a .ts file that a
player can read sensibly; however, it doesn't work for very long
(just 45 frames) and certainly isn't timed correctly.
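
For reference, the pattern I'm using is the usual send/receive loop,
roughly like this (a simplified sketch rather than my exact code;
encoderContext and writePacketToHls() are placeholder names and error
handling is trimmed):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Placeholder for the muxing step, e.g. av_interleaved_write_frame();
// stubbed out here
static void writePacketToHls(const AVPacket * /*packet*/)
{
}

static int encodeFrame(AVCodecContext *encoderContext, AVFrame *frame)
{
    // frame->pts must already be set, in encoderContext->time_base
    // units; passing frame == nullptr flushes the encoder
    int ret = avcodec_send_frame(encoderContext, frame);
    if (ret < 0) {
        return ret;
    }
    AVPacket *packet = av_packet_alloc();
    while (ret >= 0) {
        // Drain all pending packets: the encoder may buffer several
        // input frames internally before the first packet emerges
        ret = avcodec_receive_packet(encoderContext, packet);
        if ((ret == AVERROR(EAGAIN)) || (ret == AVERROR_EOF)) {
            ret = 0;
            break;
        } else if (ret < 0) {
            break;
        }
        writePacketToHls(packet);
        av_packet_unref(packet);
    }
    av_packet_free(&packet);
    return ret;
}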

What I'm looking for is documentation on how buffering works in libav
for the video encoding process,
avcodec_send_frame()/avcodec_receive_packet().  Specifically:

- does the encoder keep a pointer to the YUV buffers that I pass to
  it, or does it copy them?
- do _I_ need to copy the image data out of the DMA buffers myself,
  or can I expect avcodec_send_frame() to do that?
- can I configure the encoder to work directly from my (four) DMA
  buffers, so that a copy is avoided entirely (I had been hoping to
  avoid a copy)?  See the sketch below for the kind of thing I mean.
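
My (possibly mistaken) reading of the avcodec_send_frame()
documentation is that the encoder copies the frame data unless the
frame is reference-counted, in which case it may simply keep a
reference; if that's right, something like the following ought to let
the encoder use a DMA buffer in place.  dmaBufferDone() and
markDmaBufferFree() are made-up names for whatever hands the buffer
back to libcamera, and this assumes the Y/U/V planes are contiguous
with no stride padding:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
}

// Made-up hook: return the DMA buffer to libcamera's free list
extern void markDmaBufferFree(void *bufferHandle);

// Called by libavutil when the last reference to the wrapped buffer
// is released, i.e. when the encoder has finished with the data and
// the DMA buffer can safely be reused
static void dmaBufferDone(void *opaque, uint8_t * /*data*/)
{
    markDmaBufferFree(opaque);
}

// Wrap an existing YUV420 DMA buffer in a reference-counted AVFrame
// without copying the pixel data
static AVFrame *wrapDmaBuffer(uint8_t *dmaData, size_t dmaSize,
                              int width, int height, void *bufferHandle)
{
    AVFrame *frame = av_frame_alloc();
    if (frame == nullptr) {
        return nullptr;
    }
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width = width;
    frame->height = height;
    // The AVBufferRef is what makes the frame reference-counted,
    // allowing the encoder to keep a reference instead of copying
    frame->buf[0] = av_buffer_create(dmaData, dmaSize, dmaBufferDone,
                                     bufferHandle, AV_BUFFER_FLAG_READONLY);
    if (frame->buf[0] == nullptr) {
        av_frame_free(&frame);
        return nullptr;
    }
    // Point data[]/linesize[] at the planes within the wrapped buffer
    av_image_fill_arrays(frame->data, frame->linesize, dmaData,
                         AV_PIX_FMT_YUV420P, width, height, 1);
    return frame;
}

If that is the right model then, presumably, the constraint is that I
must not let libcamera recycle a DMA buffer until dmaBufferDone() has
been called for it; with only four buffers in flight, that's exactly
the kind of behaviour I'd like to see documented.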

All of the C++ examples I can find use generated static images or read
frames from OpenCV, etc.; they don't really address the dynamics of
buffering a live stream on a relatively constrained device (a Pi Zero
2 W).

Can anyone point me at FFmpeg documentation that covers this?

My code so far is here, in case it helps with answering questions:

https://github.com/RobMeades/watchdog/blob/96d7986de11e472d54ba86663a1b177701fde61b/software/watchdog.cpp#L556

Rob

