[Libav-user] Is a buffer required while streaming using ffmpeg code?
Navin
nkipe at tatapowersed.com
Wed Nov 28 12:25:37 CET 2012
As per the ffmpeg tutorial code, for code like this:
while (av_read_frame_proc(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index == videoStream)
    {
        // Decode video frame
        avcodec_decode_video2_proc(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished)
        {
            sws_scale_proc(sws_ctx, (uint8_t const * const *)pFrame->data,
                           pFrame->linesize, 0, pCodecCtx->height,
                           pFrameRGB->data, pFrameRGB->linesize);
        }
    }
}
As a general question: if I'm receiving real-time webcam frames, or frames from a
remote video file over a very fast network, and they arrive faster than the
video's frame rate, I'd need to buffer them in my own data structure. But if my
data structure is small, won't I lose a lot of frames that didn't get buffered?
I'm asking because I read somewhere that ffmpeg buffers video internally. How
does that buffering happen? How would it be any different from implementing my
own buffer? The chance that I'd lose frames is very real, isn't it? Is there
anything I could read, or some source code I could look at, about this?
Or does all this depend on the streaming protocol being used?
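
To make "buffer the frames in my own data structure" concrete, here is the kind
of thing I have in mind. It is just a sketch, assuming a pthread mutex and a small
fixed-size ring where the oldest frame is dropped when it fills up; FrameRing,
ring_push, ring_pop and RING_SIZE are my own made-up names, not ffmpeg API:

#include <pthread.h>
#include <string.h>

#define RING_SIZE 8                 /* deliberately small */

typedef struct FrameRing {
    void           *slots[RING_SIZE];
    void          (*drop)(void *);  /* called on frames discarded when full */
    int             head;           /* index of the oldest stored frame    */
    int             count;          /* number of frames currently stored   */
    pthread_mutex_t lock;
} FrameRing;

static void ring_init(FrameRing *r, void (*drop)(void *))
{
    memset(r, 0, sizeof(*r));
    r->drop = drop;
    pthread_mutex_init(&r->lock, NULL);
}

/* Producer side (the av_read_frame/decode loop): store a decoded frame,
 * discarding the oldest one if the ring is already full. */
static void ring_push(FrameRing *r, void *frame)
{
    pthread_mutex_lock(&r->lock);
    if (r->count == RING_SIZE) {
        r->drop(r->slots[r->head]);          /* buffer full: lose the oldest frame */
        r->head = (r->head + 1) % RING_SIZE;
        r->count--;
    }
    r->slots[(r->head + r->count) % RING_SIZE] = frame;
    r->count++;
    pthread_mutex_unlock(&r->lock);
}

/* Consumer side (display thread paced at the video's frame rate):
 * take the oldest frame, or NULL if the ring is empty. */
static void *ring_pop(FrameRing *r)
{
    void *frame = NULL;
    pthread_mutex_lock(&r->lock);
    if (r->count > 0) {
        frame = r->slots[r->head];
        r->head = (r->head + 1) % RING_SIZE;
        r->count--;
    }
    pthread_mutex_unlock(&r->lock);
    return frame;
}

With something like that, any frame arriving while the ring is full is simply
thrown away, which is exactly the loss I'm worried about. So I'd like to know
whether ffmpeg's internal buffering already handles this better, or whether it
is essentially the same kind of fixed-size queue.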
--
Navin