[Libav-user] Encoding a screenshot into a video using FFMPEG
Dolevo Jay
cmst at live.com
Thu Apr 18 10:39:35 CEST 2013
If you are thinking about a commercial app, first, you should use x264 directly as the encoder, not ffmpeg's built-in one. x264 is highly optimized and the best H.264 encoder so far.
Secondly, you need to capture the screenshots in a separate thread, because you can't capture at a fixed fps with BitBlt. Depending on the content of the desktop, you will sometimes get 10 fps, sometimes 20.
> Date: Wed, 17 Apr 2013 15:28:12 -0700
> From: phuze9 at gmail.com
> To: libav-user at ffmpeg.org
> Subject: Re: [Libav-user] Encoding a screenshot into a video using FFMPEG
>
> Thanks for your help. I discovered that my output video plays fine, but
> only in ffplay; in WMP/WMP Classic/VLC it shows a gray, distorted screen.
> I have made some changes to my code since the last update, so I'll post
> what is working for me at the bottom of this message. Can anyone think of
> a reason why the video would play well in ffplay and nowhere else? Is
> there a setting that's causing this? I moved the screen-capture code into
> another function, but it's similar, with error checking around the
> GetDIBits calls (which is where my earlier screenshot error was
> occurring) and a COLORREF* rather than an RGBQUAD*.
>
> AVCodec* codec;
> AVCodecContext* c = NULL;
> uint8_t* outbuf;
> int i, out_size, outbuf_size;
>
> avcodec_register_all();
>
> printf("Video encoding\n");
>
> // Find the H.264 video encoder
> codec = avcodec_find_encoder(CODEC_ID_H264);
> if (!codec) {
> fprintf(stderr, "Codec not found\n");
> exit(1);
> }
> else printf("H264 codec found\n");
>
> c = avcodec_alloc_context3(codec);
>
> c->bit_rate = 400000;
> c->width = 1920; // resolution must be a multiple of two,
> // e.g. (1280x720), (1920x1080), (720x480)
> c->height = 1200;
> c->time_base.num = 1; // framerate numerator
> c->time_base.den = 25; // framerate denominator
> c->gop_size = 10; // emit one intra frame every ten frames
> c->max_b_frames = 1; // maximum number of b-frames between non b-frames
> //c->keyint_min = 1; // minimum GOP size
> //c->i_quant_factor = (float)0.71; // qscale factor between P and I frames
> //c->b_frame_strategy = 20;
> //c->qcompress = (float)0.6;
> //c->qmin = 20; // minimum quantizer
> //c->qmax = 51; // maximum quantizer
> //c->max_qdiff = 4; // maximum quantizer difference between frames
> //c->refs = 4; // number of reference frames
> //c->trellis = 1; // trellis RD Quantization
> c->pix_fmt = PIX_FMT_YUV420P;
> c->codec_id = CODEC_ID_H264;
> //c->codec_type = AVMEDIA_TYPE_VIDEO;
>
> // Open the encoder
> if (avcodec_open2(c, codec,NULL) < 0) {
> fprintf(stderr, "Could not open codec\n");
> exit(1);
> }
> else printf("H264 codec opened\n");
>
> // Allocate the output buffer
> outbuf_size = 100000 + c->width*c->height*(32>>3);
> outbuf = static_cast<uint8_t *>(malloc(outbuf_size));
> printf("Setting buffer size to: %d\n",outbuf_size);
>
> FILE* f = fopen("example.mpg","wb");
> if(!f) printf("x - Cannot open video file for writing\n");
> else printf("Opened video file for writing\n");
>
> // encode 5 seconds of video
> for(i=0;i<STREAM_FRAME_RATE*STREAM_DURATION;i++) {
> fflush(stdout);
>
> screenCap();
>
>
> int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
> uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
>
> AVFrame* inpic = avcodec_alloc_frame();
> AVFrame* outpic = avcodec_alloc_frame();
>
>
> outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) *
> 90);
> avpicture_fill((AVPicture*)inpic, (uint8_t*)pPixels, PIX_FMT_RGB32,
> c->width, c->height); // Fill picture with image
>
> avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width,
> c->height); // outbuffer already backs the YUV planes; a second
> // av_image_alloc here would overwrite these pointers and leak outbuffer
>
> inpic->data[0] += inpic->linesize[0]*(screenHeight-1); // Flipping frame
> inpic->linesize[0] = -inpic->linesize[0]; // Flipping frame
>
> struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight,
> PIX_FMT_RGB32, c->width, c->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR,
> NULL, NULL, NULL);
>
> sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height,
> outpic->data, outpic->linesize);
> sws_freeContext(fooContext); // recreated every iteration, so free it here
>
> // encode the image
> out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
> printf("Encoding frame %3d (size=%5d)\n", i, out_size);
> fwrite(outbuf, 1, out_size, f);
> delete [] pPixels;
> av_free(outbuffer);
> av_free(inpic);
> av_free(outpic);
> }
>
> // get the delayed frames
> for(; out_size; i++) {
> fflush(stdout);
>
> out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
> printf("Writing frame %3d (size=%5d)\n", i, out_size);
> fwrite(outbuf, 1, out_size, f);
> }
>
> // Add the sequence end code; note 0x000001B7 is an MPEG-1/2 end code
> // and has no meaning in an H.264 stream
> outbuf[0] = 0x00;
> outbuf[1] = 0x00;
> outbuf[2] = 0x01;
> outbuf[3] = 0xb7;
> fwrite(outbuf, 1, 4, f);
> fclose(f);
>
> avcodec_close(c);
> free(outbuf);
> av_free(c);
> printf("Closed codec and Freed\n");
>
>
>
> On Wed, Apr 17, 2013 at 3:50 AM, Steffen Ebersbach-2 [via libav-users] <
> ml-node+s943685n4657296h60 at n4.nabble.com> wrote:
>
> > Hi
> >
> >
> > This is my code for making a video from a screenshot. I do not use the
> > HBITMAP directly, but rather a GDI+ element; I don't think that is
> > necessary, I do it for other reasons. One idea: since in your last
> > picture the shell window is shown but the Visual Studio window is
> > missing, it may be that the Windows copy-desktop/window functions
> > can't copy elements drawn by DirectX and the like, only native GDI
> > content. This is why most remote-control software does not transmit
> > video.
> >
> > Steffen
> >
> > --------------------------------------------------
> > HWND window;
> > // get hwnd from your window / desktop / ...
> >
> > srcDC = GetDC(window);
> > tarDC = CreateCompatibleDC(srcDC);
> > m_pal = (HPALETTE) GetCurrentObject(srcDC, OBJ_PAL); //colors
> >
> > HBITMAP obmp, hbmp = CreateCompatibleBitmap(srcDC, pwidth , pheight);
> > obmp = (HBITMAP) SelectObject(tarDC, hbmp); //connect hbmp to DC
> >
> > res = SendMessage(window,WM_PRINT, (WPARAM)tarDC, (LPARAM)(PRF_CLIENT |
> > PRF_ERASEBKGND)); // Copy BMP; note ~PRF_ERASEBKGND here would set
> > // nearly every flag bit, which is almost certainly not intended
> > tarbmp = Gdiplus::Bitmap::FromHBITMAP(hbmp,m_pal);
> >
> > SelectObject(tarDC,obmp);
> > DeleteObject(hbmp);
> >
> >
> > //Stream and encoder
> > av_register_all();
> > avcodec_init();
> >
> > AVFormatContext *focontext;
> > AVStream *videostm;
> > AVCodec *videocodec;
> > AVCodecContext *videocontext;
> > AVFrame *aktframe;
> > uint8_t *framebuf1;
> > SwsContext *imgconvctx;
> >
> > //container
> > focontext = av_alloc_format_context();
> >
> > //videostream
> > videostm = av_new_stream(focontext,0);
> > videocontext = videostm->codec; // the stream owns its codec context
> > // define your parameters here
> >
> > videocodec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
> > avcodec_open(videocontext, videocodec);
> >
> > focontext->video_codec_id = videocontext->codec_id;
> > av_set_parameters(focontext,0);
> >
> >
> > //frame
> > aktframe = avcodec_alloc_frame();
> > int picsize = avpicture_get_size(PIX_FMT_YUV420P, width, height);
> > framebuf1 = (uint8_t*) av_malloc(picsize);
> > avpicture_fill( (AVPicture*)aktframe, framebuf1, PIX_FMT_YUV420P, width,
> > height);
> >
> > imgconvctx = sws_getContext(width, height ,PIX_FMT_BGR24 ,width,
> > height,PIX_FMT_YUV420P, SWS_BICUBIC , 0,0,0);
> >
> > //convert bmp
> > uint8_t *inbuffer;
> > Gdiplus::BitmapData inbmpdata;
> > Gdiplus::Rect cltrct(0,0,width,height);
> >
> > tarbmp->LockBits(&cltrct, Gdiplus::ImageLockModeRead
> > ,PixelFormat24bppRGB , &inbmpdata);
> >
> > inbuffer = (uint8_t*) inbmpdata.Scan0;
> > AVFrame *inframe = 0;
> >
> > inframe = avcodec_alloc_frame();
> > avpicture_fill( (AVPicture*)inframe, inbuffer, PIX_FMT_BGR24, width,
> > height);
> > sws_scale(imgconvctx,inframe->data , inframe->linesize,0, height,
> > aktframe->data , aktframe->linesize);
> > int videobuf_size = videocontext->bit_rate *2;
> > uint8_t *videobuf = (uint8_t*) av_malloc(videobuf_size);
> >
> > //open output
> > url_fopen(&focontext->pb,"file.mpg" , URL_WRONLY);
> > av_write_header(focontext);
> >
> > //encode and write frame
> > AVPacket avpkt;
> > av_init_packet(&avpkt);
> > int encsize = avcodec_encode_video(videocontext, videobuf ,videobuf_size,
> > aktframe);
> > avpkt.stream_index= videostm->index;
> > avpkt.data= videobuf;
> > avpkt.size= encsize;
> >
> > av_write_frame(focontext, &avpkt);
> >
> > // next frames
> > _______________________________________________
> > Libav-user mailing list
> > [hidden email]
> > http://ffmpeg.org/mailman/listinfo/libav-user
> >
> >
>
>
>
>
> --
> Sent from the libav-users mailing list archive at Nabble.com.
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user