[Libav-user] yadif deinterlace how?
Stefano Sabatini
stefano.sabatini-lala at poste.it
Sat Jun 18 15:54:43 CEST 2011
On date Saturday 2011-06-18 13:13:32 +0200, Robert Nagy encoded:
> An update, with less questions.
>
> struct deinterlacer::implementation
> {
> std::shared_ptr<AVFilterGraph> graph_;
> std::queue<std::shared_ptr<AVFrame>> out_buffer_;
>
> implementation(size_t width, size_t height, PixelFormat pix_fmt)
> : graph_(avfilter_graph_alloc(),
>          [](AVFilterGraph* p){ avfilter_graph_free(&p); })
> {
> // The filter contexts don't need to be freed?
> // Does avfilter_graph_free handle this?
Yes, avfilter_graph_free() frees all the filters contained in the
graph, so you don't need to free the filter contexts yourself.
> AVFilterContext* video_in_filter;
> AVFilterContext* video_yadif_filter;
> AVFilterContext* video_out_filter;
>
> // What is this "buffer" filter and why do I need to configure it with
> // time_base?
Read the docs and the vsrc_buffer.h interface. It's used to feed frames
into a filtergraph in a convenient way, as all the low-level details are
hidden inside.
time_base is needed since the provided AVFrame or AVFilterBufferRef
doesn't carry time_base information, which is *required* for correctly
interpreting the passed PTS (presentation timestamps).
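For example, ffmpeg.c builds the args string from the input stream's
time base, along these lines (just a sketch; check vsrc_buffer.c in your
tree for the exact parameter list, which may also include the sample
aspect ratio):

  char args[256];
  /* time_base here stands for your input stream's time base, e.g. 1/25
   * for 25 fps material; the PTS you later set on the frames sent into
   * the graph is interpreted in these units. */
  snprintf(args, sizeof(args), "%d:%d:%d:%d:%d",
           width, height, pix_fmt, time_base.num, time_base.den);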
> char args[256];
> std::snprintf(args, sizeof(args), "%d:%d:%d:%d:%d",
>               (int)width, (int)height, (int)pix_fmt, 0, 0); // is 0, 0 ok?
>
> avfilter_graph_create_filter(&video_in_filter,
> avfilter_get_by_name("buffer"),
> "src", args, NULL, graph_.get());
> avfilter_graph_create_filter(&video_yadif_filter,
> avfilter_get_by_name("yadif"), "deinterlace", NULL, NULL, graph_.get());
> avfilter_graph_create_filter(&video_out_filter,
> avfilter_get_by_name("nullsink"),
> "out", NULL, NULL, graph_.get());
>
> // avfilter_graph_create_filter has already added the filters to the
> // graph, so they only need to be linked together before configuring.
> avfilter_link(video_in_filter, 0, video_yadif_filter, 0);
> avfilter_link(video_yadif_filter, 0, video_out_filter, 0);
> avfilter_graph_config(graph_.get(), NULL);
> }
>
> std::shared_ptr<AVFrame> execute(const safe_ptr<AVFrame>& frame)
> {
> if(!out_buffer_.empty())
> {
> auto result = out_buffer_.front();
> out_buffer_.pop();
> return result;
> }
>
> AVFilterLink* out = graph_->filters[graph_->filter_count-1]->inputs[0]; // last filter's input
>
> // How do I send frames into the filter graph?
> // Is av_vsrc_buffer_add_frame removed? What should I use instead?
> //av_vsrc_buffer_add_frame(video_in_filter, frame, 0);
No, but the syntax recently changed, check latest git and
libavfilter/vsrc_buffer.h and libavfilter/avcodec.h.
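With the prototype currently in libavfilter/avcodec.h it should look
something like this (a sketch only; the exact signature is an assumption
on my side since it changed more than once, so verify it in your tree):

  /* video_in_filter is the buffer source context created above; pts is
   * whatever timestamp you track for the frame. The PTS is taken from
   * frame->pts and interpreted in the time_base configured on the buffer
   * source; the last argument is a flags field, 0 meaning none. */
  frame->pts = pts;
  av_vsrc_buffer_add_frame(video_in_filter, frame, 0);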
> // Is this the correct way to read filter output?
> int ret = avfilter_poll_frame(out);
> for(int n = 0; n < ret; ++n)
> out_buffer_.push(get_frame(out));
>
> // Don't fall off the end: return a frame if one was produced, else null.
> if(out_buffer_.empty())
> return nullptr;
> auto result = out_buffer_.front();
> out_buffer_.pop();
> return result;
> }
>
> std::shared_ptr<AVFrame> get_frame(AVFilterLink* link)
> {
> avfilter_request_frame(link);
Looks right. Alternatively you may use a buffer sink (yet to be
committed, see cmdutils.c in the FFmpeg source).
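Note that avfilter_request_frame() may fail, so the poll/request loop is
more robust with a check, e.g. (sketch):

  int avail = avfilter_poll_frame(out);
  for (int n = 0; n < avail; ++n) {
      if (avfilter_request_frame(out) < 0)
          break; // EOF or error: no new buffer was put on the link
      AVFilterBufferRef *buf = out->cur_buf;
      /* ... copy buf into an AVFrame and queue it ... */
  }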
>
> auto buf = link->cur_buf;
>
> std::shared_ptr<AVFrame> result(avcodec_alloc_frame(), av_free);
> avcodec_get_frame_defaults(result.get());
> result->format = buf->format;
> result->width = buf->video->w;
> result->height = buf->video->h;
>
> assert(sizeof(result->linesize) <= sizeof(buf->linesize));
> memcpy(result->linesize, buf->linesize, sizeof(result->linesize));
>
> // Copy buf->data to result->data, is there any function that does this
> // for me?
What are you trying to do?
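If you just need a deep copy of the picture data, av_image_copy() from
libavutil/imgutils.h should do it; a minimal sketch (you have to
allocate the destination planes first, here with av_image_alloc()):

  /* Allocate the destination planes, then copy plane by plane honoring
   * the linesizes. */
  av_image_alloc(result->data, result->linesize,
                 buf->video->w, buf->video->h,
                 (enum PixelFormat)buf->format, 16);
  av_image_copy(result->data, result->linesize,
                (const uint8_t **)buf->data, buf->linesize,
                (enum PixelFormat)buf->format, buf->video->w, buf->video->h);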