[FFmpeg-devel] [PATCH] reinterlace - tinterlace-like filter under LGPL
Thomas Mundt
tmundt75 at gmail.com
Wed Feb 21 21:20:42 EET 2018
Hi,
2018-02-12 16:37 GMT+01:00 Vasile Toncu <vasile.toncu at tremend.com>:
> Hello,
>
> there have been some discussions about the tinterlace filter licensing. In
> the end, I was unable to contact all the copyright holders.
>
> The main author, from the MPlayer project, is Michael Zucchi. It is quite
> probable that the copyright is held by the company he worked for, Ximian,
> which no longer exists. It is unlikely that I will be able to get the
> approval of all the parties involved.
>
> However, some of the later developers of tinterlace agreed to release the
> parts they wrote under LGPL. I mention here Thomas Mundt and Stefano
> Sabatini.
>
> That being said, I have come up with a new filter - reinterlace - which
> implements all the tinterlace functionality and adds a few more features.
>
> The new filter is available in ffmpeg without --enable-gpl and/or
> --enable-nonfree. However, if these configure options are specified,
> reinterlace will use the ASM optimizations imported from tinterlace. I've
> reused the support for 16-bit depth video from the code written by Thomas
> Mundt. I added 2 new modes, MERGE_BFF and MERGE_TFF. I've also changed
> MODE_PAD so it does not drop the last frame from the input, as tinterlace
> did.
>
> In terms of performance, reinterlace gives basically the same fps as
> tinterlace does.
>
> Here is the patch that adds the filter. If everything goes well with this
> patch, I'll send a follow-up patch that replaces the current tinterlace
> with reinterlace.
>
Since I'm the maintainer of the tinterlace filter, I should review your patch.
Unfortunately I don't have the possibility to compile or test it ATM.
So the review is incomplete and the following comments cover only part of the
changes that might be necessary.
Also, I don't know which requirements have to be fulfilled for porting code
from GPL to LGPL.
I will need help from more experienced FFmpeg developers.
> Thanks,
>
> -Vasile Toncu
>
> From 45010f4b4671edfe1318b84285d09dd28a882d63 Mon Sep 17 00:00:00 2001
> From: Vasile Toncu <vasile.toncu at tremend.com>
> Date: Mon, 12 Feb 2018 14:16:27 +0200
> Subject: [PATCH] Added reinterlace filter.
>
> ---
> libavfilter/Makefile | 1 +
> libavfilter/allfilters.c | 1 +
> libavfilter/reinterlace.h | 141 +++++++
> libavfilter/vf_reinterlace.c | 773 ++++++++++++++++++++++++++++++++++
> libavfilter/x86/Makefile | 1 +
> libavfilter/x86/vf_reinterlace_init.c | 101 +++++
> 6 files changed, 1018 insertions(+)
> create mode 100644 libavfilter/reinterlace.h
> create mode 100644 libavfilter/vf_reinterlace.c
> create mode 100644 libavfilter/x86/vf_reinterlace_init.c
>
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 6a60836..c3095ba 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -286,6 +286,7 @@ OBJS-$(CONFIG_RANDOM_FILTER) += vf_random.o
> OBJS-$(CONFIG_READEIA608_FILTER) += vf_readeia608.o
> OBJS-$(CONFIG_READVITC_FILTER) += vf_readvitc.o
> OBJS-$(CONFIG_REALTIME_FILTER) += f_realtime.o
> +OBJS-$(CONFIG_REINTERLACE_FILTER) += vf_reinterlace.o
> OBJS-$(CONFIG_REMAP_FILTER) += vf_remap.o framesync.o
> OBJS-$(CONFIG_REMOVEGRAIN_FILTER) += vf_removegrain.o
> OBJS-$(CONFIG_REMOVELOGO_FILTER) += bbox.o lswsutils.o
> lavfutils.o vf_removelogo.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 9adb109..60fb9b5 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -295,6 +295,7 @@ static void register_all(void)
> REGISTER_FILTER(READEIA608, readeia608, vf);
> REGISTER_FILTER(READVITC, readvitc, vf);
> REGISTER_FILTER(REALTIME, realtime, vf);
> + REGISTER_FILTER(REINTERLACE, reinterlace, vf);
> REGISTER_FILTER(REMAP, remap, vf);
> REGISTER_FILTER(REMOVEGRAIN, removegrain, vf);
> REGISTER_FILTER(REMOVELOGO, removelogo, vf);
> diff --git a/libavfilter/reinterlace.h b/libavfilter/reinterlace.h
> new file mode 100644
> index 0000000..bb66f63
> --- /dev/null
> +++ b/libavfilter/reinterlace.h
> @@ -0,0 +1,141 @@
> +/*
> + * Copyright (c) 2017 Vasile Toncu <toncu.vasile at gmail.com>
> + * Copyright (c) 2017 Thomas Mundt <tmundt75 at gmail.com>
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#include <stdint.h>
> +
> +#include "avfilter.h"
> +#include "formats.h"
> +#include "internal.h"
> +#include "video.h"
> +#include "libavutil/avassert.h"
> +#include "libavutil/imgutils.h"
> +#include "libavutil/opt.h"
> +#include "libavutil/pixdesc.h"
> +
> +#include "libavutil/bswap.h"
> +
> +enum FilterMode {
> + MODE_MERGE,
> + MODE_DROP_EVEN,
> + MODE_DROP_ODD,
> + MODE_PAD,
> + MODE_INTERLEAVE_TOP,
> + MODE_INTERLEAVE_BOTTOM,
> + MODE_INTERLACE_X2,
> + MODE_MERGE_X2,
> + MODE_MERGE_TFF,
> + MODE_MERGE_BFF,
> + MODE_NB
> +};
> +
> +enum FilterFlags {
> + FLAG_NOTHING = 0x00,
> + FLAG_VLPF = 0x01,
> + FLAG_EXACT_TB = 0x02,
> + FLAG_CVLPF = 0x04,
> + FLAG_NB
> +};
> +
> +static const AVRational standard_tbs[] = {
> + {1, 25},
> + {1, 30},
> + {1001, 30000},
> +};
> +
> +typedef struct {
> + const AVClass *class;
> + int mode;
> + int flags;
> +
> + AVFrame *prev_frame, *current_frame;
> + int64_t current_frame_index;
> +
> + uint8_t *black_vec[4];
> +
> + int skip_next_frame;
> +
> + void *thread_data;
> +
> + uint8_t bit_depth;
> +
> + void (*lowpass_line)(uint8_t *dstp, ptrdiff_t width, const uint8_t
> *srcp,
> + ptrdiff_t mref, ptrdiff_t pref, int clip_max);
> +
> + AVRational preout_time_base;
> +
> +} ReInterlaceContext;
> +
> +#if CONFIG_GPL
> +void ff_reinterlace_init_x86(ReInterlaceContext *reinterlace);
> +#endif
> +
> +#define OFFSET(x) offsetof(ReInterlaceContext, x)
> +#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
> +
> +static const AVOption reinterlace_options[] = {
> + { "mode", "set mode", OFFSET(mode), AV_OPT_TYPE_INT,
> {.i64=MODE_MERGE}, 0, MODE_NB - 1, FLAGS, "mode" },
> + { "merge", "merge frames", 0,
> AV_OPT_TYPE_CONST, {.i64=MODE_MERGE}, INT_MIN, INT_MAX,
> FLAGS, "mode"},
> + { "drop_even", "drop even frames", 0,
> AV_OPT_TYPE_CONST, {.i64=MODE_DROP_EVEN}, INT_MIN, INT_MAX,
> FLAGS, "mode"},
> + { "drop_odd", "drop odd frames", 0,
> AV_OPT_TYPE_CONST, {.i64=MODE_DROP_ODD}, INT_MIN, INT_MAX,
> FLAGS, "mode"},
> + { "pad", "pad lines of a frame with black
> lines", 0, AV_OPT_TYPE_CONST,
> {.i64=MODE_PAD}, INT_MIN, INT_MAX, FLAGS, "mode"},
> + { "interleave_top", "interleave top and bottom frames", 0,
> AV_OPT_TYPE_CONST, {.i64=MODE_INTERLEAVE_TOP}, INT_MIN, INT_MAX,
> FLAGS, "mode"},
> + { "interleave_bottom", "interleave bottom and top frames", 0,
> AV_OPT_TYPE_CONST, {.i64=MODE_INTERLEAVE_BOTTOM}, INT_MIN, INT_MAX,
> FLAGS, "mode"},
> + { "interlacex2", "interlace consecutive frames", 0,
> AV_OPT_TYPE_CONST, {.i64=MODE_INTERLACE_X2}, INT_MIN, INT_MAX,
> FLAGS, "mode"},
> + { "mergex2", "just like merge, but at the same frame
> rate", 0, AV_OPT_TYPE_CONST, {.i64=MODE_MERGE_X2},
> INT_MIN, INT_MAX, FLAGS, "mode"},
> + { "merge_tff", "merge frames using top_field_first
> information", 0, AV_OPT_TYPE_CONST,
> {.i64=MODE_MERGE_TFF}, INT_MIN, INT_MAX, FLAGS, "mode"},
> + { "merge_bff", "Mmerge frames using top_field_first
> information", 0, AV_OPT_TYPE_CONST,
> {.i64=MODE_MERGE_BFF}, INT_MIN, INT_MAX, FLAGS, "mode"},
> +
> + { "flags", "add flag for reinterlace", OFFSET(flags),
> AV_OPT_TYPE_INT, {.i64=FLAG_NOTHING}, 0, 0xFF, FLAGS, "flags" },
> + { "low_pass_filter", "low pass fitler", 0,
> AV_OPT_TYPE_CONST, {.i64 = FLAG_VLPF}, INT_MIN, INT_MAX, FLAGS, "flags"},
> + { "vlpf", "low pass filter", 0,
> AV_OPT_TYPE_CONST, {.i64 = FLAG_VLPF}, INT_MIN, INT_MAX, FLAGS, "flags"},
> + { "complex_filter", "enable complex vertical low-pass
> filter", 0, AV_OPT_TYPE_CONST, {.i64 = FLAG_CVLPF},INT_MIN, INT_MAX,
> FLAGS, "flags" },
> + { "cvlpf", "enable complex vertical low-pass
> filter", 0, AV_OPT_TYPE_CONST, {.i64 = FLAG_CVLPF},INT_MIN, INT_MAX,
> FLAGS, "flags" },
> + { "exact_tb", "force a timebase which can represent
> timestamps exactly", 0, AV_OPT_TYPE_CONST, {.i64 = FLAG_EXACT_TB}, INT_MIN,
> INT_MAX, FLAGS, "flags" },
> + { NULL }
> +};
> +
> +AVFILTER_DEFINE_CLASS(reinterlace);
> +
> +#define IS_ODD(value) (value & 1)
> +
> +typedef struct ReInterlaceThreadData {
> + AVFrame *out, *first, *second;
> + int plane;
> + ReInterlaceContext *reinterlace;
> +
> + int scale_w_plane12_factor;
> + int scale_h_plane12_factor;
> +
> +} ReInterlaceThreadData;
> +
> +static enum AVPixelFormat all_pix_fmts[] = {
> + AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
> + AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
> + AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV444P,
> + AV_PIX_FMT_YUV420P10LE, AV_PIX_FMT_YUV422P10LE,
> + AV_PIX_FMT_YUV440P10LE, AV_PIX_FMT_YUV444P10LE,
> + AV_PIX_FMT_YUV420P12LE, AV_PIX_FMT_YUV422P12LE,
> + AV_PIX_FMT_YUV440P12LE, AV_PIX_FMT_YUV444P12LE,
> + AV_PIX_FMT_YUVA420P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA444P,
> + AV_PIX_FMT_YUVA420P10LE, AV_PIX_FMT_YUVA422P10LE,
> AV_PIX_FMT_YUVA444P10LE,
> + AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ444P,
> AV_PIX_FMT_YUVJ440P,
> + AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE
> +};
> diff --git a/libavfilter/vf_reinterlace.c b/libavfilter/vf_reinterlace.c
> new file mode 100644
> index 0000000..13330c0
> --- /dev/null
> +++ b/libavfilter/vf_reinterlace.c
> @@ -0,0 +1,773 @@
> +/*
> + * Copyright (c) 2017 Vasile Toncu <toncu.vasile at gmail.com>
> + * Copyright (c) 2017 Thomas Mundt <tmundt75 at gmail.com>
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Reinterlace filter
> + */
> +
> +
> +#include "reinterlace.h"
> +
> +
> +
> +static av_cold int init(AVFilterContext *ctx)
> +{
> + ReInterlaceContext *reinterlace = ctx->priv;
> + int i;
> +
> + for (i = 0; i < 4; i++)
> + reinterlace->black_vec[i] = NULL;
> +
> + reinterlace->thread_data = av_malloc(4 *
> sizeof(ReInterlaceThreadData));
> +
> + return 0;
> +}
> +
> +static int query_formats(AVFilterContext *ctx)
> +{
> +
> + AVFilterFormats *fmts_list;
> +
> + fmts_list = ff_make_format_list(all_pix_fmts);
> +
> + if (!fmts_list)
> + return AVERROR(ENOMEM);
> +
> + return ff_set_common_formats(ctx, fmts_list);
> +}
> +
> +static void lowpass_line_c(uint8_t *dstp, ptrdiff_t width, const uint8_t
> *srcp,
> + ptrdiff_t mref, ptrdiff_t pref, int clip_max)
> +{
> + const uint8_t *srcp_above = srcp + mref;
> + const uint8_t *srcp_below = srcp + pref;
> + int i;
> + for (i = 0; i < width; i++) {
> + // this calculation is an integer representation of
> + // '0.5 * current + 0.25 * above + 0.25 * below'
> + // '1 +' is for rounding.
> + dstp[i] = (1 + srcp[i] + srcp[i] + srcp_above[i] + srcp_below[i])
> >> 2;
> + }
> +}
> +
> +static void lowpass_line_c_16(uint8_t *dst8, ptrdiff_t width, const
> uint8_t *src8,
> + ptrdiff_t mref, ptrdiff_t pref, int
> clip_max)
> +{
> + uint16_t *dstp = (uint16_t *)dst8;
> + const uint16_t *srcp = (const uint16_t *)src8;
> + const uint16_t *srcp_above = srcp + mref / 2;
> + const uint16_t *srcp_below = srcp + pref / 2;
> + int i, src_x;
> + for (i = 0; i < width; i++) {
> + // this calculation is an integer representation of
> + // '0.5 * current + 0.25 * above + 0.25 * below'
> + // '1 +' is for rounding.
> + src_x = av_le2ne16(srcp[i]) << 1;
> + dstp[i] = av_le2ne16((1 + src_x + av_le2ne16(srcp_above[i])
> + + av_le2ne16(srcp_below[i])) >> 2);
> + }
> +}
> +
> +static void lowpass_line_complex_c(uint8_t *dstp, ptrdiff_t width, const
> uint8_t *srcp,
> + ptrdiff_t mref, ptrdiff_t pref, int
> clip_max)
> +{
> + const uint8_t *srcp_above = srcp + mref;
> + const uint8_t *srcp_below = srcp + pref;
> + const uint8_t *srcp_above2 = srcp + mref * 2;
> + const uint8_t *srcp_below2 = srcp + pref * 2;
> + int i, src_x, src_ab;
> + for (i = 0; i < width; i++) {
> + // this calculation is an integer representation of
> + // '0.75 * current + 0.25 * above + 0.25 * below - 0.125 * above2
> - 0.125 * below2'
> + // '4 +' is for rounding.
> + src_x = srcp[i] << 1;
> + src_ab = srcp_above[i] + srcp_below[i];
> + dstp[i] = av_clip_uint8((4 + ((srcp[i] + src_x + src_ab) << 1)
> + - srcp_above2[i] - srcp_below2[i]) >> 3);
> + // Prevent over-sharpening:
> + // dst must not exceed src when the average of above and below
> + // is less than src. And the other way around.
> + if (src_ab > src_x) {
> + if (dstp[i] < srcp[i])
> + dstp[i] = srcp[i];
> + } else if (dstp[i] > srcp[i])
> + dstp[i] = srcp[i];
> + }
> +}
> +
> +static void lowpass_line_complex_c_16(uint8_t *dst8, ptrdiff_t width,
> const uint8_t *src8,
> + ptrdiff_t mref, ptrdiff_t pref,
> int clip_max)
> +{
> + uint16_t *dstp = (uint16_t *)dst8;
> + const uint16_t *srcp = (const uint16_t *)src8;
> + const uint16_t *srcp_above = srcp + mref / 2;
> + const uint16_t *srcp_below = srcp + pref / 2;
> + const uint16_t *srcp_above2 = srcp + mref;
> + const uint16_t *srcp_below2 = srcp + pref;
> + int i, dst_le, src_le, src_x, src_ab;
> + for (i = 0; i < width; i++) {
> + // this calculation is an integer representation of
> + // '0.75 * current + 0.25 * above + 0.25 * below - 0.125 * above2
> - 0.125 * below2'
> + // '4 +' is for rounding.
> + src_le = av_le2ne16(srcp[i]);
> + src_x = src_le << 1;
> + src_ab = av_le2ne16(srcp_above[i]) + av_le2ne16(srcp_below[i]);
> + dst_le = av_clip((4 + ((src_le + src_x + src_ab) << 1)
> + - av_le2ne16(srcp_above2[i])
> + - av_le2ne16(srcp_below2[i])) >> 3, 0, clip_max);
> + // Prevent over-sharpening:
> + // dst must not exceed src when the average of above and below
> + // is less than src. And the other way around.
> + if (src_ab > src_x) {
> + if (dst_le < src_le)
> + dstp[i] = av_le2ne16(src_le);
> + else
> + dstp[i] = av_le2ne16(dst_le);
> + } else if (dst_le > src_le) {
> + dstp[i] = av_le2ne16(src_le);
> + } else
> + dstp[i] = av_le2ne16(dst_le);
> + }
> +}
> +
> +/**
> + * allocate memory for a black frame
> + */
> +static int init_black_buffers(ReInterlaceContext *reinterlace,
> AVFilterLink *inlink, int format)
> +{
> + int black_vec_size = inlink->w * inlink->h * 3;
> + int val_black = 16;
> + int i;
> +
> + if (AV_PIX_FMT_YUVJ420P == format ||
> + AV_PIX_FMT_YUVJ422P == format ||
> + AV_PIX_FMT_YUVJ440P == format ||
> + AV_PIX_FMT_YUVJ444P == format) {
> +
> + val_black = 0;
> +
> + }
> +
> + for (i = 0; i < 4; i++) {
> + reinterlace->black_vec[i] = av_malloc(black_vec_size);
> +
> + if (!reinterlace->black_vec[i] )
> + return AVERROR(ENOMEM);
> +
> + memset(reinterlace->black_vec[i], (0 == i || 3 == i ? val_black
> : 128), black_vec_size);
> + }
>
This only seems to be correct for 8-bit. Did you test it with higher bit
depths and compare the results with the tinterlace filter?
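For the 10/12 bit formats the buffer has to hold 16-bit samples and the
black/grey levels scale with the bit depth, so a plain memset() is not enough.
Just to illustrate what I mean, a rough and untested sketch (the helper name
and its parameters are only illustrative, they are not part of your patch):

    static void fill_black_plane(uint8_t *buf, int nb_samples, int depth,
                                 int is_chroma, int full_range)
    {
        if (depth <= 8) {
            memset(buf, is_chroma ? 128 : (full_range ? 0 : 16), nb_samples);
        } else {
            /* two bytes per sample, levels shifted up with the depth;
             * endianness handling is left out here for brevity */
            uint16_t *p  = (uint16_t *)buf;
            uint16_t val = is_chroma ? (128 << (depth - 8))
                                     : (full_range ? 0 : (16 << (depth - 8)));
            int i;
            for (i = 0; i < nb_samples; i++)
                p[i] = val;
        }
    }

The allocation size of black_vec would probably also have to be checked for
the 16-bit formats.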
> +
> + return 0;
> +}
> +
> +static int config_out_props(AVFilterLink *outlink)
> +{
> + AVFilterContext *ctx = outlink->src;
> + AVFilterLink *inlink = outlink->src->inputs[0];
> + ReInterlaceContext *reinterlace = ctx->priv;
> + const AVPixFmtDescriptor *fmt_desc = av_pix_fmt_desc_get(outlink->format);
> + int reinterlace_mode = reinterlace->mode;
> + int ret;
> +
> + reinterlace->bit_depth = fmt_desc->comp[0].depth;
> + reinterlace->preout_time_base = inlink->time_base;
> +
> + switch (reinterlace_mode) {
> + case MODE_MERGE:
> + outlink->w = inlink->w;
> + outlink->h = 2 * inlink->h;
> + outlink->sample_aspect_ratio = av_mul_q(inlink->sample_aspect_ratio,
> av_make_q(2, 1));
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
> + case MODE_PAD:
> + outlink->w = inlink->w;
> + outlink->h = 2 * inlink->h;
> + outlink->sample_aspect_ratio = av_mul_q(inlink->sample_aspect_ratio,
> av_make_q(2, 1));
> +
> + ret = init_black_buffers(reinterlace, inlink, outlink->format);
> +
> + if (ret < 0)
> + return ret;
> +
> + break;
> +
> + case MODE_DROP_EVEN:
> + outlink->w = inlink->w;
> + outlink->h = inlink->h;
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
> + case MODE_DROP_ODD:
> + outlink->w = inlink->w;
> + outlink->h = inlink->h;
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
> + case MODE_INTERLEAVE_TOP:
> + outlink->w = inlink->w;
> + outlink->h = inlink->h;
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
> + case MODE_INTERLEAVE_BOTTOM:
> + outlink->w = inlink->w;
> + outlink->h = inlink->h;
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
>
These 4 cases are identical and could be combined.
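I.e. a single fall-through with the same body as in your patch (untested):

    case MODE_DROP_EVEN:
    case MODE_DROP_ODD:
    case MODE_INTERLEAVE_TOP:
    case MODE_INTERLEAVE_BOTTOM:
        outlink->w          = inlink->w;
        outlink->h          = inlink->h;
        outlink->frame_rate = av_mul_q(inlink->frame_rate, (AVRational){1, 2});
        outlink->time_base  = av_mul_q(inlink->time_base,  (AVRational){2, 1});
        break;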
> + case MODE_INTERLACE_X2:
> + outlink->w = inlink->w;
> + outlink->h = inlink->h;
> + reinterlace->preout_time_base.den *= 2;
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){2,1});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){1,2});
> + break;
> +
> + case MODE_MERGE_X2:
> + outlink->w = inlink->w;
> + outlink->h = 2 * inlink->h;
> + outlink->sample_aspect_ratio = av_mul_q(inlink->sample_aspect_ratio,
> av_make_q(2, 1));
> + outlink->frame_rate = inlink->frame_rate;
> + outlink->time_base = inlink->time_base;
> + break;
> +
> + case MODE_MERGE_BFF:
> + outlink->w = inlink->w;
> + outlink->h = 2 * inlink->h;
> + outlink->sample_aspect_ratio = av_mul_q(inlink->sample_aspect_ratio,
> av_make_q(2, 1));
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
> + case MODE_MERGE_TFF:
> + outlink->w = inlink->w;
> + outlink->h = 2 * inlink->h;
> + outlink->sample_aspect_ratio = av_mul_q(inlink->sample_aspect_ratio,
> av_make_q(2, 1));
> + outlink->frame_rate = av_mul_q(inlink->frame_rate,
> (AVRational){1,2});
> + outlink->time_base = av_mul_q(inlink->time_base ,
> (AVRational){2,1});
> + break;
> +
>
Same with MODE_MERGE_BFF and MODE_MERGE_TFF.
> + default:
> + av_log(ctx, AV_LOG_VERBOSE, "invalid value for mode");
> + av_assert0(0);
> +
> + }
> +
> + int i;
> + for (i = 0; i < FF_ARRAY_ELEMS(standard_tbs); i++) {
> + if (!av_cmp_q(standard_tbs[i], outlink->time_base))
> + break;
> + }
> + if (i == FF_ARRAY_ELEMS(standard_tbs) || (reinterlace->flags &
> FLAG_EXACT_TB) )
> + outlink->time_base = reinterlace->preout_time_base;
> +
> +
> + if (reinterlace->flags & FLAG_VLPF || reinterlace->flags &
> FLAG_CVLPF) {
> +
> + if (reinterlace_mode != MODE_INTERLEAVE_TOP && reinterlace_mode
> != MODE_INTERLEAVE_BOTTOM) {
> + reinterlace->flags &= ~(FLAG_VLPF | FLAG_CVLPF);
> + } else {
> + reinterlace->lowpass_line = (reinterlace->flags & FLAG_VLPF)
> ? lowpass_line_c : lowpass_line_complex_c;
> +
> + if (reinterlace->bit_depth > 8) {
> + reinterlace->lowpass_line = (reinterlace->flags &
> FLAG_VLPF) ? lowpass_line_c_16 : lowpass_line_complex_c_16;
> + }
>
Maybe if/else here.
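Something like this, so lowpass_line does not get assigned twice:

    if (reinterlace->bit_depth > 8)
        reinterlace->lowpass_line = (reinterlace->flags & FLAG_VLPF)
                                  ? lowpass_line_c_16 : lowpass_line_complex_c_16;
    else
        reinterlace->lowpass_line = (reinterlace->flags & FLAG_VLPF)
                                  ? lowpass_line_c : lowpass_line_complex_c;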
> +
> +#if CONFIG_GPL
> + if (ARCH_X86) {
> + ff_reinterlace_init_x86(reinterlace);
> + }
> +#endif
> + }
> +
> + }
> +
> + return 0;
> +}
> +
> +static int filter_frame_plane(AVFilterContext *ctx, void *arg, int
> jobnr, int nb_jobs)
> +{
> + // jobnr is usually the plane number
> + ReInterlaceThreadData *rtd = arg;
> + ReInterlaceContext *reinterlace = rtd->reinterlace;
> + AVFrame *first = rtd->first;
> + AVFrame *second = rtd->second;
> + AVFrame *out = rtd->out;
> +
> + int plane = rtd->plane;
> + int reinterlace_mode = reinterlace->mode;
> +
> + int x = (1 == plane || 2 == plane) ? rtd->scale_w_plane12_factor : 1;
> + int y = (1 == plane || 2 == plane) ? rtd->scale_h_plane12_factor : 1;
> + int i, ls_offset;
> + int offset1, offset2, offset3, offset4;
> +
> + const AVPixFmtDescriptor *fmt_desc = av_pix_fmt_desc_get(out->format);
> + int clip_max = (1 << fmt_desc->comp[plane].depth) - 1;
> +
> + switch (reinterlace_mode) {
> + case MODE_MERGE:
> + av_image_copy_plane(out->data[plane], 2 * out->linesize[plane],
> + first->data[plane], first->linesize[plane], first->width / x,
> first->height / y);
> + av_image_copy_plane(out->data[plane] + out->linesize[plane], 2 *
> out->linesize[plane],
> + second->data[plane], second->linesize[plane], second->width /
> x, second->height / y);
> + break;
> +
> + case MODE_PAD:
> + ls_offset = (reinterlace->current_frame_index & 1) ? 0 :
> out->linesize[plane];
> + av_image_copy_plane(out->data[plane] + ls_offset, 2 *
> out->linesize[plane],
> + second->data[plane], second->linesize[plane], second->width /
> x, second->height / y);
> + av_image_copy_plane(out->data[plane] + out->linesize[plane] -
> ls_offset, 2 * out->linesize[plane],
> + reinterlace->black_vec[plane], second->linesize[plane],
> second->width / x, second->height / y);
> + break;
> +
> + case MODE_INTERLEAVE_BOTTOM:
> + case MODE_INTERLEAVE_TOP:
> + y = y * 2;
> +
> + if (reinterlace->flags & FLAG_VLPF || reinterlace->flags &
> FLAG_CVLPF) {
> +
> + int lines, cols;
> + AVFrame *from_frame;
> + uint8_t *from, *to;
> + int from_step, to_step;
> +
> + lines = (MODE_INTERLEAVE_TOP == reinterlace_mode) ? (2 *
> out->height / y + 1) / 2 : (2 * out->height / y + 0) / 2;
> + cols = out->width / x;
> + from_frame = first;
> + from = from_frame->data[plane];
> + to = out->data[plane];
> +
> + if (MODE_INTERLEAVE_BOTTOM == reinterlace_mode) {
> + from = from + from_frame->linesize[plane];
> + to = to + out->linesize[plane];
> + }
> +
> + from_step = 2 * from_frame->linesize[plane];
> + to_step = 2 * out->linesize[plane];
> +
> + // when i = lines - aka first line
> + reinterlace->lowpass_line(to, cols, from,
> from_frame->linesize[plane], 0, clip_max);
> + to += to_step;
> + from += from_step;
> +
> + int cvlfp = !!(reinterlace->flags & FLAG_CVLPF);
> + if (cvlfp) {
> + reinterlace->lowpass_line(to, cols, from,
> from_frame->linesize[plane], 0, clip_max);
> + to += to_step;
> + from += from_step;
> + }
> +
> + for (i = lines - 2 - 2 * cvlfp; i; i--) {
> + reinterlace->lowpass_line(to, cols, from,
> from_frame->linesize[plane], -from_frame->linesize[plane], clip_max);
> + to += to_step;
> + from += from_step;
> + }
> +
> + // when i == 1 - aka last line
> + reinterlace->lowpass_line(to, cols, from, 0,
> -from_frame->linesize[plane], clip_max);
> + to += to_step;
> + from += from_step;
> +
> + if (cvlfp) {
> + reinterlace->lowpass_line(to, cols, from, 0,
> -from_frame->linesize[plane], clip_max);
> + to += to_step;
> + from += from_step;
> + }
> +
> +
> + lines = (MODE_INTERLEAVE_BOTTOM == reinterlace_mode) ? ((2 *
> out->height / y) + 1) / 2 : (2 * out->height / y + 0) / 2;
> + cols = out->width / x;
> + from_frame = second;
> + from = from_frame->data[plane];
> + to = out->data[plane];
> +
> + if (MODE_INTERLEAVE_TOP == reinterlace_mode) {
> + from = from + from_frame->linesize[plane];
> + to = to + out->linesize[plane];
> + }
> +
> + from_step = 2 * from_frame->linesize[plane];
> + to_step = 2 * out->linesize[plane];
> +
> + // when i = lines
> + reinterlace->lowpass_line(to, cols, from,
> from_frame->linesize[plane], 0, clip_max);
> + to += to_step;
> + from += from_step;
> +
> + if (cvlfp) {
> + reinterlace->lowpass_line(to, cols, from,
> from_frame->linesize[plane], 0, clip_max);
> + to += to_step;
> + from += from_step;
> + }
> +
> +
> + for (i = lines - 2 - 2 * cvlfp; i; i--) {
> + reinterlace->lowpass_line(to, cols, from,
> from_frame->linesize[plane], -from_frame->linesize[plane], clip_max);
> + to += to_step;
> + from += from_step;
> + }
> +
> + // when i == 1
> + reinterlace->lowpass_line(to, cols, from, 0,
> -from_frame->linesize[plane], clip_max);
> + to += to_step;
> + from += from_step;
> +
> + if (cvlfp) {
> + reinterlace->lowpass_line(to, cols, from, 0,
> -from_frame->linesize[plane], clip_max);
> + to += to_step;
> + from += from_step;
> + }
>
This whole INTERLEAVE code block looks confusing. In tinterlace this part
is much easier to understand.
Maybe it's okay to just port that code, or at least do something closer to
it. But someone more experienced should give the direction.
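Just to illustrate the kind of structure I mean: a small helper that low-passes
one source frame into one field of the output could replace the two nearly
identical blocks. This is only a rough, untested sketch; the helper name, the
field/lines parameters and the edge handling are mine and would have to be
verified against your code and against tinterlace:

    /* low-pass the lines of src into field 0 (top) or 1 (bottom) of out */
    static void lowpass_one_field(ReInterlaceContext *s, AVFrame *out,
                                  const AVFrame *src, int plane, int field,
                                  int lines, int cols, int clip_max)
    {
        const ptrdiff_t src_ls = src->linesize[plane];
        const uint8_t *from = src->data[plane] + field * src_ls;
        uint8_t       *to   = out->data[plane] + field * out->linesize[plane];
        /* the complex filter reads two lines above/below, so clamp two lines
         * at each edge instead of one */
        const int edge = 1 + !!(s->flags & FLAG_CVLPF);
        int i;

        for (i = 0; i < lines; i++) {
            ptrdiff_t mref = (i >= lines - edge) ? 0 :  src_ls;
            ptrdiff_t pref = (i <  edge)         ? 0 : -src_ls;
            s->lowpass_line(to, cols, from, mref, pref, clip_max);
            to   += 2 * out->linesize[plane];
            from += 2 * src_ls;
        }
    }

The INTERLEAVE case would then come down to two calls per plane, e.g. for
MODE_INTERLEAVE_TOP first into field 0 and second into field 1, and the other
way around for MODE_INTERLEAVE_BOTTOM.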
> +
> + } else {
> + offset1 = (MODE_INTERLEAVE_TOP == reinterlace_mode) ? 0 :
> out->linesize[plane];
> + offset2 = (MODE_INTERLEAVE_TOP == reinterlace_mode) ? 0 :
> first->linesize[plane];
> + offset3 = (MODE_INTERLEAVE_TOP == reinterlace_mode) ?
> out->linesize[plane] : 0;
> + offset4 = (MODE_INTERLEAVE_TOP == reinterlace_mode) ?
> second->linesize[plane] : 0;
>
Please reverse the reinterlace_mode comparisons (reinterlace_mode ==
MODE_INTERLEAVE_TOP).
Here and everywhere else in this patch.
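I.e. instead of

    offset1 = (MODE_INTERLEAVE_TOP == reinterlace_mode) ? 0 : out->linesize[plane];

please write

    offset1 = (reinterlace_mode == MODE_INTERLEAVE_TOP) ? 0 : out->linesize[plane];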
> +
> + av_image_copy_plane(out->data[plane] + offset1, 2 *
> out->linesize[plane],
> + first->data[plane] + offset2, 2 * first->linesize[plane],
> + first->width / x, first->height / y);
> + av_image_copy_plane(out->data[plane] + offset3, 2 *
> out->linesize[plane],
> + second->data[plane] + offset4, 2 *
> second->linesize[plane],
> + second->width / x, second->height / y);
> + }
> + break;
> +
> + case MODE_INTERLACE_X2:
> + y = y * 2;
> +
> + offset1 = 0; offset2 = 0;
> + offset3 = out->linesize[plane];
> + offset4 = second->linesize[plane];
> +
> + if (second->interlaced_frame && second->top_field_first) {
> + offset1 = out->linesize[plane];
> + offset2 = first->linesize[plane];
> + offset3 = 0; offset4 = 0;
> + }
> +
> + av_image_copy_plane(out->data[plane] + offset1, 2 *
> out->linesize[plane],
> + first->data[plane] + offset2, 2 * first->linesize[plane],
> + first->width / x, first->height / y);
> + av_image_copy_plane(out->data[plane] + offset3, 2 *
> out->linesize[plane],
> + second->data[plane] + offset4, 2 * second->linesize[plane],
> + second->width / x, second->height / y);
> + break;
> +
> + case MODE_MERGE_X2:
> + if (IS_ODD(reinterlace->current_frame_index - 1)) {
> + av_image_copy_plane(out->data[plane], 2 *
> out->linesize[plane],
> + second->data[plane], second->linesize[plane],
> second->width / x, second->height / y);
> + av_image_copy_plane(out->data[plane] + out->linesize[plane],
> 2 * out->linesize[plane],
> + first->data[plane], first->linesize[plane], first->width
> / x, first->height / y);
> + } else {
> + av_image_copy_plane(out->data[plane], 2 *
> out->linesize[plane],
> + first->data[plane], first->linesize[plane], first->width
> / x, first->height / y);
> + av_image_copy_plane(out->data[plane] + out->linesize[plane],
> 2 * out->linesize[plane],
> + second->data[plane], second->linesize[plane],
> second->width / x, second->height / y);
> + }
> + break;
> +
> + case MODE_MERGE_TFF:
> + case MODE_MERGE_BFF:
> + offset1 = (MODE_MERGE_TFF == reinterlace_mode) ? 0 :
> out->linesize[plane];
> + offset2 = (MODE_MERGE_TFF == reinterlace_mode) ?
> out->linesize[plane] : 0;
> +
> + av_image_copy_plane(out->data[plane] + offset1, 2 *
> out->linesize[plane],
> + first->data[plane], first->linesize[plane], first->width / x,
> first->height / y);
> + av_image_copy_plane(out->data[plane] + offset2, 2 *
> out->linesize[plane],
> + second->data[plane], second->linesize[plane], second->width /
> x, second->height / y);
> + break;
> +
> + default:
> + break;
> + }
> +
> + return 0;
> +}
> +
> +static ReInterlaceThreadData *get_ReInterlaceThreadData(AVFrame *out,
> AVFrame *first, AVFrame *second,
> + int plane, ReInterlaceContext *reinterlace,
> + int scale_w_plane12_factor,
> + int scale_h_plane12_factor)
> +{
> + ReInterlaceThreadData *rtd = &((ReInterlaceThreadData
> *)reinterlace->thread_data)[plane];
> +
> + if (!rtd)
> + return rtd;
> +
> + rtd->out = out;
> + rtd->first = first;
> + rtd->second = second;
> + rtd->plane = plane;
> + rtd->reinterlace = reinterlace;
> + rtd->scale_h_plane12_factor = scale_h_plane12_factor;
> + rtd->scale_w_plane12_factor = scale_w_plane12_factor;
> +
> + return rtd;
> +}
> +
> +static void copy_all_planes(AVFilterContext *ctx,
> + ReInterlaceContext *reinterlace,
> + const AVPixFmtDescriptor *desc,
> + AVFrame *out, AVFrame *first, AVFrame *second)
> +{
> + int scale_w_plane12_factor = 1 << desc->log2_chroma_w;
> + int scale_h_plane12_factor = 1 << desc->log2_chroma_h;
> + int plane;
> +
> + for (plane = 0; plane < desc->nb_components; plane++) {
> +
> + ReInterlaceThreadData *rtd = get_ReInterlaceThreadData(out,
> first, second,
> + plane, reinterlace, scale_w_plane12_factor,
> scale_h_plane12_factor);
> +
> + //ctx->internal->execute(ctx, filter_frame_plane, rtd, NULL,
> FFMIN(desc->nb_components, ctx->graph->nb_threads));
> + filter_frame_plane(ctx, rtd, plane, desc->nb_components);
> + }
> +}
> +
> +
> +
> +static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> +{
> + AVFilterContext *ctx = inlink->dst;
> + ReInterlaceContext *reinterlace = ctx->priv;
> + AVFilterLink *outlink = ctx->outputs[0];
> + const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(outlink->format);
> + AVFrame *out, *first, *second;
> + int ret;
> +
> + int reinterlace_mode = reinterlace->mode;
> +
> + av_frame_free(&(reinterlace->prev_frame));
> + reinterlace->prev_frame = reinterlace->current_frame;
> + reinterlace->current_frame = in;
> + reinterlace->current_frame_index++;
> +
> + // we process two frames at a time, thus only even frame indexes are
> considered
> + if (IS_ODD(reinterlace->current_frame_index)) {
> + if (MODE_PAD == reinterlace_mode || MODE_MERGE_X2 ==
> reinterlace_mode
> + || MODE_INTERLACE_X2 == reinterlace_mode || MODE_MERGE_BFF ==
> reinterlace_mode
> + || MODE_MERGE_TFF == reinterlace_mode) {
> + // continue
> + } else {
> + return 0;
> + }
> + }
> +
> + first = reinterlace->prev_frame;
> + second = reinterlace->current_frame;
> +
> + switch (reinterlace_mode) {
> + case MODE_DROP_EVEN:
> + case MODE_DROP_ODD:
> + out = (reinterlace_mode == MODE_DROP_ODD) ?
> reinterlace->current_frame : reinterlace->prev_frame;
> + out = av_frame_clone(out);
> +
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + //out->pts = out->pts >> 1;
> + out->pts = av_rescale_q(out->pts, reinterlace->preout_time_base,
> outlink->time_base);
> + ret = ff_filter_frame(outlink, out);
> + break;
> +
> + case MODE_MERGE:
> + case MODE_MERGE_X2:
> + case MODE_MERGE_TFF:
> + case MODE_MERGE_BFF:
> + if (MODE_MERGE_X2 == reinterlace_mode && 1 ==
> reinterlace->current_frame_index)
> + return 0;
> +
> + if (MODE_MERGE_BFF == reinterlace_mode || MODE_MERGE_TFF ==
> reinterlace_mode) {
> + if (!first)
> + return 0;
> +
> + if (reinterlace->skip_next_frame) {
> + reinterlace->skip_next_frame = 0;
> + return 0;
> + }
> +
> + if (1 == first->interlaced_frame && 1 ==
> second->interlaced_frame)
> + {
> + if (first->top_field_first == second->top_field_first)
> + return 0;
> + else if (MODE_MERGE_BFF == reinterlace->mode &&
> first->top_field_first != 0)
> + return 0;
> + else if (MODE_MERGE_TFF == reinterlace->mode &&
> first->top_field_first != 1)
> + return 0;
> + }
> + }
> +
> + out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
> +
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + av_frame_copy_props(out, first);
> + out->sample_aspect_ratio = av_mul_q(first->sample_aspect_ratio,
> av_make_q(2, 1));
> + out->interlaced_frame = 1;
> + out->top_field_first = MODE_MERGE_BFF == reinterlace_mode
> ? 0 : 1;
> + out->height = outlink->h;
> +
> + //if (MODE_MERGE == reinterlace_mode)
> + // out->pts = out->pts >> 1;
> +
> + copy_all_planes(ctx, reinterlace, desc, out, first, second);
> +
> + if (MODE_MERGE_BFF == reinterlace_mode || MODE_MERGE_TFF ==
> reinterlace_mode)
> + reinterlace->skip_next_frame = 1;
> +
> + out->pts = av_rescale_q(out->pts, reinterlace->preout_time_base,
> outlink->time_base);
> + ret = ff_filter_frame(outlink, out);
> + break;
> +
> + case MODE_PAD:
> + out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
> +
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + av_frame_copy_props(out, second);
> + out->sample_aspect_ratio = av_mul_q(second->sample_aspect_ratio,
> av_make_q(2, 1));
> + out->height = outlink->h;
> +
> + copy_all_planes(ctx, reinterlace, desc, out, first, second);
> +
> + out->pts = av_rescale_q(out->pts, reinterlace->preout_time_base,
> outlink->time_base);
> + ret = ff_filter_frame(outlink, out);
> + break;
> +
> + case MODE_INTERLEAVE_BOTTOM:
> + case MODE_INTERLEAVE_TOP:
> + out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
> +
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + av_frame_copy_props(out, first);
> +
> + copy_all_planes(ctx, reinterlace, desc, out, first, second);
> +
> + //out->pts = out->pts >> 1;
> + out->interlaced_frame = 1;
> + out->top_field_first = (MODE_INTERLEAVE_TOP == reinterlace_mode)
> ? 1 : 0;
> +
> + out->pts = av_rescale_q(out->pts, reinterlace->preout_time_base,
> outlink->time_base);
> + ret = ff_filter_frame(outlink, out);
> + break;
> +
> + case MODE_INTERLACE_X2:
> + if (1 == reinterlace->current_frame_index)
> + return 0;
> +
> + out = av_frame_clone(first);
> +
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + // output first frame
> + out->pts = (AV_NOPTS_VALUE != first->pts ) ? first->pts * 2 :
> AV_NOPTS_VALUE;
> + out->interlaced_frame = 1;
> + out->pts = av_rescale_q(out->pts, reinterlace->preout_time_base,
> outlink->time_base);
> + ret = ff_filter_frame(outlink, out);
> +
> + if (ret < 0)
> + return ret;
> +
> + // output the second frame interlaced with first frame
> + out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
> +
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + av_frame_copy_props(out, second);
> + out->interlaced_frame = 1;
> + out->top_field_first = !out->top_field_first;
> + out->pts = first->pts + second->pts;
> + out->pts = (AV_NOPTS_VALUE == first->pts || AV_NOPTS_VALUE ==
> second->pts) ? AV_NOPTS_VALUE : out->pts;
> +
> + copy_all_planes(ctx, reinterlace, desc, out, first, second);
> +
> + out->pts = av_rescale_q(out->pts, reinterlace->preout_time_base,
> outlink->time_base);
> + ret = ff_filter_frame(outlink, out);
> + break;
> +
> + default:
> + av_assert0(0);
> + }
> +
> +
> +
> + return ret;
> +}
> +
> +static av_cold void uninit(AVFilterContext *ctx)
> +{
> + ReInterlaceContext *reinterlace = ctx->priv;
> + int i;
> +
> + for (i = 0; i < 4; i++)
> + if (reinterlace->black_vec[i])
> + av_free(reinterlace->black_vec[i]);
> +
> + av_free(reinterlace->thread_data);
> +
> +}
> +
> +static const AVFilterPad reinterlace_inputs[] = {
> + {
> + .name = "default",
> + .type = AVMEDIA_TYPE_VIDEO,
> + .filter_frame = filter_frame,
> + },
> + { NULL }
> +};
> +
> +static const AVFilterPad reinterlace_outputs[] = {
> + {
> + .name = "default",
> + .type = AVMEDIA_TYPE_VIDEO,
> + .config_props = config_out_props,
> + },
> + { NULL }
> +};
> +
> +AVFilter ff_vf_reinterlace = {
> + .name = "reinterlace",
> + .description = NULL_IF_CONFIG_SMALL("Various interlace frame
> manipulations"),
> + .priv_size = sizeof(ReInterlaceContext),
> + .init = init,
> + .uninit = uninit,
> + .query_formats = query_formats,
> + .inputs = reinterlace_inputs,
> + .outputs = reinterlace_outputs,
> + .priv_class = &reinterlace_class,
> + .flags = AVFILTER_FLAG_SLICE_THREADS |
> AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC,
> +};
> \ No newline at end of file
> diff --git a/libavfilter/x86/Makefile b/libavfilter/x86/Makefile
> index 4d4c5e5..f8b9256 100644
> --- a/libavfilter/x86/Makefile
> +++ b/libavfilter/x86/Makefile
> @@ -16,6 +16,7 @@ OBJS-$(CONFIG_NOISE_FILTER) +=
> x86/vf_noise.o
> OBJS-$(CONFIG_PP7_FILTER) += x86/vf_pp7_init.o
> OBJS-$(CONFIG_PSNR_FILTER) += x86/vf_psnr_init.o
> OBJS-$(CONFIG_PULLUP_FILTER) += x86/vf_pullup_init.o
> +OBJS-$(CONFIG_REINTERLACE_FILTER) += x86/vf_reinterlace_init.o
> OBJS-$(CONFIG_REMOVEGRAIN_FILTER) += x86/vf_removegrain_init.o
> OBJS-$(CONFIG_SHOWCQT_FILTER) += x86/avf_showcqt_init.o
> OBJS-$(CONFIG_SPP_FILTER) += x86/vf_spp.o
> diff --git a/libavfilter/x86/vf_reinterlace_init.c
> b/libavfilter/x86/vf_reinterlace_init.c
> new file mode 100644
> index 0000000..5abbf1f
> --- /dev/null
> +++ b/libavfilter/x86/vf_reinterlace_init.c
> @@ -0,0 +1,101 @@
> +/*
> + * Copyright (C) 2014 Kieran Kunhya <kierank at obe.tv>
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
> + */
> +
> +#include "libavutil/attributes.h"
> +#include "libavutil/cpu.h"
> +#include "libavutil/internal.h"
> +#include "libavutil/mem.h"
> +#include "libavutil/x86/asm.h"
> +#include "libavutil/x86/cpu.h"
> +
> +#include "libavfilter/reinterlace.h"
> +
> +#if CONFIG_GPL
> +
> +void ff_lowpass_line_sse2(uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +void ff_lowpass_line_avx (uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +void ff_lowpass_line_avx2 (uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +
> +void ff_lowpass_line_16_sse2(uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +void ff_lowpass_line_16_avx (uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +void ff_lowpass_line_16_avx2 (uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +
> +void ff_lowpass_line_complex_sse2(uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +
> +void ff_lowpass_line_complex_12_sse2(uint8_t *dstp, ptrdiff_t linesize,
> + const uint8_t *srcp, ptrdiff_t mref,
> + ptrdiff_t pref, int clip_max);
> +
> +av_cold void ff_reinterlace_init_x86(ReInterlaceContext *reinterlace)
> +{
> + int cpu_flags = av_get_cpu_flags();
> +
> + if (reinterlace->bit_depth > 8) {
> + if (EXTERNAL_SSE2(cpu_flags)) {
> + if (!(reinterlace->flags & FLAG_CVLPF))
> + reinterlace->lowpass_line = ff_lowpass_line_16_sse2;
> + else
> + reinterlace->lowpass_line = ff_lowpass_line_complex_12_sse2;
> + }
> + if (EXTERNAL_AVX(cpu_flags))
> + if (!(reinterlace->flags & FLAG_CVLPF))
> + reinterlace->lowpass_line = ff_lowpass_line_16_avx;
> + if (EXTERNAL_AVX2_FAST(cpu_flags)) {
> + if (!(reinterlace->flags & FLAG_CVLPF)) {
> + reinterlace->lowpass_line = ff_lowpass_line_16_avx2;
> + }
> + }
> + } else {
> + if (EXTERNAL_SSE2(cpu_flags)) {
> + if (!(reinterlace->flags & FLAG_CVLPF))
> + reinterlace->lowpass_line = ff_lowpass_line_sse2;
> + else
> + reinterlace->lowpass_line = ff_lowpass_line_complex_sse2;
> + }
> + if (EXTERNAL_AVX(cpu_flags))
> + if (!(reinterlace->flags & FLAG_CVLPF))
> + reinterlace->lowpass_line = ff_lowpass_line_avx;
> + if (EXTERNAL_AVX2_FAST(cpu_flags)) {
> + if (!(reinterlace->flags & FLAG_CVLPF)) {
> + reinterlace->lowpass_line = ff_lowpass_line_avx2;
> + }
> + }
> + }
> +}
> +
> +#else
> +
> +av_cold void ff_reinterlace_init_x86(ReInterlaceContext *s) {}
> +
> +#endif
> \ No newline at end of file
> --
> 2.7.4
>
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>