[FFmpeg-cvslog] lavfi: port IVTC filters from vapoursynth.
Clément Bœsch
git at videolan.org
Sun Apr 14 16:00:46 CEST 2013
ffmpeg | branch: master | Clément Bœsch <ubitux at gmail.com> | Tue Dec 11 00:53:10 2012 +0100| [7a92ec93c6507bd3dea4563ec7a0e3679034fc57] | committer: Clément Bœsch
lavfi: port IVTC filters from vapoursynth.
> http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=7a92ec93c6507bd3dea4563ec7a0e3679034fc57
---
Changelog | 1 +
doc/filters.texi | 363 ++++++++++++++++
libavfilter/Makefile | 2 +
libavfilter/allfilters.c | 2 +
libavfilter/version.h | 2 +-
libavfilter/vf_decimate.c | 398 +++++++++++++++++
libavfilter/vf_fieldmatch.c | 986 +++++++++++++++++++++++++++++++++++++++++++
7 files changed, 1753 insertions(+), 1 deletion(-)
diff --git a/Changelog b/Changelog
index d6fdbee..c5383ff 100644
--- a/Changelog
+++ b/Changelog
@@ -22,6 +22,7 @@ version <next>:
- telecine filter
- new interlace filter
- smptehdbars source
+- inverse telecine filters (fieldmatch and decimate)
version 1.2:
diff --git a/doc/filters.texi b/doc/filters.texi
index cd1ba7e..b4300fe 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -2383,6 +2383,46 @@ curves=vintage
@end example
@end itemize
+@anchor{decimate}
+@section decimate
+
+Drop duplicated frames at regular intervals.
+
+The filter accepts the following options:
+
+@table @option
+@item cycle
+Set the number of frames from which one will be dropped. Setting this to
+@var{N} means one frame in every batch of @var{N} frames will be dropped.
+Default is @code{5}.
+
+@item dupthresh
+Set the threshold for duplicate detection. If the difference metric for a frame
+is less than or equal to this value, then it is declared a duplicate. Default
+is @code{1.1}.
+
+@item scthresh
+Set scene change threshold. Default is @code{15}.
+
+@item blockx
+@item blocky
+Set the size of the x and y-axis blocks used during metric calculations.
+Larger blocks give better noise suppression, but also give worse detection of
+small movements. Must be a power of two. Default is @code{32}.
+
+@item ppsrc
+Mark the main input as a pre-processed input and activate the clean source
+input stream. This allows the input to be pre-processed with various filters to
+help the metrics calculation while keeping the frame selection lossless. When
+set to @code{1}, the first stream is the pre-processed input, and the second
+stream is the clean source from which the kept frames are chosen. Default is
+@code{0}.
+
+@item chroma
+Set whether or not chroma is considered in the metric calculations. Default is
+@code{1}.
+@end table
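To make the interaction of @option{cycle}, @option{dupthresh} and @option{scthresh} concrete, here is a minimal Python sketch of the per-cycle drop selection (illustrative only; the function name `pick_drop` is hypothetical, and the actual logic lives in `filter_frame()` in vf_decimate.c):

```python
def pick_drop(cycle_metrics, dupthresh, scthresh):
    """Select the index of the frame to drop in one cycle.

    cycle_metrics: list of (maxbdiff, totdiff) pairs, one per frame,
    mirroring the metrics the filter computes against the previous frame.
    """
    scpos = duppos = -1
    lowest = 0
    for i, (maxbdiff, totdiff) in enumerate(cycle_metrics):
        if totdiff > scthresh:              # scene change candidate
            scpos = i
        if maxbdiff < cycle_metrics[lowest][0]:
            lowest = i                      # least-changing frame so far
    if cycle_metrics[lowest][0] < dupthresh:
        duppos = lowest                     # a real duplicate was found
    # prefer dropping at the scene change when no duplicate exists
    return scpos if (scpos >= 0 and duppos < 0) else lowest

# one cycle of 5 frames: frame 2 barely differs from its predecessor
metrics = [(50, 900), (60, 950), (3, 40), (55, 920), (58, 940)]
print(pick_drop(metrics, dupthresh=10, scthresh=10000))  # -> 2
```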
+
@section mpdecimate
Drop frames that do not differ greatly from the previous frame in
@@ -3047,6 +3087,328 @@ Specify whether to extract the top (if the value is @code{0} or
@code{bottom}).
@end table
+@section fieldmatch
+
+Field matching filter for inverse telecine. It is meant to reconstruct the
+progressive frames from a telecined stream. The filter does not drop duplicated
+frames, so to achieve a complete inverse telecine @code{fieldmatch} needs to be
+followed by a decimation filter such as @ref{decimate} in the filtergraph.
+
+The separation of the field matching and the decimation is notably motivated by
+the possibility of inserting a de-interlacing filter fallback between the two.
+If the source has mixed telecined and real interlaced content,
+@code{fieldmatch} will not be able to match fields for the interlaced parts.
+But these remaining combed frames will be marked as interlaced, and thus can be
+de-interlaced by a later filter such as @ref{yadif} before decimation.
+
+In addition to the various configuration options, @code{fieldmatch} can take an
+optional second stream, activated through the @option{ppsrc} option. If
+enabled, the frames reconstruction will be based on the fields and frames from
+this second stream. This allows the first input to be pre-processed in order to
+help the various algorithms of the filter, while keeping the output lossless
+(assuming the fields are matched properly). Typically, a field-aware denoiser,
+or brightness/contrast adjustments can help.
+
+Note that this filter uses the same algorithms as TIVTC/TFM (AviSynth project)
+and VIVTC/VFM (VapourSynth project). The latter is a light clone of TFM, on
+which @code{fieldmatch} is based. While the semantics and usage are very
+close, some behaviour and option names can differ.
+
+The filter accepts the following options:
+
+@table @option
+@item order
+Specify the assumed field order of the input stream. Available values are:
+
+@table @samp
+@item auto
+Auto detect parity (use FFmpeg's internal parity value).
+@item bff
+Assume bottom field first.
+@item tff
+Assume top field first.
+@end table
+
+Note that it is sometimes recommended not to trust the parity announced by the
+stream.
+
+Default value is @var{auto}.
+
+@item mode
+Set the matching mode or strategy to use. @option{pc} mode is the safest in the
+sense that it won't risk creating jerkiness due to duplicate frames when
+possible, but if there are bad edits or blended fields it will end up
+outputting combed frames when a good match might actually exist. On the other
+hand, @option{pcn_ub} mode is the most risky in terms of creating jerkiness,
+but will almost always find a good frame if there is one. The other values are
+all somewhere in between @option{pc} and @option{pcn_ub} in terms of risking
+jerkiness and creating duplicate frames versus finding good matches in sections
+with bad edits, orphaned fields, blended fields, etc.
+
+More details about p/c/n/u/b are available in the @ref{p/c/n/u/b meaning} section.
+
+Available values are:
+
+@table @samp
+@item pc
+2-way matching (p/c)
+@item pc_n
+2-way matching, and trying 3rd match if still combed (p/c + n)
+@item pc_u
+2-way matching, and trying 3rd match (same order) if still combed (p/c + u)
+@item pc_n_ub
+2-way matching, trying 3rd match if still combed, and trying 4th/5th matches if
+still combed (p/c + n + u/b)
+@item pcn
+3-way matching (p/c/n)
+@item pcn_ub
+3-way matching, and trying 4th/5th matches if all 3 of the original matches are
+detected as combed (p/c/n + u/b)
+@end table
+
+The parentheses at the end indicate the matches that would be used for that
+mode assuming @option{order}=@var{tff} (and @option{field} on @var{auto} or
+@var{top}).
+
+In terms of speed @option{pc} mode is by far the fastest and @option{pcn_ub} is
+the slowest.
+
+Default value is @var{pc_n}.
+
+@item ppsrc
+Mark the main input stream as a pre-processed input, and enable the secondary
+input stream as the clean source to pick the fields from. See the filter
+introduction for more details. It is similar to the @option{clip2} feature from
+VFM/TFM.
+
+Default value is @code{0} (disabled).
+
+@item field
+Set the field to match from. It is recommended to set this to the same value as
+@option{order} unless you experience matching failures with that setting. In
+certain circumstances changing the field that is used to match from can have a
+large impact on matching performance. Available values are:
+
+@table @samp
+@item auto
+Automatic (same value as @option{order}).
+@item bottom
+Match from the bottom field.
+@item top
+Match from the top field.
+@end table
+
+Default value is @var{auto}.
+
+@item mchroma
+Set whether or not chroma is included during the match comparisons. In most
+cases it is recommended to leave this enabled. You should set this to @code{0}
+only if your clip has bad chroma problems such as heavy rainbowing or other
+artifacts. Setting this to @code{0} could also be used to speed things up at
+the cost of some accuracy.
+
+Default value is @code{1}.
+
+@item y0
+@item y1
+These define an exclusion band which excludes the lines between @option{y0} and
+@option{y1} from being included in the field matching decision. An exclusion
+band can be used to ignore subtitles, a logo, or other things that may
+interfere with the matching. @option{y0} sets the starting scan line and
+@option{y1} sets the ending line; all lines in between @option{y0} and
+@option{y1} (including @option{y0} and @option{y1}) will be ignored. Setting
+@option{y0} and @option{y1} to the same value will disable the feature.
+@option{y0} and @option{y1} default to @code{0}.
+
+@item scthresh
+Set the scene change detection threshold as a percentage of maximum change on
+the luma plane. Good values are in the @code{[8.0, 14.0]} range. Scene change
+detection is only relevant when @option{combmatch}=@var{sc}. The range for
+@option{scthresh} is @code{[0.0, 100.0]}.
+
+Default value is @code{12.0}.
+
+@item combmatch
+When @option{combmatch} is not @var{none}, @code{fieldmatch} will take into
+account the combed scores of matches when deciding what match to use as the
+final match. Available values are:
+
+@table @samp
+@item none
+No final matching based on combed scores.
+@item sc
+Combed scores are only used when a scene change is detected.
+@item full
+Use combed scores all the time.
+@end table
+
+Default is @var{sc}.
+
+@item combdbg
+Force @code{fieldmatch} to calculate the combed metrics for certain matches and
+print them. This setting is known as @option{micout} in TFM/VFM vocabulary.
+Available values are:
+
+@table @samp
+@item none
+No forced calculation.
+@item pcn
+Force p/c/n calculations.
+@item pcnub
+Force p/c/n/u/b calculations.
+@end table
+
+Default value is @var{none}.
+
+@item cthresh
+This is the area combing threshold used for combed frame detection. This
+essentially controls how "strong" or "visible" combing must be to be detected.
+Larger values mean combing must be more visible and smaller values mean combing
+can be less visible or strong and still be detected. Valid settings are from
+@code{-1} (every pixel will be detected as combed) to @code{255} (no pixel will
+be detected as combed). This is basically a pixel difference value. A good
+range is @code{[8, 12]}.
+
+Default value is @code{9}.
+
+@item chroma
+Set whether or not chroma is considered in the combed frame decision. Only
+disable this if your source has chroma problems (rainbowing, etc.) that are
+causing problems for the combed frame detection with chroma enabled. Actually,
+using @option{chroma}=@var{0} is usually more reliable, except for the case
+where there is chroma-only combing in the source.
+
+Default value is @code{0}.
+
+@item blockx
+@item blocky
+Respectively set the x-axis and y-axis size of the window used during combed
+frame detection. This has to do with the size of the area in which
+@option{combpel} pixels are required to be detected as combed for a frame to be
+declared combed. See the @option{combpel} parameter description for more info.
+Possible values are any number that is a power of 2 starting at 4 and going up
+to 512.
+
+Default value is @code{16}.
+
+@item combpel
+The number of combed pixels inside any of the @option{blocky} by
+@option{blockx} size blocks on the frame for the frame to be detected as
+combed. While @option{cthresh} controls how "visible" the combing must be, this
+setting controls "how much" combing there must be in any localized area (a
+window defined by the @option{blockx} and @option{blocky} settings) on the
+frame. Minimum value is @code{0} and maximum is @code{blocky x blockx} (at
+which point no frames will ever be detected as combed). This setting is known
+as @option{MI} in TFM/VFM vocabulary.
+
+Default value is @code{80}.
+@end table
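The interplay of @option{blockx}, @option{blocky} and @option{combpel} can be sketched in Python (illustrative only: `frame_is_combed` is a hypothetical name, and for simplicity this version scans non-overlapping blocks, whereas the filter examines sliding windows):

```python
def frame_is_combed(comb_mask, blockx, blocky, combpel):
    """A frame is declared combed if any blockx*blocky window contains at
    least `combpel` pixels flagged as combed by the per-pixel cthresh test."""
    h = len(comb_mask)
    w = len(comb_mask[0])
    for by in range(0, h, blocky):
        for bx in range(0, w, blockx):
            count = sum(comb_mask[y][x]
                        for y in range(by, min(by + blocky, h))
                        for x in range(bx, min(bx + blockx, w)))
            if count >= combpel:
                return True
    return False

# 8x8 frame where one 4x4 corner is fully combed (16 flagged pixels)
mask = [[1 if (x < 4 and y < 4) else 0 for x in range(8)] for y in range(8)]
print(frame_is_combed(mask, blockx=4, blocky=4, combpel=12))  # -> True
```

Raising @option{combpel} above the number of flagged pixels in the worst window makes the frame pass as non-combed, which is why the maximum value (blocky x blockx) disables detection entirely.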
+
+@anchor{p/c/n/u/b meaning}
+@subsection p/c/n/u/b meaning
+
+@subsubsection p/c/n
+
+We assume the following telecined stream:
+
+@example
+Top fields:     1 2 2 3 4
+Bottom fields:  1 2 3 4 4
+@end example
+
+The numbers correspond to the progressive frame the fields relate to. Here, the
+first two frames are progressive, the 3rd and 4th are combed, and so on.
+
+When @code{fieldmatch} is configured to run a matching from bottom
+(@option{field}=@var{bottom}), this is how this input stream gets transformed:
+
+@example
+Input stream:
+ T 1 2 2 3 4
+ B 1 2 3 4 4 <-- matching reference
+
+Matches: c c n n c
+
+Output stream:
+ T 1 2 3 4 4
+ B 1 2 3 4 4
+@end example
+
+As a result of the field matching, we can see that some frames get duplicated.
+To perform a complete inverse telecine, you need to rely on a decimation filter
+after this operation. See for instance the @ref{decimate} filter.
+
+The same operation now matching from top fields (@option{field}=@var{top})
+looks like this:
+
+@example
+Input stream:
+ T 1 2 2 3 4 <-- matching reference
+ B 1 2 3 4 4
+
+Matches: c c p p c
+
+Output stream:
+ T 1 2 2 3 4
+ B 1 2 2 3 4
+@end example
+
+In these examples, we can see what @var{p}, @var{c} and @var{n} mean;
+basically, they refer to the frame and field of the opposite parity:
+
+@itemize
+@item @var{p} matches the field of the opposite parity in the previous frame
+@item @var{c} matches the field of the opposite parity in the current frame
+@item @var{n} matches the field of the opposite parity in the next frame
+@end itemize
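The p/c/n rules can be modelled with a short Python sketch (an illustrative model with hypothetical names, not the filter's implementation); it reproduces the bottom-matching example from this section:

```python
def apply_matches(top, bottom, matches, reference="bottom"):
    """Rebuild frames from a telecined field stream given p/c/n matches.

    The reference field of each frame is kept as-is; the opposite-parity
    field is taken from the previous (p), current (c) or next (n) frame.
    """
    off = {"p": -1, "c": 0, "n": 1}
    opp = top if reference == "bottom" else bottom
    ref = bottom if reference == "bottom" else top
    out_ref, out_opp = [], []
    for i, m in enumerate(matches):
        out_ref.append(ref[i])
        out_opp.append(opp[i + off[m]])
    # return (top_fields, bottom_fields) of the output stream
    return (out_opp, out_ref) if reference == "bottom" else (out_ref, out_opp)

top    = [1, 2, 2, 3, 4]
bottom = [1, 2, 3, 4, 4]
print(apply_matches(top, bottom, "ccnnc"))  # -> ([1, 2, 3, 4, 4], [1, 2, 3, 4, 4])
```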
+
+@subsubsection u/b
+
+The @var{u} and @var{b} matches are a bit special in the sense that they match
+from the opposite parity flag. In the following examples, we assume that we are
+currently matching the 2nd frame (Top:2, Bottom:2). According to the match, an
+'x' is placed above and below each matched field.
+
+With bottom matching (@option{field}=@var{bottom}):
+@example
+Match: c p n b u
+
+ x x x x x
+ Top 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2
+ Bottom 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3
+ x x x x x
+
+Output frames:
+ 2 1 2 2 2
+ 2 2 2 1 3
+@end example
+
+With top matching (@option{field}=@var{top}):
+@example
+Match: c p n b u
+
+ x x x x x
+ Top 1 2 2 1 2 2 1 2 2 1 2 2 1 2 2
+ Bottom 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3
+ x x x x x
+
+Output frames:
+ 2 2 2 1 2
+ 2 1 3 2 2
+@end example
+
+@subsection Examples
+
+Simple IVTC of a top field first telecined stream:
+@example
+fieldmatch=order=tff:combmatch=none, decimate
+@end example
+
+Advanced IVTC, with fallback on @ref{yadif} for still combed frames:
+@example
+fieldmatch=order=tff:combmatch=full, yadif=deint=interlaced, decimate
+@end example
+
@section fieldorder
Transform the field order of the input video.
@@ -5670,6 +6032,7 @@ Flip the input video vertically.
ffmpeg -i in.avi -vf "vflip" out.avi
@end example
+@anchor{yadif}
@section yadif
Deinterlace the input video ("yadif" means "yet another deinterlacing
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 783ff00..949972d 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -109,6 +109,7 @@ OBJS-$(CONFIG_COPY_FILTER) += vf_copy.o
OBJS-$(CONFIG_CROP_FILTER) += vf_crop.o
OBJS-$(CONFIG_CROPDETECT_FILTER) += vf_cropdetect.o
OBJS-$(CONFIG_CURVES_FILTER) += vf_curves.o
+OBJS-$(CONFIG_DECIMATE_FILTER) += vf_decimate.o
OBJS-$(CONFIG_DELOGO_FILTER) += vf_delogo.o
OBJS-$(CONFIG_DESHAKE_FILTER) += vf_deshake.o
OBJS-$(CONFIG_DRAWBOX_FILTER) += vf_drawbox.o
@@ -116,6 +117,7 @@ OBJS-$(CONFIG_DRAWTEXT_FILTER) += vf_drawtext.o
OBJS-$(CONFIG_EDGEDETECT_FILTER) += vf_edgedetect.o
OBJS-$(CONFIG_FADE_FILTER) += vf_fade.o
OBJS-$(CONFIG_FIELD_FILTER) += vf_field.o
+OBJS-$(CONFIG_FIELDMATCH_FILTER) += vf_fieldmatch.o
OBJS-$(CONFIG_FIELDORDER_FILTER) += vf_fieldorder.o
OBJS-$(CONFIG_FORMAT_FILTER) += vf_format.o
OBJS-$(CONFIG_FRAMESTEP_FILTER) += vf_framestep.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index bba036c..95bd270 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -107,6 +107,7 @@ void avfilter_register_all(void)
REGISTER_FILTER(CROP, crop, vf);
REGISTER_FILTER(CROPDETECT, cropdetect, vf);
REGISTER_FILTER(CURVES, curves, vf);
+ REGISTER_FILTER(DECIMATE, decimate, vf);
REGISTER_FILTER(DELOGO, delogo, vf);
REGISTER_FILTER(DESHAKE, deshake, vf);
REGISTER_FILTER(DRAWBOX, drawbox, vf);
@@ -114,6 +115,7 @@ void avfilter_register_all(void)
REGISTER_FILTER(EDGEDETECT, edgedetect, vf);
REGISTER_FILTER(FADE, fade, vf);
REGISTER_FILTER(FIELD, field, vf);
+ REGISTER_FILTER(FIELDMATCH, fieldmatch, vf);
REGISTER_FILTER(FIELDORDER, fieldorder, vf);
REGISTER_FILTER(FORMAT, format, vf);
REGISTER_FILTER(FPS, fps, vf);
diff --git a/libavfilter/version.h b/libavfilter/version.h
index 5185b19..6cf3b05 100644
--- a/libavfilter/version.h
+++ b/libavfilter/version.h
@@ -29,7 +29,7 @@
#include "libavutil/avutil.h"
#define LIBAVFILTER_VERSION_MAJOR 3
-#define LIBAVFILTER_VERSION_MINOR 55
+#define LIBAVFILTER_VERSION_MINOR 56
#define LIBAVFILTER_VERSION_MICRO 100
#define LIBAVFILTER_VERSION_INT AV_VERSION_INT(LIBAVFILTER_VERSION_MAJOR, \
diff --git a/libavfilter/vf_decimate.c b/libavfilter/vf_decimate.c
new file mode 100644
index 0000000..55dd5a8
--- /dev/null
+++ b/libavfilter/vf_decimate.c
@@ -0,0 +1,398 @@
+/*
+ * Copyright (c) 2012 Fredrik Mellbin
+ * Copyright (c) 2013 Clément Bœsch
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/timestamp.h"
+#include "avfilter.h"
+#include "internal.h"
+
+#define INPUT_MAIN 0
+#define INPUT_CLEANSRC 1
+
+struct qitem {
+ AVFrame *frame;
+ int64_t maxbdiff;
+ int64_t totdiff;
+};
+
+typedef struct {
+ const AVClass *class;
+ struct qitem *queue; ///< window of cycle frames and the associated data diff
+ int fid; ///< current frame id in the queue
+ int filled; ///< 1 if the queue is filled, 0 otherwise
+ AVFrame *last; ///< last frame from the previous queue
+ int64_t frame_count; ///< output frame counter
+ AVFrame **clean_src; ///< frame queue for the clean source
+ int got_frame[2]; ///< frame request flag for each input stream
+ double ts_unit; ///< timestamp units for the output frames
+ uint32_t eof; ///< bitmask for end of stream
+ int hsub, vsub; ///< chroma subsampling values
+ int depth;
+ int nxblocks, nyblocks;
+ int bdiffsize;
+ int64_t *bdiffs;
+
+ /* options */
+ int cycle;
+ double dupthresh_flt;
+ double scthresh_flt;
+ int64_t dupthresh;
+ int64_t scthresh;
+ int blockx, blocky;
+ int ppsrc;
+ int chroma;
+} DecimateContext;
+
+#define OFFSET(x) offsetof(DecimateContext, x)
+#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
+
+static const AVOption decimate_options[] = {
+ { "cycle", "set the number of frames from which one will be dropped", OFFSET(cycle), AV_OPT_TYPE_INT, {.i64 = 5}, 2, 25, FLAGS },
+ { "dupthresh", "set duplicate threshold", OFFSET(dupthresh_flt), AV_OPT_TYPE_DOUBLE, {.dbl = 1.1}, 0, 100, FLAGS },
+ { "scthresh", "set scene change threshold", OFFSET(scthresh_flt), AV_OPT_TYPE_DOUBLE, {.dbl = 15.0}, 0, 100, FLAGS },
+ { "blockx", "set the size of the x-axis blocks used during metric calculations", OFFSET(blockx), AV_OPT_TYPE_INT, {.i64 = 32}, 4, 1<<9, FLAGS },
+ { "blocky", "set the size of the y-axis blocks used during metric calculations", OFFSET(blocky), AV_OPT_TYPE_INT, {.i64 = 32}, 4, 1<<9, FLAGS },
+ { "ppsrc", "mark main input as a pre-processed input and activate clean source input stream", OFFSET(ppsrc), AV_OPT_TYPE_INT, {.i64=0}, 0, 1, FLAGS },
+ { "chroma", "set whether or not chroma is considered in the metric calculations", OFFSET(chroma), AV_OPT_TYPE_INT, {.i64=1}, 0, 1, FLAGS },
+ { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(decimate);
+
+static void calc_diffs(const DecimateContext *dm, struct qitem *q,
+ const AVFrame *f1, const AVFrame *f2)
+{
+ int64_t maxdiff = -1;
+ int64_t *bdiffs = dm->bdiffs;
+ int plane, i, j;
+
+ memset(bdiffs, 0, dm->bdiffsize * sizeof(*bdiffs));
+
+ for (plane = 0; plane < (dm->chroma ? 3 : 1); plane++) {
+ int x, y, xl;
+ const int linesize1 = f1->linesize[plane];
+ const int linesize2 = f2->linesize[plane];
+ const uint8_t *f1p = f1->data[plane];
+ const uint8_t *f2p = f2->data[plane];
+ int width = plane ? f1->width >> dm->hsub : f1->width;
+ int height = plane ? f1->height >> dm->vsub : f1->height;
+ int hblockx = dm->blockx / 2;
+ int hblocky = dm->blocky / 2;
+
+ if (plane) {
+ hblockx >>= dm->hsub;
+ hblocky >>= dm->vsub;
+ }
+
+ for (y = 0; y < height; y++) {
+ int ydest = y / hblocky;
+ int xdest = 0;
+
+#define CALC_DIFF(nbits) do { \
+ for (x = 0; x < width; x += hblockx) { \
+ int64_t acc = 0; \
+ int m = FFMIN(width, x + hblockx); \
+ for (xl = x; xl < m; xl++) \
+ acc += abs(((const uint##nbits##_t *)f1p)[xl] - \
+ ((const uint##nbits##_t *)f2p)[xl]); \
+ bdiffs[ydest * dm->nxblocks + xdest] += acc; \
+ xdest++; \
+ } \
+} while (0)
+ if (dm->depth == 8) CALC_DIFF(8);
+ else CALC_DIFF(16);
+
+ f1p += linesize1;
+ f2p += linesize2;
+ }
+ }
+
+ for (i = 0; i < dm->nyblocks - 1; i++) {
+ for (j = 0; j < dm->nxblocks - 1; j++) {
+ int64_t tmp = bdiffs[ i * dm->nxblocks + j ]
+ + bdiffs[ i * dm->nxblocks + j + 1]
+ + bdiffs[(i + 1) * dm->nxblocks + j ]
+ + bdiffs[(i + 1) * dm->nxblocks + j + 1];
+ if (tmp > maxdiff)
+ maxdiff = tmp;
+ }
+ }
+
+ q->totdiff = 0;
+ for (i = 0; i < dm->bdiffsize; i++)
+ q->totdiff += bdiffs[i];
+ q->maxbdiff = maxdiff;
+}
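The metric computed by calc_diffs() above can be modelled in Python (a simplified sketch with hypothetical names): pixel differences are accumulated into half-block-sized bins, and the maximum over 2x2 groups of bins approximates the worst overlapping full-size block.

```python
def calc_block_diffs(f1, f2, hblock):
    """Return (maxbdiff, totdiff) between two frames given as 2D lists.

    hblock is half the block size; 2x2 groups of half-block bins stand in
    for the overlapping full-size blocks of the C implementation.
    """
    h, w = len(f1), len(f1[0])
    nx = (w + hblock - 1) // hblock
    ny = (h + hblock - 1) // hblock
    bins = [[0] * nx for _ in range(ny)]
    for y in range(h):
        for x in range(w):
            bins[y // hblock][x // hblock] += abs(f1[y][x] - f2[y][x])
    maxbdiff = max(bins[i][j] + bins[i][j + 1] + bins[i + 1][j] + bins[i + 1][j + 1]
                   for i in range(ny - 1) for j in range(nx - 1))
    totdiff = sum(map(sum, bins))
    return maxbdiff, totdiff

f1 = [[0] * 4 for _ in range(4)]
f2 = [[0] * 4 for _ in range(4)]
f2[0][0] = 5                         # a single differing pixel
print(calc_block_diffs(f1, f2, 2))   # -> (5, 5)
```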
+
+static int filter_frame(AVFilterLink *inlink, AVFrame *in)
+{
+ int scpos = -1, duppos = -1;
+ int drop = INT_MIN, i, lowest = 0, ret;
+ AVFilterContext *ctx = inlink->dst;
+ AVFilterLink *outlink = ctx->outputs[0];
+ DecimateContext *dm = ctx->priv;
+ AVFrame *prv;
+
+ /* update frames queue(s) */
+ if (FF_INLINK_IDX(inlink) == INPUT_MAIN) {
+ dm->queue[dm->fid].frame = in;
+ dm->got_frame[INPUT_MAIN] = 1;
+ } else {
+ dm->clean_src[dm->fid] = in;
+ dm->got_frame[INPUT_CLEANSRC] = 1;
+ }
+ if (!dm->got_frame[INPUT_MAIN] || (dm->ppsrc && !dm->got_frame[INPUT_CLEANSRC]))
+ return 0;
+ dm->got_frame[INPUT_MAIN] = dm->got_frame[INPUT_CLEANSRC] = 0;
+
+ if (in) {
+ /* update frame metrics */
+ prv = dm->fid ? dm->queue[dm->fid - 1].frame : dm->last;
+ if (!prv)
+ prv = in;
+ calc_diffs(dm, &dm->queue[dm->fid], prv, in);
+ if (++dm->fid != dm->cycle)
+ return 0;
+ av_frame_free(&dm->last);
+ dm->last = av_frame_clone(in);
+ dm->fid = 0;
+
+ /* we have a complete cycle, select the frame to drop */
+ lowest = 0;
+ for (i = 0; i < dm->cycle; i++) {
+ if (dm->queue[i].totdiff > dm->scthresh)
+ scpos = i;
+ if (dm->queue[i].maxbdiff < dm->queue[lowest].maxbdiff)
+ lowest = i;
+ }
+ if (dm->queue[lowest].maxbdiff < dm->dupthresh)
+ duppos = lowest;
+ drop = scpos >= 0 && duppos < 0 ? scpos : lowest;
+ }
+
+ /* metrics debug */
+ if (av_log_get_level() >= AV_LOG_DEBUG) {
+ av_log(ctx, AV_LOG_DEBUG, "1/%d frame drop:\n", dm->cycle);
+ for (i = 0; i < dm->cycle && dm->queue[i].frame; i++) {
+ av_log(ctx, AV_LOG_DEBUG," #%d: totdiff=%08"PRIx64" maxbdiff=%08"PRIx64"%s%s%s%s\n",
+ i + 1, dm->queue[i].totdiff, dm->queue[i].maxbdiff,
+ i == scpos ? " sc" : "",
+ i == duppos ? " dup" : "",
+ i == lowest ? " lowest" : "",
+ i == drop ? " [DROP]" : "");
+ }
+ }
+
+ /* push all frames except the drop */
+ ret = 0;
+ for (i = 0; i < dm->cycle && dm->queue[i].frame; i++) {
+ if (i == drop) {
+ if (dm->ppsrc)
+ av_frame_free(&dm->clean_src[i]);
+ av_frame_free(&dm->queue[i].frame);
+ } else {
+ AVFrame *frame = dm->queue[i].frame;
+ if (dm->ppsrc) {
+ av_frame_free(&frame);
+ frame = dm->clean_src[i];
+ }
+ frame->pts = dm->frame_count++ * dm->ts_unit;
+ ret = ff_filter_frame(outlink, frame);
+ if (ret < 0)
+ break;
+ }
+ }
+
+ return ret;
+}
+
+static int config_input(AVFilterLink *inlink)
+{
+ int max_value;
+ AVFilterContext *ctx = inlink->dst;
+ DecimateContext *dm = ctx->priv;
+ const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(inlink->format);
+ const int w = inlink->w;
+ const int h = inlink->h;
+
+ dm->hsub = pix_desc->log2_chroma_w;
+ dm->vsub = pix_desc->log2_chroma_h;
+ dm->depth = pix_desc->comp[0].depth_minus1 + 1;
+ max_value = (1 << dm->depth) - 1;
+ dm->scthresh = (int64_t)(((int64_t)max_value * w * h * dm->scthresh_flt) / 100);
+ dm->dupthresh = (int64_t)(((int64_t)max_value * dm->blockx * dm->blocky * dm->dupthresh_flt) / 100);
+ dm->nxblocks = (w + dm->blockx/2 - 1) / (dm->blockx/2);
+ dm->nyblocks = (h + dm->blocky/2 - 1) / (dm->blocky/2);
+ dm->bdiffsize = dm->nxblocks * dm->nyblocks;
+ dm->bdiffs = av_malloc(dm->bdiffsize * sizeof(*dm->bdiffs));
+ dm->queue = av_calloc(dm->cycle, sizeof(*dm->queue));
+
+ if (!dm->bdiffs || !dm->queue)
+ return AVERROR(ENOMEM);
+
+ if (dm->ppsrc) {
+ dm->clean_src = av_calloc(dm->cycle, sizeof(*dm->clean_src));
+ if (!dm->clean_src)
+ return AVERROR(ENOMEM);
+ }
+
+ return 0;
+}
+
+static av_cold int decimate_init(AVFilterContext *ctx)
+{
+ const DecimateContext *dm = ctx->priv;
+ AVFilterPad pad = {
+ .name = av_strdup("main"),
+ .type = AVMEDIA_TYPE_VIDEO,
+ .filter_frame = filter_frame,
+ .config_props = config_input,
+ };
+
+ if (!pad.name)
+ return AVERROR(ENOMEM);
+ ff_insert_inpad(ctx, INPUT_MAIN, &pad);
+
+ if (dm->ppsrc) {
+ pad.name = av_strdup("clean_src");
+ pad.config_props = NULL;
+ if (!pad.name)
+ return AVERROR(ENOMEM);
+ ff_insert_inpad(ctx, INPUT_CLEANSRC, &pad);
+ }
+
+ if ((dm->blockx & (dm->blockx - 1)) ||
+ (dm->blocky & (dm->blocky - 1))) {
+ av_log(ctx, AV_LOG_ERROR, "blockx and blocky settings must be power of two\n");
+ return AVERROR(EINVAL);
+ }
+
+ return 0;
+}
+
+static av_cold void decimate_uninit(AVFilterContext *ctx)
+{
+ int i;
+ DecimateContext *dm = ctx->priv;
+
+ av_frame_free(&dm->last);
+ av_freep(&dm->bdiffs);
+ av_freep(&dm->queue);
+ av_freep(&dm->clean_src);
+ for (i = 0; i < ctx->nb_inputs; i++)
+ av_freep(&ctx->input_pads[i].name);
+}
+
+static int request_inlink(AVFilterContext *ctx, int lid)
+{
+ int ret = 0;
+ DecimateContext *dm = ctx->priv;
+
+ if (!dm->got_frame[lid]) {
+ AVFilterLink *inlink = ctx->inputs[lid];
+ ret = ff_request_frame(inlink);
+ if (ret == AVERROR_EOF) { // flushing
+ dm->eof |= 1 << lid;
+ ret = filter_frame(inlink, NULL);
+ }
+ }
+ return ret;
+}
+
+static int request_frame(AVFilterLink *outlink)
+{
+ int ret;
+ AVFilterContext *ctx = outlink->src;
+ DecimateContext *dm = ctx->priv;
+ const uint32_t eof_mask = 1<<INPUT_MAIN | dm->ppsrc<<INPUT_CLEANSRC;
+
+ if ((dm->eof & eof_mask) == eof_mask) // flush done?
+ return AVERROR_EOF;
+ if ((ret = request_inlink(ctx, INPUT_MAIN)) < 0)
+ return ret;
+ if (dm->ppsrc && (ret = request_inlink(ctx, INPUT_CLEANSRC)) < 0)
+ return ret;
+ return 0;
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ static const enum AVPixelFormat pix_fmts[] = {
+#define PF_NOALPHA(suf) AV_PIX_FMT_YUV420##suf, AV_PIX_FMT_YUV422##suf, AV_PIX_FMT_YUV444##suf
+#define PF_ALPHA(suf) AV_PIX_FMT_YUVA420##suf, AV_PIX_FMT_YUVA422##suf, AV_PIX_FMT_YUVA444##suf
+#define PF(suf) PF_NOALPHA(suf), PF_ALPHA(suf)
+ PF(P), PF(P9), PF(P10), PF_NOALPHA(P12), PF_NOALPHA(P14), PF(P16),
+ AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P,
+ AV_PIX_FMT_GRAY8,
+ AV_PIX_FMT_NONE
+ };
+ ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
+ return 0;
+}
+
+static int config_output(AVFilterLink *outlink)
+{
+ AVFilterContext *ctx = outlink->src;
+ DecimateContext *dm = ctx->priv;
+ const AVFilterLink *inlink =
+ ctx->inputs[dm->ppsrc ? INPUT_CLEANSRC : INPUT_MAIN];
+ AVRational fps = inlink->frame_rate;
+
+ if (!fps.num || !fps.den) {
+ av_log(ctx, AV_LOG_ERROR, "The input needs a constant frame rate; "
+ "current rate of %d/%d is invalid\n", fps.num, fps.den);
+ return AVERROR(EINVAL);
+ }
+ fps = av_mul_q(fps, (AVRational){dm->cycle - 1, dm->cycle});
+ av_log(ctx, AV_LOG_VERBOSE, "FPS: %d/%d -> %d/%d\n",
+ inlink->frame_rate.num, inlink->frame_rate.den, fps.num, fps.den);
+ outlink->flags |= FF_LINK_FLAG_REQUEST_LOOP;
+ outlink->time_base = inlink->time_base;
+ outlink->frame_rate = fps;
+ outlink->sample_aspect_ratio = inlink->sample_aspect_ratio;
+ outlink->w = inlink->w;
+ outlink->h = inlink->h;
+ dm->ts_unit = av_q2d(av_inv_q(av_mul_q(fps, outlink->time_base)));
+ return 0;
+}
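The frame-rate adjustment in config_output() follows from keeping (cycle - 1) of every `cycle` frames; a quick check with Python's fractions module (illustrative, `decimated_fps` is a hypothetical name):

```python
from fractions import Fraction

def decimated_fps(fps, cycle):
    # output rate is scaled by (cycle - 1) / cycle, one frame dropped per cycle
    return fps * Fraction(cycle - 1, cycle)

ntsc = Fraction(30000, 1001)        # telecined ~29.97 fps
print(decimated_fps(ntsc, 5))       # -> Fraction(24000, 1001), ~23.976 fps
```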
+
+static const AVFilterPad decimate_outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .request_frame = request_frame,
+ .config_props = config_output,
+ },
+ { NULL }
+};
+
+AVFilter avfilter_vf_decimate = {
+ .name = "decimate",
+ .description = NULL_IF_CONFIG_SMALL("Decimate frames (post field matching filter)."),
+ .init = decimate_init,
+ .uninit = decimate_uninit,
+ .priv_size = sizeof(DecimateContext),
+ .query_formats = query_formats,
+ .outputs = decimate_outputs,
+ .priv_class = &decimate_class,
+ .flags = AVFILTER_FLAG_DYNAMIC_INPUTS,
+};
diff --git a/libavfilter/vf_fieldmatch.c b/libavfilter/vf_fieldmatch.c
new file mode 100644
index 0000000..507b5d3
--- /dev/null
+++ b/libavfilter/vf_fieldmatch.c
@@ -0,0 +1,986 @@
+/*
+ * Copyright (c) 2012 Fredrik Mellbin
+ * Copyright (c) 2013 Clément Bœsch
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * Fieldmatching filter, ported from VFM filter (VapourSynth) by Clément.
+ * Fredrik Mellbin is the author of the VIVTC/VFM filter, which is itself a
+ * light clone of the TIVTC/TFM (AviSynth) filter written by Kevin Stone
+ * (tritical), the original author.
+ *
+ * @see http://bengal.missouri.edu/~kes25c/
+ * @see http://www.vapoursynth.com/about/
+ */
+
+#include <inttypes.h>
+
+#include "libavutil/avassert.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+#include "libavutil/timestamp.h"
+#include "avfilter.h"
+#include "internal.h"
+
+#define INPUT_MAIN 0
+#define INPUT_CLEANSRC 1
+
+enum fieldmatch_parity {
+ FM_PARITY_AUTO = -1,
+ FM_PARITY_BOTTOM = 0,
+ FM_PARITY_TOP = 1,
+};
+
+enum matching_mode {
+ MODE_PC,
+ MODE_PC_N,
+ MODE_PC_U,
+ MODE_PC_N_UB,
+ MODE_PCN,
+ MODE_PCN_UB,
+ NB_MODE
+};
+
+enum comb_matching_mode {
+ COMBMATCH_NONE,
+ COMBMATCH_SC,
+ COMBMATCH_FULL,
+ NB_COMBMATCH
+};
+
+enum comb_dbg {
+ COMBDBG_NONE,
+ COMBDBG_PCN,
+ COMBDBG_PCNUB,
+ NB_COMBDBG
+};
+
+typedef struct {
+ const AVClass *class;
+
+ AVFrame *prv, *src, *nxt; ///< main sliding window of 3 frames
+ AVFrame *prv2, *src2, *nxt2; ///< sliding window of the optional second stream
+ int64_t frame_count; ///< output frame counter
+ int got_frame[2]; ///< frame request flag for each input stream
+ int hsub, vsub; ///< chroma subsampling values
+ uint32_t eof; ///< bitmask for end of stream
+ int64_t lastscdiff;
+ int64_t lastn;
+
+ /* options */
+ int order;
+ int ppsrc;
+ enum matching_mode mode;
+ int field;
+ int mchroma;
+ int y0, y1;
+ int64_t scthresh;
+ double scthresh_flt;
+ enum comb_matching_mode combmatch;
+ int combdbg;
+ int cthresh;
+ int chroma;
+ int blockx, blocky;
+ int combpel;
+
+ /* misc buffers */
+ uint8_t *map_data[4];
+ int map_linesize[4];
+ uint8_t *cmask_data[4];
+ int cmask_linesize[4];
+ int *c_array;
+ int tpitchy, tpitchuv;
+ uint8_t *tbuffer;
+} FieldMatchContext;
+
+#define OFFSET(x) offsetof(FieldMatchContext, x)
+#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
+
+static const AVOption fieldmatch_options[] = {
+ { "order", "specify the assumed field order", OFFSET(order), AV_OPT_TYPE_INT, {.i64=FM_PARITY_AUTO}, -1, 1, FLAGS, "order" },
+ { "auto", "auto detect parity", 0, AV_OPT_TYPE_CONST, {.i64=FM_PARITY_AUTO}, INT_MIN, INT_MAX, FLAGS, "order" },
+ { "bff", "assume bottom field first", 0, AV_OPT_TYPE_CONST, {.i64=FM_PARITY_BOTTOM}, INT_MIN, INT_MAX, FLAGS, "order" },
+ { "tff", "assume top field first", 0, AV_OPT_TYPE_CONST, {.i64=FM_PARITY_TOP}, INT_MIN, INT_MAX, FLAGS, "order" },
+ { "mode", "set the matching mode or strategy to use", OFFSET(mode), AV_OPT_TYPE_INT, {.i64=MODE_PC_N}, MODE_PC, NB_MODE-1, FLAGS, "mode" },
+ { "pc", "2-way match (p/c)", 0, AV_OPT_TYPE_CONST, {.i64=MODE_PC}, INT_MIN, INT_MAX, FLAGS, "mode" },
+ { "pc_n", "2-way match + 3rd match on combed (p/c + u)", 0, AV_OPT_TYPE_CONST, {.i64=MODE_PC_N}, INT_MIN, INT_MAX, FLAGS, "mode" },
+ { "pc_u", "2-way match + 3rd match (same order) on combed (p/c + u)", 0, AV_OPT_TYPE_CONST, {.i64=MODE_PC_U}, INT_MIN, INT_MAX, FLAGS, "mode" },
+ { "pc_n_ub", "2-way match + 3rd match on combed + 4th/5th matches if still combed (p/c + u + u/b)", 0, AV_OPT_TYPE_CONST, {.i64=MODE_PC_N_UB}, INT_MIN, INT_MAX, FLAGS, "mode" },
+ { "pcn", "3-way match (p/c/n)", 0, AV_OPT_TYPE_CONST, {.i64=MODE_PCN}, INT_MIN, INT_MAX, FLAGS, "mode" },
+ { "pcn_ub", "3-way match + 4th/5th matches on combed (p/c/n + u/b)", 0, AV_OPT_TYPE_CONST, {.i64=MODE_PCN_UB}, INT_MIN, INT_MAX, FLAGS, "mode" },
+ { "ppsrc", "mark main input as a pre-processed input and activate clean source input stream", OFFSET(ppsrc), AV_OPT_TYPE_INT, {.i64=0}, 0, 1, FLAGS },
+ { "field", "set the field to match from", OFFSET(field), AV_OPT_TYPE_INT, {.i64=FM_PARITY_AUTO}, -1, 1, FLAGS, "field" },
+ { "auto", "automatic (same value as 'order')", 0, AV_OPT_TYPE_CONST, {.i64=FM_PARITY_AUTO}, INT_MIN, INT_MAX, FLAGS, "field" },
+ { "bottom", "bottom field", 0, AV_OPT_TYPE_CONST, {.i64=FM_PARITY_BOTTOM}, INT_MIN, INT_MAX, FLAGS, "field" },
+ { "top", "top field", 0, AV_OPT_TYPE_CONST, {.i64=FM_PARITY_TOP}, INT_MIN, INT_MAX, FLAGS, "field" },
+ { "mchroma", "set whether or not chroma is included during the match comparisons", OFFSET(mchroma), AV_OPT_TYPE_INT, {.i64=1}, 0, 1, FLAGS },
+ { "y0", "define an exclusion band which excludes the lines between y0 and y1 from the field matching decision", OFFSET(y0), AV_OPT_TYPE_INT, {.i64=0}, 0, INT_MAX, FLAGS },
+ { "y1", "define an exclusion band which excludes the lines between y0 and y1 from the field matching decision", OFFSET(y1), AV_OPT_TYPE_INT, {.i64=0}, 0, INT_MAX, FLAGS },
+ { "scthresh", "set scene change detection threshold", OFFSET(scthresh_flt), AV_OPT_TYPE_DOUBLE, {.dbl=12}, 0, 100, FLAGS },
+ { "combmatch", "set combmatching mode", OFFSET(combmatch), AV_OPT_TYPE_INT, {.i64=COMBMATCH_SC}, COMBMATCH_NONE, NB_COMBMATCH-1, FLAGS, "combmatching" },
+ { "none", "disable combmatching", 0, AV_OPT_TYPE_CONST, {.i64=COMBMATCH_NONE}, INT_MIN, INT_MAX, FLAGS, "combmatching" },
+ { "sc", "enable combmatching only on scene change", 0, AV_OPT_TYPE_CONST, {.i64=COMBMATCH_SC}, INT_MIN, INT_MAX, FLAGS, "combmatching" },
+ { "full", "enable combmatching all the time", 0, AV_OPT_TYPE_CONST, {.i64=COMBMATCH_FULL}, INT_MIN, INT_MAX, FLAGS, "combmatching" },
+ { "combdbg", "enable comb debug", OFFSET(combdbg), AV_OPT_TYPE_INT, {.i64=COMBDBG_NONE}, COMBDBG_NONE, NB_COMBDBG-1, FLAGS, "dbglvl" },
+ { "none", "no forced calculation", 0, AV_OPT_TYPE_CONST, {.i64=COMBDBG_NONE}, INT_MIN, INT_MAX, FLAGS, "dbglvl" },
+ { "pcn", "calculate p/c/n", 0, AV_OPT_TYPE_CONST, {.i64=COMBDBG_PCN}, INT_MIN, INT_MAX, FLAGS, "dbglvl" },
+ { "pcnub", "calculate p/c/n/u/b", 0, AV_OPT_TYPE_CONST, {.i64=COMBDBG_PCNUB}, INT_MIN, INT_MAX, FLAGS, "dbglvl" },
+ { "cthresh", "set the area combing threshold used for combed frame detection", OFFSET(cthresh), AV_OPT_TYPE_INT, {.i64= 9}, -1, 0xff, FLAGS },
+ { "chroma", "set whether or not chroma is considered in the combed frame decision", OFFSET(chroma), AV_OPT_TYPE_INT, {.i64= 0}, 0, 1, FLAGS },
+ { "blockx", "set the x-axis size of the window used during combed frame detection", OFFSET(blockx), AV_OPT_TYPE_INT, {.i64=16}, 4, 1<<9, FLAGS },
+ { "blocky", "set the y-axis size of the window used during combed frame detection", OFFSET(blocky), AV_OPT_TYPE_INT, {.i64=16}, 4, 1<<9, FLAGS },
+ { "combpel", "set the number of combed pixels inside any of the blocky by blockx size blocks on the frame for the frame to be detected as combed", OFFSET(combpel), AV_OPT_TYPE_INT, {.i64=80}, 0, INT_MAX, FLAGS },
+ { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(fieldmatch);
+
+static int get_width(const FieldMatchContext *fm, const AVFrame *f, int plane)
+{
+ return plane ? f->width >> fm->hsub : f->width;
+}
+
+static int get_height(const FieldMatchContext *fm, const AVFrame *f, int plane)
+{
+ return plane ? f->height >> fm->vsub : f->height;
+}
+
+static int64_t luma_abs_diff(const AVFrame *f1, const AVFrame *f2)
+{
+ int x, y;
+ const uint8_t *srcp1 = f1->data[0];
+ const uint8_t *srcp2 = f2->data[0];
+ const int src1_linesize = f1->linesize[0];
+ const int src2_linesize = f2->linesize[0];
+ const int width = f1->width;
+ const int height = f1->height;
+ int64_t acc = 0;
+
+ for (y = 0; y < height; y++) {
+ for (x = 0; x < width; x++)
+ acc += abs(srcp1[x] - srcp2[x]);
+ srcp1 += src1_linesize;
+ srcp2 += src2_linesize;
+ }
+ return acc;
+}
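The accumulator above is the raw metric behind scene-change detection: `config_input()` later rescales the user-facing `scthresh` percentage into the same units (a fraction of the maximal possible sum, `w * h * 255`). A minimal standalone sketch, assuming `linesize == width` and using the hypothetical helper names `luma_sad` and `scale_scthresh` (not part of the patch):

```c
#include <stdint.h>
#include <stdlib.h>

/* Frame-wide sum of absolute luma differences, as in luma_abs_diff()
 * above; simplified sketch assuming linesize == width. */
int64_t luma_sad(const uint8_t *a, const uint8_t *b, int w, int h)
{
    int64_t acc = 0;
    for (int i = 0; i < w * h; i++)
        acc += abs(a[i] - b[i]);
    return acc;
}

/* Threshold scaling as done in config_input(): the scthresh option is
 * a percentage of the maximal possible SAD, i.e. w * h * 255. */
int64_t scale_scthresh(int w, int h, double scthresh_pct)
{
    return (int64_t)((w * h * 255.0 * scthresh_pct) / 100.0);
}
```

With the default `scthresh` of 12, a 2x2 "frame" gives a threshold of `(int64_t)(4 * 255 * 0.12) = 122`, so a single pixel jumping by 200 already registers as a scene change at that toy size.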
+
+static void fill_buf(uint8_t *data, int w, int h, int linesize, uint8_t v)
+{
+ int y;
+
+ for (y = 0; y < h; y++) {
+ memset(data, v, w);
+ data += linesize;
+ }
+}
+
+static int calc_combed_score(const FieldMatchContext *fm, const AVFrame *src)
+{
+ int x, y, plane, max_v = 0;
+ const int cthresh = fm->cthresh;
+ const int cthresh6 = cthresh * 6;
+
+ for (plane = 0; plane < (fm->chroma ? 3 : 1); plane++) {
+ const uint8_t *srcp = src->data[plane];
+ const int src_linesize = src->linesize[plane];
+ const int width = get_width (fm, src, plane);
+ const int height = get_height(fm, src, plane);
+ uint8_t *cmkp = fm->cmask_data[plane];
+ const int cmk_linesize = fm->cmask_linesize[plane];
+
+ if (cthresh < 0) {
+ fill_buf(cmkp, width, height, cmk_linesize, 0xff);
+ continue;
+ }
+ fill_buf(cmkp, width, height, cmk_linesize, 0);
+
+ /* [1 -3 4 -3 1] vertical filter */
+#define FILTER(xm2, xm1, xp1, xp2) \
+ abs( 4 * srcp[x] \
+ -3 * (srcp[x + (xm1)*src_linesize] + srcp[x + (xp1)*src_linesize]) \
+ + (srcp[x + (xm2)*src_linesize] + srcp[x + (xp2)*src_linesize])) > cthresh6
+
+ /* first line */
+ for (x = 0; x < width; x++) {
+ const int s1 = abs(srcp[x] - srcp[x + src_linesize]);
+ if (s1 > cthresh && FILTER(2, 1, 1, 2))
+ cmkp[x] = 0xff;
+ }
+ srcp += src_linesize;
+ cmkp += cmk_linesize;
+
+ /* second line */
+ for (x = 0; x < width; x++) {
+ const int s1 = abs(srcp[x] - srcp[x - src_linesize]);
+ const int s2 = abs(srcp[x] - srcp[x + src_linesize]);
+ if (s1 > cthresh && s2 > cthresh && FILTER(2, -1, 1, 2))
+ cmkp[x] = 0xff;
+ }
+ srcp += src_linesize;
+ cmkp += cmk_linesize;
+
+ /* all lines minus first two and last two */
+ for (y = 2; y < height-2; y++) {
+ for (x = 0; x < width; x++) {
+ const int s1 = abs(srcp[x] - srcp[x - src_linesize]);
+ const int s2 = abs(srcp[x] - srcp[x + src_linesize]);
+ if (s1 > cthresh && s2 > cthresh && FILTER(-2, -1, 1, 2))
+ cmkp[x] = 0xff;
+ }
+ srcp += src_linesize;
+ cmkp += cmk_linesize;
+ }
+
+ /* before-last line */
+ for (x = 0; x < width; x++) {
+ const int s1 = abs(srcp[x] - srcp[x - src_linesize]);
+ const int s2 = abs(srcp[x] - srcp[x + src_linesize]);
+ if (s1 > cthresh && s2 > cthresh && FILTER(-2, -1, 1, -2))
+ cmkp[x] = 0xff;
+ }
+ srcp += src_linesize;
+ cmkp += cmk_linesize;
+
+ /* last line */
+ for (x = 0; x < width; x++) {
+ const int s1 = abs(srcp[x] - srcp[x - src_linesize]);
+ if (s1 > cthresh && FILTER(-2, -1, -1, -2))
+ cmkp[x] = 0xff;
+ }
+ }
+
+ if (fm->chroma) {
+ uint8_t *cmkp = fm->cmask_data[0];
+ uint8_t *cmkpU = fm->cmask_data[1];
+ uint8_t *cmkpV = fm->cmask_data[2];
+ const int width = src->width >> fm->hsub;
+ const int height = src->height >> fm->vsub;
+ const int cmk_linesize = fm->cmask_linesize[0] << 1;
+ const int cmk_linesizeUV = fm->cmask_linesize[2];
+ uint8_t *cmkpp = cmkp - (cmk_linesize>>1);
+ uint8_t *cmkpn = cmkp + (cmk_linesize>>1);
+ uint8_t *cmkpnn = cmkp + cmk_linesize;
+ for (y = 1; y < height - 1; y++) {
+ cmkpp += cmk_linesize;
+ cmkp += cmk_linesize;
+ cmkpn += cmk_linesize;
+ cmkpnn += cmk_linesize;
+ cmkpV += cmk_linesizeUV;
+ cmkpU += cmk_linesizeUV;
+ for (x = 1; x < width - 1; x++) {
+#define HAS_FF_AROUND(p, lz) (p[x-1 - lz] == 0xff || p[x - lz] == 0xff || p[x+1 - lz] == 0xff || \
+ p[x-1 ] == 0xff || p[x+1 ] == 0xff || \
+ p[x-1 + lz] == 0xff || p[x + lz] == 0xff || p[x+1 + lz] == 0xff)
+ if ((cmkpV[x] == 0xff && HAS_FF_AROUND(cmkpV, cmk_linesizeUV)) ||
+ (cmkpU[x] == 0xff && HAS_FF_AROUND(cmkpU, cmk_linesizeUV))) {
+ ((uint16_t*)cmkp)[x] = 0xffff;
+ ((uint16_t*)cmkpn)[x] = 0xffff;
+ if (y&1) ((uint16_t*)cmkpp)[x] = 0xffff;
+ else ((uint16_t*)cmkpnn)[x] = 0xffff;
+ }
+ }
+ }
+ }
+
+ {
+ const int blockx = fm->blockx;
+ const int blocky = fm->blocky;
+ const int xhalf = blockx/2;
+ const int yhalf = blocky/2;
+ const int cmk_linesize = fm->cmask_linesize[0];
+ const uint8_t *cmkp = fm->cmask_data[0] + cmk_linesize;
+ const int width = src->width;
+ const int height = src->height;
+ const int xblocks = ((width+xhalf)/blockx) + 1;
+ const int xblocks4 = xblocks<<2;
+ const int yblocks = ((height+yhalf)/blocky) + 1;
+ int *c_array = fm->c_array;
+ const int arraysize = (xblocks*yblocks)<<2;
+ int heighta = (height/(blocky/2))*(blocky/2);
+ const int widtha = (width /(blockx/2))*(blockx/2);
+ if (heighta == height)
+ heighta = height - yhalf;
+ memset(c_array, 0, arraysize * sizeof(*c_array));
+
+#define C_ARRAY_ADD(v) do { \
+ const int box1 = (x / blockx) * 4; \
+ const int box2 = ((x + xhalf) / blockx) * 4; \
+ c_array[temp1 + box1 ] += v; \
+ c_array[temp1 + box2 + 1] += v; \
+ c_array[temp2 + box1 + 2] += v; \
+ c_array[temp2 + box2 + 3] += v; \
+} while (0)
+
+#define VERTICAL_HALF(y_start, y_end) do { \
+ for (y = y_start; y < y_end; y++) { \
+ const int temp1 = (y / blocky) * xblocks4; \
+ const int temp2 = ((y + yhalf) / blocky) * xblocks4; \
+ for (x = 0; x < width; x++) \
+ if (cmkp[x - cmk_linesize] == 0xff && \
+ cmkp[x ] == 0xff && \
+ cmkp[x + cmk_linesize] == 0xff) \
+ C_ARRAY_ADD(1); \
+ cmkp += cmk_linesize; \
+ } \
+} while (0)
+
+ VERTICAL_HALF(1, yhalf);
+
+ for (y = yhalf; y < heighta; y += yhalf) {
+ const int temp1 = (y / blocky) * xblocks4;
+ const int temp2 = ((y + yhalf) / blocky) * xblocks4;
+
+ for (x = 0; x < widtha; x += xhalf) {
+ const uint8_t *cmkp_tmp = cmkp + x;
+ int u, v, sum = 0;
+ for (u = 0; u < yhalf; u++) {
+ for (v = 0; v < xhalf; v++)
+ if (cmkp_tmp[v - cmk_linesize] == 0xff &&
+ cmkp_tmp[v ] == 0xff &&
+ cmkp_tmp[v + cmk_linesize] == 0xff)
+ sum++;
+ cmkp_tmp += cmk_linesize;
+ }
+ if (sum)
+ C_ARRAY_ADD(sum);
+ }
+
+ for (x = widtha; x < width; x++) {
+ const uint8_t *cmkp_tmp = cmkp + x;
+ int u, sum = 0;
+ for (u = 0; u < yhalf; u++) {
+ if (cmkp_tmp[-cmk_linesize] == 0xff &&
+ cmkp_tmp[ 0] == 0xff &&
+ cmkp_tmp[ cmk_linesize] == 0xff)
+ sum++;
+ cmkp_tmp += cmk_linesize;
+ }
+ if (sum)
+ C_ARRAY_ADD(sum);
+ }
+
+ cmkp += cmk_linesize * yhalf;
+ }
+
+ VERTICAL_HALF(heighta, height - 1);
+
+ for (x = 0; x < arraysize; x++)
+ if (c_array[x] > max_v)
+ max_v = c_array[x];
+ }
+ return max_v;
+}
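The per-pixel combing test driving the mask above reduces to two neighbour comparisons plus the `[1 -3 4 -3 1]` vertical filter. A standalone sketch of that inner test on a single 5-pixel column, using the hypothetical helper name `is_combed_pixel` (an extraction for illustration, not a function in the patch):

```c
#include <stdlib.h>

/* Per-pixel combing test from calc_combed_score(): the centre pixel of
 * a 5-pixel vertical column p[0..4] is flagged when it differs from
 * both direct neighbours by more than cthresh AND the [1 -3 4 -3 1]
 * vertical filter response exceeds 6 * cthresh. */
int is_combed_pixel(const int p[5], int cthresh)
{
    const int s1 = abs(p[2] - p[1]);
    const int s2 = abs(p[2] - p[3]);
    const int f  = abs(4 * p[2] - 3 * (p[1] + p[3]) + (p[0] + p[4]));
    return s1 > cthresh && s2 > cthresh && f > cthresh * 6;
}
```

An alternating bright/dark column (the classic interlacing artifact) trips both conditions with the default `cthresh` of 9, while a smooth gradient fails the neighbour test outright.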
+
+// the secret is that tbuffer is an interlaced, offset subset of all the lines
+static void build_abs_diff_mask(const uint8_t *prvp, int prv_linesize,
+ const uint8_t *nxtp, int nxt_linesize,
+ uint8_t *tbuffer, int tbuf_linesize,
+ int width, int height)
+{
+ int y, x;
+
+ prvp -= prv_linesize;
+ nxtp -= nxt_linesize;
+ for (y = 0; y < height; y++) {
+ for (x = 0; x < width; x++)
+ tbuffer[x] = FFABS(prvp[x] - nxtp[x]);
+ prvp += prv_linesize;
+ nxtp += nxt_linesize;
+ tbuffer += tbuf_linesize;
+ }
+}
+
+/**
+ * Build a map marking, for each pixel, whether it differs a little or a lot
+ * between the previous and next fields
+ */
+static void build_diff_map(FieldMatchContext *fm,
+ const uint8_t *prvp, int prv_linesize,
+ const uint8_t *nxtp, int nxt_linesize,
+ uint8_t *dstp, int dst_linesize, int height,
+ int width, int plane)
+{
+ int x, y, u, diff, count;
+ int tpitch = plane ? fm->tpitchuv : fm->tpitchy;
+ const uint8_t *dp = fm->tbuffer + tpitch;
+
+ build_abs_diff_mask(prvp, prv_linesize, nxtp, nxt_linesize,
+ fm->tbuffer, tpitch, width, height>>1);
+
+ for (y = 2; y < height - 2; y += 2) {
+ for (x = 1; x < width - 1; x++) {
+ diff = dp[x];
+ if (diff > 3) {
+ for (count = 0, u = x-1; u < x+2 && count < 2; u++) {
+ count += dp[u-tpitch] > 3;
+ count += dp[u ] > 3;
+ count += dp[u+tpitch] > 3;
+ }
+ if (count > 1) {
+ dstp[x] = 1;
+ if (diff > 19) {
+ int upper = 0, lower = 0;
+ for (count = 0, u = x-1; u < x+2 && count < 6; u++) {
+ if (dp[u-tpitch] > 19) { count++; upper = 1; }
+ if (dp[u ] > 19) count++;
+ if (dp[u+tpitch] > 19) { count++; lower = 1; }
+ }
+ if (count > 3) {
+ if (upper && lower) {
+ dstp[x] |= 1<<1;
+ } else {
+ int upper2 = 0, lower2 = 0;
+ for (u = FFMAX(x-4,0); u < FFMIN(x+5,width); u++) {
+ if (y != 2 && dp[u-2*tpitch] > 19) upper2 = 1;
+ if ( dp[u- tpitch] > 19) upper = 1;
+ if ( dp[u+ tpitch] > 19) lower = 1;
+ if (y != height-4 && dp[u+2*tpitch] > 19) lower2 = 1;
+ }
+ if ((upper && (lower || upper2)) ||
+ (lower && (upper || lower2)))
+ dstp[x] |= 1<<1;
+ else if (count > 5)
+ dstp[x] |= 1<<2;
+ }
+ }
+ }
+ }
+ }
+ }
+ dp += tpitch;
+ dstp += dst_linesize;
+ }
+}
+
+enum { mP, mC, mN, mB, mU };
+
+static int get_field_base(int match, int field)
+{
+ return match < 3 ? 2 - field : 1 + field;
+}
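The mapping above encodes which line parity a match reads its replacement field from: p/c/n matches (enum values below 3) use one parity group, b/u matches the other, and the `field` option flips both. A trivial standalone restatement, under the hypothetical name `field_base`:

```c
/* Field-base selection as in get_field_base() above: matches p/c/n
 * (enum value < 3) take their lines starting at offset 2 - field,
 * while b/u matches start at offset 1 + field. */
int field_base(int match, int field)
{
    return match < 3 ? 2 - field : 1 + field;
}
```

So with `field = 1` (top), p/c/n start at line 1 and b/u at line 2; with `field = 0` the roles swap.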
+
+static AVFrame *select_frame(FieldMatchContext *fm, int match)
+{
+ if (match == mP || match == mB) return fm->prv;
+ else if (match == mN || match == mU) return fm->nxt;
+ else /* match == mC */ return fm->src;
+}
+
+static int compare_fields(FieldMatchContext *fm, int match1, int match2, int field)
+{
+ int plane, ret;
+ uint64_t accumPc = 0, accumPm = 0, accumPml = 0;
+ uint64_t accumNc = 0, accumNm = 0, accumNml = 0;
+ int norm1, norm2, mtn1, mtn2;
+ float c1, c2, mr;
+ const AVFrame *src = fm->src;
+
+ for (plane = 0; plane < (fm->mchroma ? 3 : 1); plane++) {
+ int x, y, temp1, temp2, fbase;
+ const AVFrame *prev, *next;
+ uint8_t *mapp = fm->map_data[plane];
+ int map_linesize = fm->map_linesize[plane];
+ const uint8_t *srcp = src->data[plane];
+ const int src_linesize = src->linesize[plane];
+ const int srcf_linesize = src_linesize << 1;
+ int prv_linesize, nxt_linesize;
+ int prvf_linesize, nxtf_linesize;
+ const int width = get_width (fm, src, plane);
+ const int height = get_height(fm, src, plane);
+ const int y0a = fm->y0 >> (plane != 0);
+ const int y1a = fm->y1 >> (plane != 0);
+ const int startx = (plane == 0 ? 8 : 4);
+ const int stopx = width - startx;
+ const uint8_t *srcpf, *srcf, *srcnf;
+ const uint8_t *prvpf, *prvnf, *nxtpf, *nxtnf;
+
+ fill_buf(mapp, width, height, map_linesize, 0);
+
+ /* match1 */
+ fbase = get_field_base(match1, field);
+ srcf = srcp + (fbase + 1) * src_linesize;
+ srcpf = srcf - srcf_linesize;
+ srcnf = srcf + srcf_linesize;
+ mapp = mapp + fbase * map_linesize;
+ prev = select_frame(fm, match1);
+ prv_linesize = prev->linesize[plane];
+ prvf_linesize = prv_linesize << 1;
+ prvpf = prev->data[plane] + fbase * prv_linesize; // previous frame, previous field
+ prvnf = prvpf + prvf_linesize; // previous frame, next field
+
+ /* match2 */
+ fbase = get_field_base(match2, field);
+ next = select_frame(fm, match2);
+ nxt_linesize = next->linesize[plane];
+ nxtf_linesize = nxt_linesize << 1;
+ nxtpf = next->data[plane] + fbase * nxt_linesize; // next frame, previous field
+ nxtnf = nxtpf + nxtf_linesize; // next frame, next field
+
+ map_linesize <<= 1;
+ if ((match1 >= 3 && field == 1) || (match1 < 3 && field != 1))
+ build_diff_map(fm, prvpf, prvf_linesize, nxtpf, nxtf_linesize,
+ mapp, map_linesize, height, width, plane);
+ else
+ build_diff_map(fm, prvnf, prvf_linesize, nxtnf, nxtf_linesize,
+ mapp + map_linesize, map_linesize, height, width, plane);
+
+ for (y = 2; y < height - 2; y += 2) {
+ if (y0a == y1a || y < y0a || y > y1a) {
+ for (x = startx; x < stopx; x++) {
+ if (mapp[x] > 0 || mapp[x + map_linesize] > 0) {
+ temp1 = srcpf[x] + (srcf[x] << 2) + srcnf[x]; // [1 4 1]
+
+ temp2 = abs(3 * (prvpf[x] + prvnf[x]) - temp1);
+ if (temp2 > 23 && ((mapp[x]&1) || (mapp[x + map_linesize]&1)))
+ accumPc += temp2;
+ if (temp2 > 42) {
+ if ((mapp[x]&2) || (mapp[x + map_linesize]&2))
+ accumPm += temp2;
+ if ((mapp[x]&4) || (mapp[x + map_linesize]&4))
+ accumPml += temp2;
+ }
+
+ temp2 = abs(3 * (nxtpf[x] + nxtnf[x]) - temp1);
+ if (temp2 > 23 && ((mapp[x]&1) || (mapp[x + map_linesize]&1)))
+ accumNc += temp2;
+ if (temp2 > 42) {
+ if ((mapp[x]&2) || (mapp[x + map_linesize]&2))
+ accumNm += temp2;
+ if ((mapp[x]&4) || (mapp[x + map_linesize]&4))
+ accumNml += temp2;
+ }
+ }
+ }
+ }
+ prvpf += prvf_linesize;
+ prvnf += prvf_linesize;
+ srcpf += srcf_linesize;
+ srcf += srcf_linesize;
+ srcnf += srcf_linesize;
+ nxtpf += nxtf_linesize;
+ nxtnf += nxtf_linesize;
+ mapp += map_linesize;
+ }
+ }
+
+ if (accumPm < 500 && accumNm < 500 && (accumPml >= 500 || accumNml >= 500) &&
+ FFMAX(accumPml,accumNml) > 3*FFMIN(accumPml,accumNml)) {
+ accumPm = accumPml;
+ accumNm = accumNml;
+ }
+
+ norm1 = (int)((accumPc / 6.0f) + 0.5f);
+ norm2 = (int)((accumNc / 6.0f) + 0.5f);
+ mtn1 = (int)((accumPm / 6.0f) + 0.5f);
+ mtn2 = (int)((accumNm / 6.0f) + 0.5f);
+ c1 = ((float)FFMAX(norm1,norm2)) / ((float)FFMAX(FFMIN(norm1,norm2),1));
+ c2 = ((float)FFMAX(mtn1, mtn2)) / ((float)FFMAX(FFMIN(mtn1, mtn2), 1));
+ mr = ((float)FFMAX(mtn1, mtn2)) / ((float)FFMAX(FFMAX(norm1,norm2),1));
+ if (((mtn1 >= 500 || mtn2 >= 500) && (mtn1*2 < mtn2*1 || mtn2*2 < mtn1*1)) ||
+ ((mtn1 >= 1000 || mtn2 >= 1000) && (mtn1*3 < mtn2*2 || mtn2*3 < mtn1*2)) ||
+ ((mtn1 >= 2000 || mtn2 >= 2000) && (mtn1*5 < mtn2*4 || mtn2*5 < mtn1*4)) ||
+ ((mtn1 >= 4000 || mtn2 >= 4000) && c2 > c1))
+ ret = mtn1 > mtn2 ? match2 : match1;
+ else if (mr > 0.005 && FFMAX(mtn1, mtn2) > 150 && (mtn1*2 < mtn2*1 || mtn2*2 < mtn1*1))
+ ret = mtn1 > mtn2 ? match2 : match1;
+ else
+ ret = norm1 > norm2 ? match2 : match1;
+ return ret;
+}
+
+static void copy_fields(const FieldMatchContext *fm, AVFrame *dst,
+ const AVFrame *src, int field)
+{
+ int plane;
+ for (plane = 0; plane < 4 && src->data[plane]; plane++)
+ av_image_copy_plane(dst->data[plane] + field*dst->linesize[plane], dst->linesize[plane] << 1,
+ src->data[plane] + field*src->linesize[plane], src->linesize[plane] << 1,
+ get_width(fm, src, plane), get_height(fm, src, plane) / 2);
+}
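Weaving a frame out of two sources comes down to the line interleave performed per plane above: overwrite every second line of the destination, starting at the chosen field parity, and leave the other parity alone. A minimal one-plane sketch, assuming `linesize == width` and the hypothetical helper name `weave_field`:

```c
#include <stdint.h>
#include <string.h>

/* One-plane field weave as done by copy_fields() above: copy every
 * second line of src into dst starting at line `field`; lines of the
 * opposite parity are left untouched. Sketch assumes linesize == w. */
void weave_field(uint8_t *dst, const uint8_t *src, int w, int h, int field)
{
    for (int y = field; y < h; y += 2)
        memcpy(dst + y * w, src + y * w, w);
}
```

`create_weave_frame()` simply calls this twice with opposite parities and two different source frames, which is how the mP/mN/mB/mU matches combine fields across neighbouring frames.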
+
+static AVFrame *create_weave_frame(AVFilterContext *ctx, int match, int field,
+ const AVFrame *prv, AVFrame *src, const AVFrame *nxt)
+{
+ AVFrame *dst;
+ FieldMatchContext *fm = ctx->priv;
+
+ if (match == mC) {
+ dst = av_frame_clone(src);
+ } else {
+ AVFilterLink *outlink = ctx->outputs[0];
+
+ dst = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+ if (!dst)
+ return NULL;
+ av_frame_copy_props(dst, src);
+
+ switch (match) {
+ case mP: copy_fields(fm, dst, src, 1-field); copy_fields(fm, dst, prv, field); break;
+ case mN: copy_fields(fm, dst, src, 1-field); copy_fields(fm, dst, nxt, field); break;
+ case mB: copy_fields(fm, dst, src, field); copy_fields(fm, dst, prv, 1-field); break;
+ case mU: copy_fields(fm, dst, src, field); copy_fields(fm, dst, nxt, 1-field); break;
+ default: av_assert0(0);
+ }
+ }
+ return dst;
+}
+
+static int checkmm(AVFilterContext *ctx, int *combs, int m1, int m2,
+ AVFrame **gen_frames, int field)
+{
+ const FieldMatchContext *fm = ctx->priv;
+
+#define LOAD_COMB(mid) do { \
+ if (combs[mid] < 0) { \
+ if (!gen_frames[mid]) \
+ gen_frames[mid] = create_weave_frame(ctx, mid, field, \
+ fm->prv, fm->src, fm->nxt); \
+ combs[mid] = calc_combed_score(fm, gen_frames[mid]); \
+ } \
+} while (0)
+
+ LOAD_COMB(m1);
+ LOAD_COMB(m2);
+
+ if ((combs[m2] * 3 < combs[m1] || (combs[m2] * 2 < combs[m1] && combs[m1] > fm->combpel)) &&
+ abs(combs[m2] - combs[m1]) >= 30 && combs[m2] < fm->combpel)
+ return m2;
+ else
+ return m1;
+}
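The decision at the end of `checkmm()` only promotes the alternative match when its combed score is decisively better. That rule, isolated into a pure function under the hypothetical name `pick_match` (combed scores passed in directly rather than computed lazily):

```c
#include <stdlib.h>

/* Match-promotion rule from checkmm() above: switch from m1 to m2 only
 * when m2's combed score is decisively lower (3x, or 2x while m1 is
 * already past combpel), the gap is significant (>= 30), and m2 itself
 * stays below the combpel limit. */
int pick_match(int comb1, int comb2, int m1, int m2, int combpel)
{
    if ((comb2 * 3 < comb1 || (comb2 * 2 < comb1 && comb1 > combpel)) &&
        abs(comb2 - comb1) >= 30 && comb2 < combpel)
        return m2;
    return m1;
}
```

With the default `combpel` of 80, a score drop from 120 to 30 promotes the alternative, while a marginal 100-vs-90 difference keeps the original match.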
+
+static const int fxo0m[] = { mP, mC, mN, mB, mU };
+static const int fxo1m[] = { mN, mC, mP, mU, mB };
+
+static int filter_frame(AVFilterLink *inlink, AVFrame *in)
+{
+ AVFilterContext *ctx = inlink->dst;
+ AVFilterLink *outlink = ctx->outputs[0];
+ FieldMatchContext *fm = ctx->priv;
+ int combs[] = { -1, -1, -1, -1, -1 };
+ int order, field, i, match, sc = 0;
+ const int *fxo;
+ AVFrame *gen_frames[] = { NULL, NULL, NULL, NULL, NULL };
+ AVFrame *dst;
+
+ /* update frames queue(s) */
+#define SLIDING_FRAME_WINDOW(prv, src, nxt) do { \
+ if (prv != src) /* 2nd loop exception (1st has prv==src and we don't want to lose src) */ \
+ av_frame_free(&prv); \
+ prv = src; \
+ src = nxt; \
+ if (in) \
+ nxt = in; \
+ if (!prv) \
+ prv = src; \
+ if (!prv) /* received only one frame at that point */ \
+ return 0; \
+ av_assert0(prv && src && nxt); \
+} while (0)
+ if (FF_INLINK_IDX(inlink) == INPUT_MAIN) {
+ SLIDING_FRAME_WINDOW(fm->prv, fm->src, fm->nxt);
+ fm->got_frame[INPUT_MAIN] = 1;
+ } else {
+ SLIDING_FRAME_WINDOW(fm->prv2, fm->src2, fm->nxt2);
+ fm->got_frame[INPUT_CLEANSRC] = 1;
+ }
+ if (!fm->got_frame[INPUT_MAIN] || (fm->ppsrc && !fm->got_frame[INPUT_CLEANSRC]))
+ return 0;
+ fm->got_frame[INPUT_MAIN] = fm->got_frame[INPUT_CLEANSRC] = 0;
+ in = fm->src;
+
+ /* parity */
+ order = fm->order != FM_PARITY_AUTO ? fm->order : (in->interlaced_frame ? in->top_field_first : 1);
+ field = fm->field != FM_PARITY_AUTO ? fm->field : order;
+ av_assert0(order == 0 || order == 1 || field == 0 || field == 1);
+ fxo = field ^ order ? fxo1m : fxo0m;
+
+ /* debug mode: we generate all the fields combinations and their associated
+ * combed score. XXX: inject as frame metadata? */
+ if (fm->combdbg) {
+ for (i = 0; i < FF_ARRAY_ELEMS(combs); i++) {
+ if (i > mN && fm->combdbg == COMBDBG_PCN)
+ break;
+ gen_frames[i] = create_weave_frame(ctx, i, field, fm->prv, fm->src, fm->nxt);
+ if (!gen_frames[i])
+ return AVERROR(ENOMEM);
+ combs[i] = calc_combed_score(fm, gen_frames[i]);
+ }
+ av_log(ctx, AV_LOG_INFO, "COMBS: %3d %3d %3d %3d %3d\n",
+ combs[0], combs[1], combs[2], combs[3], combs[4]);
+ } else {
+ gen_frames[mC] = av_frame_clone(fm->src);
+ if (!gen_frames[mC])
+ return AVERROR(ENOMEM);
+ }
+
+ /* p/c selection and optional 3-way p/c/n matches */
+ match = compare_fields(fm, fxo[mC], fxo[mP], field);
+ if (fm->mode == MODE_PCN || fm->mode == MODE_PCN_UB)
+ match = compare_fields(fm, match, fxo[mN], field);
+
+ /* scene change check */
+ if (fm->combmatch == COMBMATCH_SC) {
+ if (fm->lastn == fm->frame_count - 1) {
+ if (fm->lastscdiff > fm->scthresh)
+ sc = 1;
+ } else if (luma_abs_diff(fm->prv, fm->src) > fm->scthresh) {
+ sc = 1;
+ }
+
+ if (!sc) {
+ fm->lastn = fm->frame_count;
+ fm->lastscdiff = luma_abs_diff(fm->src, fm->nxt);
+ sc = fm->lastscdiff > fm->scthresh;
+ }
+ }
+
+ if (fm->combmatch == COMBMATCH_FULL || (fm->combmatch == COMBMATCH_SC && sc)) {
+ switch (fm->mode) {
+ /* 2-way p/c matches */
+ case MODE_PC:
+ match = checkmm(ctx, combs, match, match == fxo[mP] ? fxo[mC] : fxo[mP], gen_frames, field);
+ break;
+ case MODE_PC_N:
+ match = checkmm(ctx, combs, match, fxo[mN], gen_frames, field);
+ break;
+ case MODE_PC_U:
+ match = checkmm(ctx, combs, match, fxo[mU], gen_frames, field);
+ break;
+ case MODE_PC_N_UB:
+ match = checkmm(ctx, combs, match, fxo[mN], gen_frames, field);
+ match = checkmm(ctx, combs, match, fxo[mU], gen_frames, field);
+ match = checkmm(ctx, combs, match, fxo[mB], gen_frames, field);
+ break;
+ /* 3-way p/c/n matches */
+ case MODE_PCN:
+ match = checkmm(ctx, combs, match, match == fxo[mP] ? fxo[mC] : fxo[mP], gen_frames, field);
+ break;
+ case MODE_PCN_UB:
+ match = checkmm(ctx, combs, match, fxo[mU], gen_frames, field);
+ match = checkmm(ctx, combs, match, fxo[mB], gen_frames, field);
+ break;
+ default:
+ av_assert0(0);
+ }
+ }
+
+ /* get output frame and drop the others */
+ if (fm->ppsrc) {
+ /* field matching was based on a filtered/post-processed input, we now
+ * pick the untouched fields from the clean source */
+ dst = create_weave_frame(ctx, match, field, fm->prv2, fm->src2, fm->nxt2);
+ } else {
+ if (!gen_frames[match]) { // XXX: is that possible?
+ dst = create_weave_frame(ctx, match, field, fm->prv, fm->src, fm->nxt);
+ } else {
+ dst = gen_frames[match];
+ gen_frames[match] = NULL;
+ }
+ }
+ if (!dst)
+ return AVERROR(ENOMEM);
+ for (i = 0; i < FF_ARRAY_ELEMS(gen_frames); i++)
+ av_frame_free(&gen_frames[i]);
+
+ /* mark the frames we are unable to match properly as interlaced so a
+ * proper de-interlacer can take over */
+ dst->interlaced_frame = combs[match] >= fm->combpel;
+ if (dst->interlaced_frame) {
+ av_log(ctx, AV_LOG_WARNING, "Frame #%"PRId64" at %s is still interlaced\n",
+ fm->frame_count, av_ts2timestr(in->pts, &inlink->time_base));
+ dst->top_field_first = field;
+ }
+ fm->frame_count++;
+
+ av_log(ctx, AV_LOG_DEBUG, "SC:%d | COMBS: %3d %3d %3d %3d %3d (combpel=%d)"
+ " match=%d combed=%s\n", sc, combs[0], combs[1], combs[2], combs[3], combs[4],
+ fm->combpel, match, dst->interlaced_frame ? "YES" : "NO");
+
+ return ff_filter_frame(outlink, dst);
+}
+
+static int request_inlink(AVFilterContext *ctx, int lid)
+{
+ int ret = 0;
+ FieldMatchContext *fm = ctx->priv;
+
+ if (!fm->got_frame[lid]) {
+ AVFilterLink *inlink = ctx->inputs[lid];
+ ret = ff_request_frame(inlink);
+ if (ret == AVERROR_EOF) { // flushing
+ fm->eof |= 1 << lid;
+ ret = filter_frame(inlink, NULL);
+ }
+ }
+ return ret;
+}
+
+static int request_frame(AVFilterLink *outlink)
+{
+ int ret;
+ AVFilterContext *ctx = outlink->src;
+ FieldMatchContext *fm = ctx->priv;
+ const uint32_t eof_mask = 1<<INPUT_MAIN | fm->ppsrc<<INPUT_CLEANSRC;
+
+ if ((fm->eof & eof_mask) == eof_mask) // flush done?
+ return AVERROR_EOF;
+ if ((ret = request_inlink(ctx, INPUT_MAIN)) < 0)
+ return ret;
+ if (fm->ppsrc && (ret = request_inlink(ctx, INPUT_CLEANSRC)) < 0)
+ return ret;
+ return 0;
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ // TODO: second input source can support >8bit depth
+ static const enum AVPixelFormat pix_fmts[] = {
+ AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV420P,
+ AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P,
+ AV_PIX_FMT_NONE
+ };
+ ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
+ return 0;
+}
+
+static int config_input(AVFilterLink *inlink)
+{
+ int ret;
+ AVFilterContext *ctx = inlink->dst;
+ FieldMatchContext *fm = ctx->priv;
+ const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(inlink->format);
+ const int w = inlink->w;
+ const int h = inlink->h;
+
+ fm->scthresh = (int64_t)((w * h * 255.0 * fm->scthresh_flt) / 100.0);
+
+ if ((ret = av_image_alloc(fm->map_data, fm->map_linesize, w, h, inlink->format, 32)) < 0 ||
+ (ret = av_image_alloc(fm->cmask_data, fm->cmask_linesize, w, h, inlink->format, 32)) < 0)
+ return ret;
+
+ fm->hsub = pix_desc->log2_chroma_w;
+ fm->vsub = pix_desc->log2_chroma_h;
+
+ fm->tpitchy = FFALIGN(w, 16);
+ fm->tpitchuv = FFALIGN(w >> 1, 16);
+
+ fm->tbuffer = av_malloc(h/2 * fm->tpitchy);
+ fm->c_array = av_malloc((((w + fm->blockx/2)/fm->blockx)+1) *
+ (((h + fm->blocky/2)/fm->blocky)+1) *
+ 4 * sizeof(*fm->c_array));
+ if (!fm->tbuffer || !fm->c_array)
+ return AVERROR(ENOMEM);
+
+ return 0;
+}
+
+static av_cold int fieldmatch_init(AVFilterContext *ctx)
+{
+ const FieldMatchContext *fm = ctx->priv;
+ AVFilterPad pad = {
+ .name = av_strdup("main"),
+ .type = AVMEDIA_TYPE_VIDEO,
+ .filter_frame = filter_frame,
+ .config_props = config_input,
+ };
+
+ if (!pad.name)
+ return AVERROR(ENOMEM);
+ ff_insert_inpad(ctx, INPUT_MAIN, &pad);
+
+ if (fm->ppsrc) {
+ pad.name = av_strdup("clean_src");
+ pad.config_props = NULL;
+ if (!pad.name)
+ return AVERROR(ENOMEM);
+ ff_insert_inpad(ctx, INPUT_CLEANSRC, &pad);
+ }
+
+ if ((fm->blockx & (fm->blockx - 1)) ||
+ (fm->blocky & (fm->blocky - 1))) {
+ av_log(ctx, AV_LOG_ERROR, "blockx and blocky settings must be power of two\n");
+ return AVERROR(EINVAL);
+ }
+
+ if (fm->combpel > fm->blockx * fm->blocky) {
+ av_log(ctx, AV_LOG_ERROR, "Combed pixel should not be larger than blockx x blocky\n");
+ return AVERROR(EINVAL);
+ }
+
+ return 0;
+}
+
+static av_cold void fieldmatch_uninit(AVFilterContext *ctx)
+{
+ int i;
+ FieldMatchContext *fm = ctx->priv;
+
+ if (fm->prv != fm->src)
+ av_frame_free(&fm->prv);
+ if (fm->nxt != fm->src)
+ av_frame_free(&fm->nxt);
+ av_frame_free(&fm->src);
+ av_freep(&fm->map_data[0]);
+ av_freep(&fm->cmask_data[0]);
+ av_freep(&fm->tbuffer);
+ av_freep(&fm->c_array);
+ for (i = 0; i < ctx->nb_inputs; i++)
+ av_freep(&ctx->input_pads[i].name);
+}
+
+static int config_output(AVFilterLink *outlink)
+{
+ AVFilterContext *ctx = outlink->src;
+ const FieldMatchContext *fm = ctx->priv;
+ const AVFilterLink *inlink =
+ ctx->inputs[fm->ppsrc ? INPUT_CLEANSRC : INPUT_MAIN];
+
+ outlink->flags |= FF_LINK_FLAG_REQUEST_LOOP;
+ outlink->time_base = inlink->time_base;
+ outlink->sample_aspect_ratio = inlink->sample_aspect_ratio;
+ outlink->frame_rate = inlink->frame_rate;
+ outlink->w = inlink->w;
+ outlink->h = inlink->h;
+ return 0;
+}
+
+static const AVFilterPad fieldmatch_outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .request_frame = request_frame,
+ .config_props = config_output,
+ },
+ { NULL }
+};
+
+AVFilter avfilter_vf_fieldmatch = {
+ .name = "fieldmatch",
+ .description = NULL_IF_CONFIG_SMALL("Field matching for inverse telecine"),
+ .query_formats = query_formats,
+ .priv_size = sizeof(FieldMatchContext),
+ .init = fieldmatch_init,
+ .uninit = fieldmatch_uninit,
+ .inputs = NULL,
+ .outputs = fieldmatch_outputs,
+ .priv_class = &fieldmatch_class,
+ .flags = AVFILTER_FLAG_DYNAMIC_INPUTS,
+};