[FFmpeg-devel] [PATCH v2] Add lensfun filter
Stephen Seo
seo.disparate at gmail.com
Thu Jul 12 08:32:25 EEST 2018
Lensfun is a library that applies lens correction to an image using a
database of cameras/lenses: you provide the camera and lens models, and
it uses the corresponding database entry's parameters to perform the
correction. It is licensed under LGPL3.
The lensfun filter utilizes the lensfun library to apply lens correction
to videos as well as images.
This filter was created out of necessity since I wanted to apply lens
correction to a video and the lenscorrection filter did not work for me.
While this filter requires little info from the user to apply lens
correction, the caveat is that lensfun is intended to be used on individual
images. When used on a video, parameters such as focal length are held
constant, so lens correction may fail on videos where the camera's focal
length changes (zooming in or out with a zoom lens). To use this filter
correctly on videos where such parameters change, timeline editing may
be used, since this filter supports it.
Note that valgrind shows a small memory leak which comes not from this
filter but from the lensfun library (memory is allocated when loading
the lensfun database but somehow is not deallocated even during
cleanup; the database is briefly created in the filter's init function
and destroyed before init returns). This may have been fixed by the
latest commit in the lensfun repository; the most recent release of
lensfun is almost 3 years old.
Bilinear interpolation is used by default, as Lanczos interpolation
showed more artifacts in the corrected image in my tests.
The Lanczos interpolation is derived from lenstool's implementation of
Lanczos interpolation. Lenstool is an app within the lensfun repository
and is licensed under GPL3.
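For reference, the Lanczos-2 kernel that the filter precomputes into a lookup
table can be sketched as below (an illustrative Python sketch mirroring the
patch's lanczos_kernel(), not part of the patch itself):

```python
import math

def lanczos2(x):
    # Lanczos-2 kernel: sinc(x) * sinc(x / 2) on the support (-2, 2),
    # written in the same closed form as the filter's lanczos_kernel().
    if x == 0.0:
        return 1.0
    if -2.0 < x < 2.0:
        return (2.0 * math.sin(math.pi * x) * math.sin(math.pi / 2.0 * x)) \
               / (math.pi * math.pi * x * x)
    return 0.0

# The filter tabulates the kernel against a squared argument so that the
# per-pixel lookup avoids a sqrt: table[i] ~ lanczos2(sqrt(i / RESOLUTION)).
RESOLUTION = 256
table = [lanczos2(math.sqrt(i / RESOLUTION)) for i in range(4 * RESOLUTION)]
```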
v2 of this patch fixes license notice in libavfilter/vf_lensfun.c
Signed-off-by: Stephen Seo <seo.disparate at gmail.com>
---
configure | 5 +
doc/filters.texi | 103 +++++++
libavfilter/Makefile | 1 +
libavfilter/allfilters.c | 1 +
libavfilter/vf_lensfun.c | 605 +++++++++++++++++++++++++++++++++++++++
5 files changed, 715 insertions(+)
create mode 100644 libavfilter/vf_lensfun.c
diff --git a/configure b/configure
index b1a4dcfc42..efc323d4c7 100755
--- a/configure
+++ b/configure
@@ -217,6 +217,7 @@ External library support:
--disable-iconv disable iconv [autodetect]
--enable-jni enable JNI support [no]
--enable-ladspa enable LADSPA audio filtering [no]
+ --enable-lensfun enable lensfun lens correction [no]
--enable-libaom enable AV1 video encoding/decoding via libaom [no]
--enable-libass enable libass subtitles rendering,
needed for subtitles and ass filter [no]
@@ -1656,6 +1657,7 @@ EXTERNAL_LIBRARY_NONFREE_LIST="
EXTERNAL_LIBRARY_VERSION3_LIST="
gmp
+ lensfun
libopencore_amrnb
libopencore_amrwb
libvmaf
@@ -3353,6 +3355,8 @@ hqdn3d_filter_deps="gpl"
interlace_filter_deps="gpl"
kerndeint_filter_deps="gpl"
ladspa_filter_deps="ladspa libdl"
+lensfun_filter_deps="lensfun"
+lensfun_src_filter_deps="lensfun"
lv2_filter_deps="lv2"
mcdeint_filter_deps="avcodec gpl"
movie_filter_deps="avcodec avformat"
@@ -5994,6 +5998,7 @@ enabled gmp && require gmp gmp.h mpz_export -lgmp
enabled gnutls && require_pkg_config gnutls gnutls gnutls/gnutls.h gnutls_global_init
enabled jni && { [ $target_os = "android" ] && check_header jni.h && enabled pthreads || die "ERROR: jni not found"; }
enabled ladspa && require_header ladspa.h
+enabled lensfun && require_pkg_config lensfun lensfun lensfun.h lf_db_new
enabled libaom && require_pkg_config libaom "aom >= 1.0.0" aom/aom_codec.h aom_codec_version
enabled lv2 && require_pkg_config lv2 lilv-0 "lilv/lilv.h" lilv_world_new
enabled libiec61883 && require libiec61883 libiec61883/iec61883.h iec61883_cmp_connect -lraw1394 -lavc1394 -lrom1394 -liec61883
diff --git a/doc/filters.texi b/doc/filters.texi
index d236bd69b7..528756c2af 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -10700,6 +10700,109 @@ The formula that generates the correction is:
where @var{r_0} is halve of the image diagonal and @var{r_src} and @var{r_tgt} are the
distances from the focal point in the source and target images, respectively.
+@section lensfun
+
+Apply lens correction via the lensfun library (@url{http://lensfun.sourceforge.net/}).
+
+The @code{lensfun} filter requires the camera make, camera model, and lens model
+to apply the lens correction. The filter will load the lensfun database and
+query it to find the corresponding camera and lens entries in the database. As
+long as these entries can be found with the given options, the filter can
+perform corrections on frames. Note that an incomplete string will result in
+the filter choosing the best match for the given options, and the chosen
+camera and lens models are logged at the "info" level.
+
+The filter accepts the following options:
+
+@table @option
+@item make
+The make of the camera (for example, "Canon"). This option is required.
+@item model
+The model of the camera (for example, "Canon EOS 100D"). This option is
+required.
+@item lens_model
+The model of the lens (for example, "Canon EF-S 18-55mm f/3.5-5.6 IS STM"). This
+option is required.
+@item mode
+The type of correction to apply. The following values are valid options:
+@table @samp
+@item vignetting
+Enables fixing lens vignetting.
+@item geometry
+Enables fixing lens geometry. This is the default.
+@item subpixel
+Enables fixing chromatic aberrations.
+@item vig_geo
+Enables fixing lens vignetting and lens geometry.
+@item vig_subpixel
+Enables fixing lens vignetting and chromatic aberrations.
+@item distortion
+Enables fixing both lens geometry and chromatic aberrations.
+@item all
+Enables all possible corrections.
+@end table
+@item focal_length
+The focal length of the image/video (zoom; expected constant for video). For
+example, an 18--55mm lens has a focal length range of [18--55], so a value in
+that range should be chosen when using that lens. Default 18.
+@item aperture
+The aperture of the image/video (expected constant for video). Note that
+aperture is only used for vignetting correction. Default 3.5.
+@item focus_distance
+The focus distance of the image/video (expected constant for video). Note that
+focus distance is only used for vignetting and only slightly affects the
+vignetting correction process. If unknown, leave it at the default value (which
+is 1000).
+@item target_geometry
+The target geometry of the output image/video. The following values are valid
+options:
+@table @samp
+@item rectilinear
+(default)
+@item fisheye
+@item panoramic
+@item equirectangular
+@item fisheye_orthographic
+@item fisheye_stereographic
+@item fisheye_equisolid
+@item fisheye_thoby
+@end table
+@item reverse
+Apply the reverse of image correction (instead of correcting distortion, apply
+it).
+@item interpolation
+The type of interpolation used when correcting distortion. The following values
+are valid options:
+@table @samp
+@item nearest
+@item linear
+(default)
+@item lanczos
+@end table
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Apply lens correction with make "Canon", camera model "Canon EOS 100D", and lens
+model "Canon EF-S 18-55mm f/3.5-5.6 IS STM" with focal length of "18" and
+aperture of "8.0".
+
+@example
+ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8 -c:v h264 -b:v 8000k output.mov
+@end example
+
+@item
+Apply the same as before, but only for the first 5 seconds of video.
+
+@example
+ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8:enable='lte(t\,5)' -c:v h264 -b:v 8000k output.mov
+@end example
+
+@end itemize
+
@section libvmaf
Obtain the VMAF (Video Multi-Method Assessment Fusion)
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 7735c26529..c19848d203 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -391,6 +391,7 @@ OBJS-$(CONFIG_YADIF_FILTER) += vf_yadif.o
OBJS-$(CONFIG_ZMQ_FILTER) += f_zmq.o
OBJS-$(CONFIG_ZOOMPAN_FILTER) += vf_zoompan.o
OBJS-$(CONFIG_ZSCALE_FILTER) += vf_zscale.o
+OBJS-$(CONFIG_LENSFUN_FILTER) += vf_lensfun.o
OBJS-$(CONFIG_ALLRGB_FILTER) += vsrc_testsrc.o
OBJS-$(CONFIG_ALLYUV_FILTER) += vsrc_testsrc.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 0ded83ede2..521bc53164 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -237,6 +237,7 @@ extern AVFilter ff_vf_interlace;
extern AVFilter ff_vf_interleave;
extern AVFilter ff_vf_kerndeint;
extern AVFilter ff_vf_lenscorrection;
+extern AVFilter ff_vf_lensfun;
extern AVFilter ff_vf_libvmaf;
extern AVFilter ff_vf_limiter;
extern AVFilter ff_vf_loop;
diff --git a/libavfilter/vf_lensfun.c b/libavfilter/vf_lensfun.c
new file mode 100644
index 0000000000..732978b583
--- /dev/null
+++ b/libavfilter/vf_lensfun.c
@@ -0,0 +1,605 @@
+/*
+ * Copyright (C) 2007 by Andrew Zabolotny (author of lensfun, from which this filter derives from)
+ * Copyright (C) 2018 Stephen Seo
+ *
+ * This file is part of FFmpeg.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <https://www.gnu.org/licenses/>.
+ */
+
+/**
+ * @file
+ * Lensfun filter, applies lens correction with parameters from the lensfun database
+ *
+ * @see https://lensfun.sourceforge.net/
+ */
+
+#include <float.h>
+#include <math.h>
+
+#include "libavutil/avassert.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/opt.h"
+#include "libswscale/swscale.h"
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+#include "video.h"
+
+#include <lensfun.h>
+
+#define LANCZOS_RESOLUTION 256
+
+enum Mode {
+ VIGNETTING = 0x1,
+ GEOMETRY_DISTORTION = 0x2,
+ SUBPIXEL_DISTORTION = 0x4
+};
+
+enum InterpolationType {
+ NEAREST,
+ LINEAR,
+ LANCZOS
+};
+
+struct VignettingThreadData
+{
+ int width, height;
+ uint8_t *data_in;
+ int linesize_in;
+ int pixel_composition;
+ lfModifier *modifier;
+};
+
+struct DistortionCorrectionThreadData {
+ int width, height;
+ const float *distortion_coords;
+ const uint8_t *data_in;
+ uint8_t *data_out;
+ int linesize_in, linesize_out;
+ const float *interpolation;
+ int mode;
+ int interpolation_type;
+};
+
+typedef struct LensfunContext {
+ const AVClass *class;
+ const char* make;
+ const char* model;
+ const char* lens_model;
+ int mode;
+ float focal_length;
+ float aperture;
+ float focus_distance;
+ int target_geometry;
+ int reverse;
+ int interpolation_type;
+
+ float *distortion_coords;
+ float *interpolation;
+
+ lfLens *lens;
+ lfCamera *camera;
+ lfModifier *modifier;
+} LensfunContext;
+
+#define OFFSET(x) offsetof(LensfunContext, x)
+#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
+static const AVOption lensfun_options[] = {
+ { "make", "set camera maker", OFFSET(make), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
+ { "model", "set camera model", OFFSET(model), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
+ { "lens_model", "set lens model", OFFSET(lens_model), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
+ { "mode", "set mode", OFFSET(mode), AV_OPT_TYPE_INT, {.i64=GEOMETRY_DISTORTION}, 0, VIGNETTING | GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION, FLAGS, "mode" },
+ { "vignetting", "fix lens vignetting", 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING}, 0, 0, FLAGS, "mode" },
+ { "geometry", "correct geometry distortion", 0, AV_OPT_TYPE_CONST, {.i64=GEOMETRY_DISTORTION}, 0, 0, FLAGS, "mode" },
+ { "subpixel", "fix chromatic aberrations", 0, AV_OPT_TYPE_CONST, {.i64=SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" },
+ { "vig_geo", "fix lens vignetting and correct geometry distortion", 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING | GEOMETRY_DISTORTION}, 0, 0, FLAGS, "mode" },
+ { "vig_subpixel", "fix lens vignetting and chromatic aberrations", 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING | SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" },
+ { "distortion", "correct geometry distortion and chromatic aberrations", 0, AV_OPT_TYPE_CONST, {.i64=GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" },
+ { "all", NULL, 0, AV_OPT_TYPE_CONST, {.i64=VIGNETTING | GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION}, 0, 0, FLAGS, "mode" },
+ { "focal_length", "focal length of video (zoom; expected constant)", OFFSET(focal_length), AV_OPT_TYPE_FLOAT, {.dbl=18}, 0.0, DBL_MAX, FLAGS },
+ { "aperture", "aperture (expected constant)", OFFSET(aperture), AV_OPT_TYPE_FLOAT, {.dbl=3.5}, 0.0, DBL_MAX, FLAGS },
+ { "focus_distance", "focus distance (expected constant)", OFFSET(focus_distance), AV_OPT_TYPE_FLOAT, {.dbl=1000.0f}, 0.0, DBL_MAX, FLAGS },
+ { "target_geometry", "target geometry of the lens correction (only when geometry correction is enabled)", OFFSET(target_geometry), AV_OPT_TYPE_INT, {.i64=LF_RECTILINEAR}, 0, INT_MAX, FLAGS, "lens_geometry" },
+ { "rectilinear", "rectilinear lens (default)", 0, AV_OPT_TYPE_CONST, {.i64=LF_RECTILINEAR}, 0, 0, FLAGS, "lens_geometry" },
+ { "fisheye", "fisheye lens", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE}, 0, 0, FLAGS, "lens_geometry" },
+ { "panoramic", "panoramic (cylindrical)", 0, AV_OPT_TYPE_CONST, {.i64=LF_PANORAMIC}, 0, 0, FLAGS, "lens_geometry" },
+ { "equirectangular", "equirectangular", 0, AV_OPT_TYPE_CONST, {.i64=LF_EQUIRECTANGULAR}, 0, 0, FLAGS, "lens_geometry" },
+ { "fisheye_orthographic", "orthographic fisheye", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_ORTHOGRAPHIC}, 0, 0, FLAGS, "lens_geometry" },
+ { "fisheye_stereographic", "stereographic fisheye", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_STEREOGRAPHIC}, 0, 0, FLAGS, "lens_geometry" },
+ { "fisheye_equisolid", "equisolid fisheye", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_EQUISOLID}, 0, 0, FLAGS, "lens_geometry" },
+ { "fisheye_thoby", "fisheye as measured by thoby", 0, AV_OPT_TYPE_CONST, {.i64=LF_FISHEYE_THOBY}, 0, 0, FLAGS, "lens_geometry" },
+ { "reverse", "Does reverse correction (regular image to lens distorted)", OFFSET(reverse), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS },
+ { "interpolation", "Type of interpolation", OFFSET(interpolation_type), AV_OPT_TYPE_INT, {.i64=LINEAR}, 0, LANCZOS, FLAGS, "interpolation" },
+ { "nearest", NULL, 0, AV_OPT_TYPE_CONST, {.i64=NEAREST}, 0, 0, FLAGS, "interpolation" },
+ { "linear", NULL, 0, AV_OPT_TYPE_CONST, {.i64=LINEAR}, 0, 0, FLAGS, "interpolation" },
+ { "lanczos", NULL, 0, AV_OPT_TYPE_CONST, {.i64=LANCZOS}, 0, 0, FLAGS, "interpolation" },
+ { NULL }
+};
+
+AVFILTER_DEFINE_CLASS(lensfun);
+
+static av_cold int init(AVFilterContext *ctx)
+{
+ LensfunContext *lensfun = ctx->priv;
+ lfDatabase *db;
+ const lfCamera **cameras;
+ const lfLens **lenses;
+
+ if(!lensfun->make)
+ {
+ av_log(ctx, AV_LOG_FATAL, "Option \"make\" not specified\n");
+ return AVERROR(EINVAL);
+ }
+ else if(!lensfun->model)
+ {
+ av_log(ctx, AV_LOG_FATAL, "Option \"model\" not specified\n");
+ return AVERROR(EINVAL);
+ }
+ else if(!lensfun->lens_model)
+ {
+ av_log(ctx, AV_LOG_FATAL, "Option \"lens_model\" not specified\n");
+ return AVERROR(EINVAL);
+ }
+
+ lensfun->lens = lf_lens_new();
+ lensfun->camera = lf_camera_new();
+
+ db = lf_db_new();
+ if(lf_db_load(db) != LF_NO_ERROR) {
+ lf_db_destroy(db);
+ av_log(ctx, AV_LOG_FATAL, "Failed to load lensfun database\n");
+ return AVERROR_INVALIDDATA;
+ }
+
+ cameras = lf_db_find_cameras(db, lensfun->make, lensfun->model);
+ if(cameras != NULL && *cameras != NULL)
+ {
+ lf_camera_copy(lensfun->camera, *cameras);
+ av_log(ctx, AV_LOG_INFO, "Using camera %s\n", lensfun->camera->Model);
+ }
+ else
+ {
+ lf_free(cameras);
+ lf_db_destroy(db);
+ av_log(ctx, AV_LOG_FATAL, "Failed to find camera in lensfun database\n");
+ return AVERROR_INVALIDDATA;
+ }
+ lf_free(cameras);
+
+ lenses = lf_db_find_lenses_hd(db, lensfun->camera, NULL, lensfun->lens_model, 0);
+ if(lenses != NULL && *lenses != NULL)
+ {
+ lf_lens_copy(lensfun->lens, *lenses);
+ av_log(ctx, AV_LOG_INFO, "Using lens %s\n", lensfun->lens->Model);
+ }
+ else
+ {
+ lf_free(lenses);
+ lf_db_destroy(db);
+ av_log(ctx, AV_LOG_FATAL, "Failed to find lens in lensfun database\n");
+ return AVERROR_INVALIDDATA;
+ }
+ lf_free(lenses);
+
+ lf_db_destroy(db);
+ return 0;
+}
+
+static int query_formats(AVFilterContext *ctx)
+{
+ // Some of the functions provided by lensfun require pixels in RGB format
+ static const enum AVPixelFormat fmts[] = {AV_PIX_FMT_RGB24, AV_PIX_FMT_NONE};
+ AVFilterFormats *fmts_list = ff_make_format_list(fmts);
+ return ff_set_common_formats(ctx, fmts_list);
+}
+
+static float lanczos_kernel(float x)
+{
+ if(x == 0.0f)
+ {
+ return 1.0f;
+ }
+ else if(x > -2.0f && x < 2.0f)
+ {
+ return (2.0f * sin(M_PI * x) * sin(M_PI / 2.0f * x)) / (M_PI * M_PI * x * x);
+ }
+ else
+ {
+ return 0.0f;
+ }
+}
+
+static int config_props(AVFilterLink *inlink)
+{
+ AVFilterContext *ctx = inlink->dst;
+ LensfunContext *lensfun = ctx->priv;
+ int index;
+ float a;
+ int lensfun_mode = 0;
+
+ if(!lensfun->modifier)
+ {
+ if(lensfun->camera && lensfun->lens)
+ {
+ lensfun->modifier = lf_modifier_new(lensfun->lens, lensfun->camera->CropFactor, inlink->w, inlink->h);
+ if(lensfun->mode & VIGNETTING)
+ {
+ lensfun_mode |= LF_MODIFY_VIGNETTING;
+ }
+ if(lensfun->mode & GEOMETRY_DISTORTION)
+ {
+ lensfun_mode |= LF_MODIFY_DISTORTION | LF_MODIFY_GEOMETRY | LF_MODIFY_SCALE;
+ }
+ if(lensfun->mode & SUBPIXEL_DISTORTION)
+ {
+ lensfun_mode |= LF_MODIFY_TCA;
+ }
+ lf_modifier_initialize(lensfun->modifier, lensfun->lens, LF_PF_U8, lensfun->focal_length, lensfun->aperture, lensfun->focus_distance, 0.0, lensfun->target_geometry, lensfun_mode, lensfun->reverse);
+ }
+ else
+ {
+ return AVERROR_INVALIDDATA;
+ }
+ }
+
+ if(!lensfun->distortion_coords)
+ {
+ if(lensfun->mode & SUBPIXEL_DISTORTION)
+ {
+ lensfun->distortion_coords = malloc(sizeof(float) * inlink->w * inlink->h * 2 * 3);
+ if(lensfun->mode & GEOMETRY_DISTORTION)
+ {
+ // apply both geometry and subpixel distortion
+ lf_modifier_apply_subpixel_geometry_distortion(lensfun->modifier, 0, 0, inlink->w, inlink->h, lensfun->distortion_coords);
+ }
+ else
+ {
+ // apply only subpixsel distortion
+ lf_modifier_apply_subpixel_distortion(lensfun->modifier, 0, 0, inlink->w, inlink->h, lensfun->distortion_coords);
+ }
+ }
+ else if(lensfun->mode & GEOMETRY_DISTORTION)
+ {
+ lensfun->distortion_coords = malloc(sizeof(float) * inlink->w * inlink->h * 2);
+ // apply only geometry distortion
+ lf_modifier_apply_geometry_distortion(lensfun->modifier, 0, 0, inlink->w, inlink->h, lensfun->distortion_coords);
+ }
+ }
+
+ if(!lensfun->interpolation)
+ {
+ if(lensfun->interpolation_type == LANCZOS)
+ {
+ lensfun->interpolation = malloc(sizeof(float) * 4 * LANCZOS_RESOLUTION);
+ for(index = 0; index < 4 * LANCZOS_RESOLUTION; ++index)
+ {
+ a = sqrt((float)index / LANCZOS_RESOLUTION);
+ if(a == 0.0f)
+ {
+ lensfun->interpolation[index] = 1.0f;
+ }
+ else
+ {
+ lensfun->interpolation[index] = lanczos_kernel(a);
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int vignetting_filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+ const struct VignettingThreadData *thread_data = arg;
+ const int slice_start = thread_data->height * jobnr / nb_jobs;
+ const int slice_end = thread_data->height * (jobnr + 1) / nb_jobs;
+
+ lf_modifier_apply_color_modification(thread_data->modifier, thread_data->data_in + slice_start * thread_data->linesize_in, 0, slice_start, thread_data->width, slice_end - slice_start, thread_data->pixel_composition, thread_data->linesize_in);
+
+ return 0;
+}
+
+static float square(float x)
+{
+ return x * x;
+}
+
+static int distortion_correction_filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
+{
+ const struct DistortionCorrectionThreadData *thread_data = arg;
+ const int slice_start = thread_data->height * jobnr / nb_jobs;
+ const int slice_end = thread_data->height * (jobnr + 1) / nb_jobs;
+
+ int x, y, i, j, rgb_index;
+ float interpolated, new_x, new_y, d, norm;
+ int new_x_int, new_y_int;
+ for(y = slice_start; y < slice_end; ++y)
+ {
+ for(x = 0; x < thread_data->width; ++x)
+ {
+ for(rgb_index = 0; rgb_index < 3; ++rgb_index)
+ {
+ if(thread_data->mode & SUBPIXEL_DISTORTION)
+ {
+ // subpixel (and possibly geometry) distortion correction was applied, correct distortion
+ switch(thread_data->interpolation_type)
+ {
+ case NEAREST:
+ new_x_int = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2] + 0.5f;
+ new_y_int = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2 + 1] + 0.5f;
+ if(new_x_int < 0 || new_x_int >= thread_data->width || new_y_int < 0 || new_y_int >= thread_data->height)
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0;
+ }
+ else
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = thread_data->data_in[new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in];
+ }
+ break;
+ case LINEAR:
+ interpolated = 0.0f;
+ new_x = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2];
+ new_x_int = new_x;
+ new_y = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2 + 1];
+ new_y_int = new_y;
+ if(new_x_int < 0 || new_x_int + 1 >= thread_data->width || new_y_int < 0 || new_y_int + 1 >= thread_data->height)
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0;
+ }
+ else
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] =
+ thread_data->data_in[ new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y_int + 1 - new_y)
+ + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x - new_x_int) * (new_y_int + 1 - new_y)
+ + thread_data->data_in[ new_x_int * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y - new_y_int)
+ + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x - new_x_int) * (new_y - new_y_int);
+ }
+ break;
+ case LANCZOS:
+ interpolated = 0.0f;
+ norm = 0.0f;
+ new_x = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2];
+ new_x_int = new_x;
+ new_y = thread_data->distortion_coords[x * 2 * 3 + y * thread_data->width * 2 * 3 + rgb_index * 2 + 1];
+ new_y_int = new_y;
+ for(j = 0; j < 4; ++j)
+ {
+ for(i = 0; i < 4; ++i)
+ {
+ if(new_x_int + i - 2 < 0 || new_x_int + i - 2 >= thread_data->width
+ || new_y_int + j - 2 < 0 || new_y_int + j - 2 >= thread_data->height)
+ {
+ continue;
+ }
+ d = square(new_x - (new_x_int + i - 2)) * square(new_y - (new_y_int + j - 2));
+ if(d >= 4.0f)
+ {
+ continue;
+ }
+ d = thread_data->interpolation[(int)(d * LANCZOS_RESOLUTION)];
+ norm += d;
+ interpolated += thread_data->data_in[(new_x_int + i - 2) * 3 + rgb_index + (new_y_int + j - 2) * thread_data->linesize_in] * d;
+ }
+ }
+ if(norm == 0.0f)
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0;
+ }
+ else
+ {
+ interpolated /= norm;
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = interpolated < 0.0f ? 0.0f : interpolated > 255.0f ? 255 : interpolated;
+ }
+ break;
+ }
+ }
+ else if(thread_data->mode & GEOMETRY_DISTORTION)
+ {
+ // geometry distortion correction was applied, correct distortion
+ switch(thread_data->interpolation_type)
+ {
+ case NEAREST:
+ new_x_int = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2] + 0.5f;
+ new_y_int = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2 + 1] + 0.5f;
+ if(new_x_int < 0 || new_x_int >= thread_data->width || new_y_int < 0 || new_y_int >= thread_data->height)
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0;
+ }
+ else
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = thread_data->data_in[new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in];
+ }
+ break;
+ case LINEAR:
+ interpolated = 0.0f;
+ new_x = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2];
+ new_x_int = new_x;
+ new_y = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2 + 1];
+ new_y_int = new_y;
+ if(new_x_int < 0 || new_x_int + 1 >= thread_data->width || new_y_int < 0 || new_y_int + 1 >= thread_data->height)
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0;
+ }
+ else
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] =
+ thread_data->data_in[ new_x_int * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y_int + 1 - new_y)
+ + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + new_y_int * thread_data->linesize_in] * (new_x - new_x_int) * (new_y_int + 1 - new_y)
+ + thread_data->data_in[ new_x_int * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x_int + 1 - new_x) * (new_y - new_y_int)
+ + thread_data->data_in[(new_x_int + 1) * 3 + rgb_index + (new_y_int + 1) * thread_data->linesize_in] * (new_x - new_x_int) * (new_y - new_y_int);
+ }
+ break;
+ case LANCZOS:
+ interpolated = 0.0f;
+ norm = 0.0f;
+ new_x = thread_data->distortion_coords[x * 2 + y * thread_data->width * 2];
+ new_x_int = new_x;
+ new_y = thread_data->distortion_coords[x * 2 + 1 + y * thread_data->width * 2];
+ new_y_int = new_y;
+ for(j = 0; j < 4; ++j)
+ {
+ for(i = 0; i < 4; ++i)
+ {
+ if(new_x_int + i - 2 < 0 || new_x_int + i - 2 >= thread_data->width
+ || new_y_int + j - 2 < 0 || new_y_int + j - 2 >= thread_data->height)
+ {
+ continue;
+ }
+ d = square(new_x - (new_x_int + i - 2)) * square(new_y - (new_y_int + j - 2));
+ if(d >= 4.0f)
+ {
+ continue;
+ }
+ d = thread_data->interpolation[(int)(d * LANCZOS_RESOLUTION)];
+ norm += d;
+ interpolated += thread_data->data_in[(new_x_int + i - 2) * 3 + rgb_index + (new_y_int + j - 2) * thread_data->linesize_in] * d;
+ }
+ }
+ if(norm == 0.0f)
+ {
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = 0;
+ }
+ else
+ {
+ interpolated /= norm;
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = interpolated < 0.0f ? 0.0f : interpolated > 255.0f ? 255 : interpolated;
+ }
+ break;
+ }
+ }
+ else
+ {
+ // no distortion correction was applied
+ thread_data->data_out[x * 3 + rgb_index + y * thread_data->linesize_out] = thread_data->data_in[x * 3 + rgb_index + y * thread_data->linesize_in];
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int filter_frame(AVFilterLink *inlink, AVFrame *in)
+{
+ AVFilterContext *ctx = inlink->dst;
+ LensfunContext *lensfun = ctx->priv;
+ AVFilterLink *outlink = ctx->outputs[0];
+ AVFrame *out;
+ struct VignettingThreadData vignetting_thread_data;
+ struct DistortionCorrectionThreadData distortion_correction_thread_data;
+
+ if(lensfun->mode & VIGNETTING)
+ {
+ if(av_frame_make_writable(in) < 0)
+ {
+ av_frame_free(&in);
+ return AVERROR(ENOMEM);
+ }
+
+ vignetting_thread_data.width = inlink->w;
+ vignetting_thread_data.height = inlink->h;
+ vignetting_thread_data.data_in = in->data[0];
+ vignetting_thread_data.linesize_in = in->linesize[0];
+ vignetting_thread_data.pixel_composition = LF_CR_3(RED, GREEN, BLUE);
+ vignetting_thread_data.modifier = lensfun->modifier;
+
+ ctx->internal->execute(ctx, vignetting_filter_slice, &vignetting_thread_data, NULL, FFMIN(outlink->h, ctx->graph->nb_threads));
+ }
+
+ if(lensfun->mode & (GEOMETRY_DISTORTION | SUBPIXEL_DISTORTION))
+ {
+ out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
+ if(!out)
+ {
+ av_frame_free(&in);
+ return AVERROR(ENOMEM);
+ }
+ av_frame_copy_props(out, in);
+
+ distortion_correction_thread_data.width = inlink->w;
+ distortion_correction_thread_data.height = inlink->h;
+ distortion_correction_thread_data.distortion_coords = lensfun->distortion_coords;
+ distortion_correction_thread_data.data_in = in->data[0];
+ distortion_correction_thread_data.data_out = out->data[0];
+ distortion_correction_thread_data.linesize_in = in->linesize[0];
+ distortion_correction_thread_data.linesize_out = out->linesize[0];
+ distortion_correction_thread_data.interpolation = lensfun->interpolation;
+ distortion_correction_thread_data.mode = lensfun->mode;
+ distortion_correction_thread_data.interpolation_type = lensfun->interpolation_type;
+
+ ctx->internal->execute(ctx, distortion_correction_filter_slice, &distortion_correction_thread_data, NULL, FFMIN(outlink->h, ctx->graph->nb_threads));
+
+ av_frame_free(&in);
+ return ff_filter_frame(outlink, out);
+ }
+ else
+ {
+ return ff_filter_frame(outlink, in);
+ }
+}
+
+static av_cold void uninit(AVFilterContext *ctx)
+{
+ LensfunContext *lensfun = ctx->priv;
+
+ if(lensfun->camera)
+ {
+ lf_camera_destroy(lensfun->camera);
+ }
+ if(lensfun->lens)
+ {
+ lf_lens_destroy(lensfun->lens);
+ }
+ if(lensfun->modifier)
+ {
+ lf_modifier_destroy(lensfun->modifier);
+ }
+ if(lensfun->distortion_coords)
+ {
+ free(lensfun->distortion_coords);
+ }
+ if(lensfun->interpolation)
+ {
+ free(lensfun->interpolation);
+ }
+}
+
+static const AVFilterPad lensfun_inputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ .config_props = config_props,
+ .filter_frame = filter_frame,
+ },
+ { NULL }
+};
+
+static const AVFilterPad lensfun_outputs[] = {
+ {
+ .name = "default",
+ .type = AVMEDIA_TYPE_VIDEO,
+ },
+ { NULL }
+};
+
+AVFilter ff_vf_lensfun = {
+ .name = "lensfun",
+ .description = NULL_IF_CONFIG_SMALL("Apply correction to an image based on info derived from the lensfun database."),
+ .priv_size = sizeof(LensfunContext),
+ .init = init,
+ .uninit = uninit,
+ .query_formats = query_formats,
+ .inputs = lensfun_inputs,
+ .outputs = lensfun_outputs,
+ .priv_class = &lensfun_class,
+ .flags = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SLICE_THREADS,
+};
--
2.18.0