[FFmpeg-devel] [PATCH] avfilter/vf_xcam: add xcam video filter

Zong, Wei wei.zong at intel.com
Fri Jul 31 10:52:04 EEST 2020


Thank you for the comments on this patch. I would like to address them point by point.

1. The purpose of this patch
    We received requests from users who are trying libxcam stitching and from developers who have shown interest. Many of them want an easy way to construct an end-to-end pipeline for high quality immersive media applications.
    The key points are high resolution and low latency. FFmpeg provides a high quality video codec framework, so implementing video stitching as an FFmpeg video filter lets them benefit from both; a rough command-line sketch of such a pipeline is at the end of this reply.

2. 3rd party dependencies
    The functions exposed by the xcam filter depend only on the essential build tool-chain.
Depending on the hardware configuration, users can get a performance improvement by installing OpenCL, GLES or Vulkan.
These libraries are not required for video stitching; users can build a CPU-only version without them. (A minimal build sketch is included at the end of this reply.)
    Besides these, the stitching and DVS functions use a feature-matching algorithm from the OpenCV library to get better quality; I posted demo links in the commit message.
Users can run stitching without installing OpenCV; the side effect is that the quality will decrease.

3. Immersive media standard
    Camera manufacturers normally provide their own video stitching software and define private camera calibration parameters, which are critical to stitching quality. As far as I know, MPEG is working on a proposal for carrying this metadata in the video stream; this will help compose immersive media on cloud or edge servers. I think it's a good idea to
have such a video filter in the FFmpeg pipeline.

4. Why not implement stitching inside an FFmpeg video filter
    The algorithms are complex and performance also matters: the stitching code manages its own computing resources, for example thread handling and dispatching of image data blocks. Personally I think we don't need to reinvent the wheel.
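
As a minimal build sketch for point 2: this assumes libxcam >= 1.4.0 is already installed and visible to pkg-config, and that nothing beyond the usual tool-chain is needed on the FFmpeg side.

    # enable the xcam filter when configuring FFmpeg (libxcam >= 1.4.0 required)
    ./configure --enable-libxcam
    make -j"$(nproc)"

The optional OpenCL/GLES/Vulkan and OpenCV acceleration is selected when libxcam itself is built, so the FFmpeg configure line stays the same either way.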
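
And a rough sketch of the end-to-end pipeline mentioned in point 1. The file names, the output size and the handler choices below are only illustrative; a real stitching setup will typically also need the handler's private parameters (which params=help=1 lists).

    # list the common options and the private parameters of the default 'stitch' handler
    ffmpeg -h filter=xcam
    ffmpeg -i cam0.mp4 -vf xcam=params=help=1 -f null -

    # single-input handlers, e.g. digital video stabilization
    ffmpeg -i input.mp4 -vf xcam=name=dvs output.mp4

    # hypothetical 4-camera surround view stitching
    ffmpeg -i cam0.mp4 -i cam1.mp4 -i cam2.mp4 -i cam3.mp4 \
           -filter_complex xcam=inputs=4:name=stitch:w=3840:h=1920:fmt=nv12 \
           stitched.mp4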

> -----Original Message-----
> From: Zong, Wei <wei.zong at intel.com>
> Sent: Friday, July 31, 2020 10:57 PM
> To: ffmpeg-devel at ffmpeg.org
> Cc: Liu, YinhangX <yinhangx.liu at intel.com>; Zong, Wei <wei.zong at intel.com>
> Subject: [PATCH] avfilter/vf_xcam: add xcam video filter
> 
> From: Yinhang Liu <yinhangx.liu at intel.com>
> 
> The xcam filter is a wrapper of the libxcam project, which supports 360 video
> stitching, automotive surround view stitching, digital video stabilization (DVS),
> wavelet noise reduction, 3D noise reduction, etc.
> 
> The libxcam library optimizes its algorithms with AVX, GLES and Vulkan; which
> implementation is used depends on the hardware configuration to get good performance.
> 
> Here are demos of xcam filter
>  * video stitching:
>   https://github.com/intel/libxcam/wiki#libxcam-stitch-processing-flow
>   https://www.youtube.com/watch?v=J2Cr09KfIQ0
>   https://www.youtube.com/watch?v=z5MvzhRKcE8
> 
>  * digital video stabilization:
>   https://www.youtube.com/watch?v=f5JFhZPa4Xc
> 
> libxcam project is hosted at: https://github.com/intel/libxcam
> 
> Build instruction is at: https://github.com/intel/libxcam/wiki/Build
> 
> To enable xcam video filter, configure FFmpeg with option --enable-libxcam
> 
> Here are features provided by xcam video filter:
> - stitch     CPU|GLES|Vulkan stitching
> - stitchcl   OpenCL stitching
> - fisheye    Fisheye calibration
> - 3dnr       3D denoising
> - waveletnr  Wavelet denoising
> - dvs        Digital video stabilization
> - defog      Fog removal
> 
> Use 'ffmpeg -h filter=xcam' to get the common parameters, use
> 'params=help=1' to get the private parameters.
> Detailed test cases at:
> https://github.com/intel/libxcam/wiki/Tests#1-ffmpeg-xcam
> 
> Signed-off-by: Yinhang Liu <yinhangx.liu at intel.com>
> Signed-off-by: Zong Wei <wei.zong at intel.com>
> Reviewed-by: Zong Wei <wei.zong at intel.com>
> ---
>  Changelog                |   1 +
>  configure                |   5 +
>  doc/filters.texi         |  79 ++++++++++
>  libavfilter/Makefile     |   1 +
>  libavfilter/allfilters.c |   1 +
>  libavfilter/vf_xcam.c    | 326 +++++++++++++++++++++++++++++++++++++++
>  6 files changed, 413 insertions(+)
>  create mode 100644 libavfilter/vf_xcam.c
> 
> diff --git a/Changelog b/Changelog
> index 6f648bff2b..1d5872e216 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -10,6 +10,7 @@ version <next>:
>  - ADPCM IMA Ubisoft APM encoder
>  - Rayman 2 APM muxer
>  - AV1 encoding support SVT-AV1
> +- xcam filter
> 
> 
>  version 4.3:
> diff --git a/configure b/configure
> index 169f23e17f..7664a077b3 100755
> --- a/configure
> +++ b/configure
> @@ -290,6 +290,7 @@ External library support:
>    --enable-libx265         enable HEVC encoding via x265 [no]
>    --enable-libxavs         enable AVS encoding via xavs [no]
>    --enable-libxavs2        enable AVS2 encoding via xavs2 [no]
> +  --enable-libxcam         enable image processing via xcam [no]
>    --enable-libxcb          enable X11 grabbing using XCB [autodetect]
>    --enable-libxcb-shm      enable X11 grabbing shm communication [autodetect]
>    --enable-libxcb-xfixes   enable X11 grabbing mouse rendering [autodetect]
> @@ -1817,6 +1818,7 @@ EXTERNAL_LIBRARY_LIST="
>      libvpx
>      libwavpack
>      libwebp
> +    libxcam
>      libxml2
>      libzimg
>      libzmq
> @@ -3639,6 +3641,7 @@ scale_vaapi_filter_deps="vaapi"
>  scale_vulkan_filter_deps="vulkan libglslang"
>  vpp_qsv_filter_deps="libmfx"
>  vpp_qsv_filter_select="qsvvpp"
> +xcam_filter_deps="libxcam"
>  xfade_opencl_filter_deps="opencl"
>  yadif_cuda_filter_deps="ffnvcodec"
>  yadif_cuda_filter_deps_any="cuda_nvcc cuda_llvm"
> @@ -6451,6 +6454,8 @@ enabled libx265           && require_pkg_config libx265 x265 x265.h x265_api_get
>                               require_cpp_condition libx265 x265.h "X265_BUILD >= 70"
>  enabled libxavs           && require libxavs "stdint.h xavs.h" xavs_encoder_encode "-lxavs $pthreads_extralibs $libm_extralibs"
>  enabled libxavs2          && require_pkg_config libxavs2 "xavs2 >= 1.3.0" "stdint.h xavs2.h" xavs2_api_get
> +enabled libxcam           && { check_pkg_config libxcam "libxcam >= 1.4.0" "capi/xcam_handle.h" xcam_create_handle ||
> +                               die "ERROR: libXCam must be installed and version must be >= 1.4.0"; }
>  enabled libxvid           && require libxvid xvid.h xvid_global -lxvidcore
>  enabled libzimg           && require_pkg_config libzimg "zimg >= 2.7.0" zimg.h zimg_get_api_version
>  enabled libzmq            && require_pkg_config libzmq "libzmq >= 4.2.1" zmq.h zmq_ctx_new
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 561aa98a9d..ab6f231025 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -20399,6 +20399,85 @@ Set the scaling dimension: @code{2} for @code{2xBR}, @code{3} for
>  Default is @code{3}.
>  @end table
> 
> +@section xcam
> +Image processing supported through libXCam.
> +
> +libXCam supports automotive surround view stitching, 360 video
> +stitching, digital video stabilization, noise reduction and so on. For
> +more information about libxcam see @url{https://github.com/intel/libxcam}.
> +
> +Please refer to @url{https://github.com/intel/libxcam/wiki/Build} to
> +build libxcam.
> +
> +To enable compilation of @var{xcam} video filter you need to configure
> +FFmpeg with @code{--enable-libxcam}.
> +
> +@subsection Options
> +
> +@table @option
> +
> +@item inputs
> +The number of inputs. Default is @code{1}. @b{stitch} and @b{stitchcl}
> +handlers support dynamic inputs, @b{fisheye}, @b{3dnr}, @b{waveletnr},
> +@b{dvs} and @b{defog} handlers support one input.
> +
> +@item w
> +Output video width. Default is @code{0}.
> +If the value is 0, the corresponding input width is used for the output.
> +
> +@item h
> +Output video height. Default is @code{0}.
> +If the value is 0, the corresponding input height is used for the output.
> +
> +@item fmt
> +Pixel format. Default is @code{nv12}.
> +
> +@table @samp
> +@item auto
> +Negotiate pixel format automatically, selects the input pixel format as
> +the processing pixel format.
> +@item nv12
> +NV12 pixel format. All handlers support NV12 pixel format.
> +@item yuv420
> +YUV420 pixel format. Currently, only @b{CPU} stitching supports YUV420
> +pixel format.
> +@end table
> +
> +@item name
> +Handler name. Default is @code{stitch}.
> +
> +@table @samp
> +@item stitch
> +CPU|GLES|Vulkan stitching, supports automotive surround view stitching and
> +360 video stitching.
> +@item stitchcl
> +OpenCL stitching, supports automotive surround view stitching and 360
> +video stitching.
> +@item fisheye
> +Fisheye calibration
> +@item 3dnr
> +3D denoising
> +@item waveletnr
> +Wavelet denoising
> +@item dvs
> +Digital video stabilization
> +@item defog
> +Fog removal
> +@end table
> +
> +@item params
> +Private parameters for each handler. Currently, only @b{stitch} and
> +@b{stitchcl} handlers have private parameters.
> +
> +@end table
> +
> +@subsection Examples
> +Use @code{ffmpeg -h filter=xcam} to get the common parameters,
> +@b{stitch} and @b{stitchcl} handlers have private parameters, use
> +@code{params=help=1} to get the private parameters.
> +
> +For more detailed examples see @url{https://github.com/intel/libxcam/wiki/Tests#1-ffmpeg-xcam}.
> +
>  @section xfade
> 
>  Apply cross fade from one input video stream to another input video stream.
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 0dc74f8b70..4f9acd7770 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -456,6 +456,7 @@ OBJS-$(CONFIG_W3FDIF_FILTER)                 += vf_w3fdif.o
>  OBJS-$(CONFIG_WAVEFORM_FILTER)               += vf_waveform.o
>  OBJS-$(CONFIG_WEAVE_FILTER)                  += vf_weave.o
>  OBJS-$(CONFIG_XBR_FILTER)                    += vf_xbr.o
> +OBJS-$(CONFIG_XCAM_FILTER)                   += vf_xcam.o
>  OBJS-$(CONFIG_XFADE_FILTER)                  += vf_xfade.o
>  OBJS-$(CONFIG_XFADE_OPENCL_FILTER)           += vf_xfade_opencl.o opencl.o opencl/xfade.o
>  OBJS-$(CONFIG_XMEDIAN_FILTER)                += vf_xmedian.o framesync.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 3f70153986..728c11b303 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -435,6 +435,7 @@ extern AVFilter ff_vf_w3fdif;
>  extern AVFilter ff_vf_waveform;
>  extern AVFilter ff_vf_weave;
>  extern AVFilter ff_vf_xbr;
> +extern AVFilter ff_vf_xcam;
>  extern AVFilter ff_vf_xfade;
>  extern AVFilter ff_vf_xfade_opencl;
>  extern AVFilter ff_vf_xmedian;
> diff --git a/libavfilter/vf_xcam.c b/libavfilter/vf_xcam.c
> new file mode 100644
> index 0000000000..d3384f6f74
> --- /dev/null
> +++ b/libavfilter/vf_xcam.c
> @@ -0,0 +1,326 @@
> +/*
> + * Copyright (c) 2020 Intel Corporation.
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * libXCam wrapper functions
> + */
> +
> +#include <xcam/capi/xcam_handle.h>
> +#include "libavutil/avstring.h"
> +#include "libavutil/opt.h"
> +#include "framesync.h"
> +#include "internal.h"
> +
> +typedef struct XCamVideoFilterBuf {
> +    XCamVideoBuffer buf;
> +    AVFrame *frame;
> +} XCamVideoFilterBuf;
> +
> +typedef struct XCAMContext {
> +    const AVClass *class;
> +
> +    int nb_inputs;
> +    int w;
> +    int h;
> +    char *fmt;
> +    char *name;
> +    char *params;
> +
> +    XCamHandle *handle;
> +    uint32_t v4l2_fmt;
> +
> +    XCamVideoFilterBuf **inbufs;
> +    XCamVideoFilterBuf *outbuf;
> +
> +    FFFrameSync fs;
> +} XCAMContext;
> +
> +static uint8_t *xcambuf_map(XCamVideoBuffer *buf)
> +{
> +    XCamVideoFilterBuf *avfilter_buf = (XCamVideoFilterBuf *)(buf);
> +    return avfilter_buf->frame->data[0];
> +}
> +
> +static void xcambuf_unmap(XCamVideoBuffer *buf)
> +{
> +    return;
> +}
> +
> +static uint32_t avfmt_to_v4l2fmt(AVFilterContext *ctx, int avfmt)
> +{
> +    uint32_t v4l2fmt = 0;
> +    if (avfmt == AV_PIX_FMT_YUV420P)
> +        v4l2fmt = V4L2_PIX_FMT_YUV420;
> +    else if (avfmt == AV_PIX_FMT_NV12)
> +        v4l2fmt = V4L2_PIX_FMT_NV12;
> +    else
> +        av_log(ctx, AV_LOG_ERROR, "unsupported pixel format %d\n", avfmt);
> +
> +    return v4l2fmt;
> +}
> +
> +static int set_parameters(AVFilterContext *ctx, const AVFilterLink *inlink, const AVFilterLink *outlink)
> +{
> +    XCAMContext *s = inlink->dst->priv;
> +
> +    char params[XCAM_MAX_PARAMS_LENGTH] = { 0 };
> +    snprintf(params, XCAM_MAX_PARAMS_LENGTH - 1, "inw=%d inh=%d outw=%d outh=%d fmt=%d %s",
> +        inlink->w, inlink->h, outlink->w, outlink->h, s->v4l2_fmt, s->params);
> +
> +    if (xcam_handle_set_parameters(s->handle, params) != XCAM_RETURN_NO_ERROR) {
> +        av_log(ctx, AV_LOG_ERROR, "xcam handler set parameters failed\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    return 0;
> +}
> +
> +static int init_xcambuf_info(XCAMContext *s, XCamVideoBuffer *buf, AVFrame *frame)
> +{
> +    XCamReturn ret = xcam_video_buffer_info_reset(
> +        &buf->info, s->v4l2_fmt, frame->width, frame->height, frame->linesize[0], frame->height, 0);
> +    if (ret != XCAM_RETURN_NO_ERROR)
> +        return AVERROR(EINVAL);
> +
> +    for (int i = 0; frame->linesize[i]; i++) {
> +        buf->info.offsets[i] = frame->data[i] - frame->data[0];
> +        buf->info.strides[i] = frame->linesize[i];
> +    }
> +
> +    return 0;
> +}
> +
> +static int xcam_execute(FFFrameSync *fs)
> +{
> +    AVFilterContext *ctx = fs->parent;
> +    XCAMContext *s = fs->opaque;
> +    AVFilterLink *outlink;
> +    AVFrame *outframe, *frame;
> +
> +    XCamVideoFilterBuf **inbufs = s->inbufs;
> +    XCamVideoFilterBuf *outbuf = s->outbuf;
> +
> +    for (int i = 0; i < ctx->nb_inputs; i++) {
> +        int error = ff_framesync_get_frame(&s->fs, i, &frame, 0);
> +        if (error < 0)
> +            return error;
> +        if (init_xcambuf_info(s, &inbufs[i]->buf, frame) != 0)
> +            return AVERROR(EINVAL);
> +        inbufs[i]->frame = frame;
> +    }
> +
> +    outlink = ctx->outputs[0];
> +    if (!(outframe = ff_get_video_buffer(outlink, outlink->w, outlink->h))) {
> +        av_frame_free(&frame);
> +        return AVERROR(ENOMEM);
> +    }
> +    outframe->pts = av_rescale_q(s->fs.pts, s->fs.time_base, outlink->time_base);
> +
> +    if (init_xcambuf_info(s, &outbuf->buf, outframe) != 0)
> +        return AVERROR(EINVAL);
> +    outbuf->frame = outframe;
> +
> +    if (xcam_handle_execute(s->handle, (XCamVideoBuffer **)inbufs, (XCamVideoBuffer **)&outbuf) != XCAM_RETURN_NO_ERROR) {
> +        av_log(ctx, AV_LOG_ERROR, "execute xcam handler failed\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    return ff_filter_frame(outlink, outframe);
> +}
> +
> +static int xcam_query_formats(AVFilterContext *ctx)
> +{
> +    XCAMContext *s = ctx->priv;
> +    AVFilterFormats *formats = NULL;
> +
> +    static const enum AVPixelFormat nv12_fmts[] = {AV_PIX_FMT_NV12, AV_PIX_FMT_NONE};
> +    static const enum AVPixelFormat yuv420_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
> +    static const enum AVPixelFormat auto_fmts[] = {AV_PIX_FMT_NV12, AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
> +
> +    const enum AVPixelFormat *pix_fmts = NULL;
> +    if (!av_strcasecmp(s->fmt, "nv12"))
> +        pix_fmts = nv12_fmts;
> +    else if (!av_strcasecmp(s->fmt, "yuv420"))
> +        pix_fmts = yuv420_fmts;
> +    else
> +        pix_fmts = auto_fmts;
> +
> +    if (!(formats = ff_make_format_list(pix_fmts)))
> +        return AVERROR(ENOMEM);
> +
> +    return ff_set_common_formats(ctx, formats);
> +}
> +
> +static int xcam_config_output(AVFilterLink *outlink)
> +{
> +    AVFilterContext *ctx = outlink->src;
> +    XCAMContext *s = ctx->priv;
> +    AVFilterLink *inlink = ctx->inputs[0];
> +    int ret = 0;
> +
> +    s->v4l2_fmt = avfmt_to_v4l2fmt(ctx, inlink->format);
> +    if (s->v4l2_fmt == 0)
> +        return AVERROR(EINVAL);
> +
> +    if (s->w && s->h) {
> +        outlink->w = s->w;
> +        outlink->h = s->h;
> +    } else {
> +        outlink->w = inlink->w;
> +        outlink->h = inlink->h;
> +    }
> +
> +    set_parameters(ctx, inlink, outlink);
> +    if (xcam_handle_init(s->handle) != XCAM_RETURN_NO_ERROR) {
> +        av_log(ctx, AV_LOG_ERROR, "init xcam handler failed\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if ((ret = ff_framesync_init(&s->fs, ctx, ctx->nb_inputs)) < 0)
> +        return ret;
> +    s->fs.opaque = s;
> +    s->fs.on_event = xcam_execute;
> +    for (int i = 0; i < ctx->nb_inputs; i++) {
> +        FFFrameSyncIn *in = &s->fs.in[i];
> +        in->time_base = ctx->inputs[i]->time_base;
> +        in->sync      = 1;
> +        in->before    = EXT_STOP;
> +        in->after     = EXT_STOP;
> +    }
> +    ret = ff_framesync_configure(&s->fs);
> +    outlink->time_base = s->fs.time_base;
> +
> +    return ret;
> +}
> +
> +static av_cold int xcam_init(AVFilterContext *ctx)
> +{
> +    XCAMContext *s = ctx->priv;
> +    int ret = 0;
> +
> +    s->handle = xcam_create_handle(s->name);
> +    if (!s->handle) {
> +        av_log(ctx, AV_LOG_ERROR, "create xcam handler failed\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    s->inbufs = av_mallocz_array(s->nb_inputs + 1, sizeof(XCamVideoFilterBuf *));
> +    for (int i = 0; i < s->nb_inputs; i++) {
> +        s->inbufs[i] = av_mallocz_array(1, sizeof(XCamVideoFilterBuf));
> +        if (!s->inbufs[i])
> +            return AVERROR(ENOMEM);
> +        s->inbufs[i]->buf.map = xcambuf_map;
> +        s->inbufs[i]->buf.unmap = xcambuf_unmap;
> +    }
> +
> +    s->outbuf = av_mallocz_array(1, sizeof(XCamVideoFilterBuf));
> +    s->outbuf->buf.map = xcambuf_map;
> +    s->outbuf->buf.unmap = xcambuf_unmap;
> +
> +    for (int i = 0; i < s->nb_inputs; i++) {
> +        AVFilterPad pad = { .type = AVMEDIA_TYPE_VIDEO };
> +        pad.name = av_asprintf("input%d", i);
> +        if (!pad.name)
> +            return AVERROR(ENOMEM);
> +
> +        if ((ret = ff_insert_inpad(ctx, i, &pad)) < 0) {
> +            av_freep(&pad.name);
> +            return ret;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +static av_cold void xcam_uninit(AVFilterContext *ctx)
> +{
> +    XCAMContext *s = ctx->priv;
> +
> +    ff_framesync_uninit(&s->fs);
> +
> +    for (int i = 0; i < s->nb_inputs; i++) {
> +        if (s->inbufs && s->inbufs[i])
> +            av_freep(&s->inbufs[i]);
> +        if (ctx->input_pads)
> +            av_freep(&ctx->input_pads[i].name);
> +    }
> +    av_freep(&s->inbufs);
> +    av_freep(&s->outbuf);
> +
> +    xcam_destroy_handle(s->handle);
> +    s->handle = NULL;
> +}
> +
> +static int xcam_activate(AVFilterContext *ctx)
> +{
> +    XCAMContext *s = ctx->priv;
> +    return ff_framesync_activate(&s->fs);
> +}
> +
> +#define OFFSET(x) offsetof(XCAMContext, x)
> +#define FLAGS AV_OPT_FLAG_VIDEO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> +#define CONST_STRING(name, help, unit) \
> +    { name, help, 0, AV_OPT_TYPE_CONST, { .str=name }, 0, 0, FLAGS, unit }
> +
> +static const AVOption xcam_options[] = {
> +    { "inputs", "number of inputs", OFFSET(nb_inputs), AV_OPT_TYPE_INT, { .i64
> = 1 }, 1, XCAM_MAX_INPUTS_NUM, FLAGS },
> +    { "w",  "output width", OFFSET(w), AV_OPT_TYPE_INT, { .i64 = 0 }, 0,
> INT_MAX, FLAGS },
> +    { "h", "output height", OFFSET(h), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX,
> FLAGS },
> +    { "fmt", "pixel format", OFFSET(fmt), AV_OPT_TYPE_STRING, { .str = "nv12" },
> 0, 0, FLAGS, "fmt" },
> +        CONST_STRING("auto",   "automatic pixel format negotiation",
> "fmt"),
> +        CONST_STRING("nv12",   "NV12 pixel format, all handlers support NV12
> pixel format",            "fmt"),
> +        CONST_STRING("yuv420", "YUV420 pixel format, only CPU stitching
> supports YUV420 pixel format", "fmt"),
> +    { "name",   "handler name", OFFSET(name), AV_OPT_TYPE_STRING, { .str =
> "stitch" }, 0, 0, FLAGS, "name" },
> +        CONST_STRING("stitch",    "CPU|GLES|Vulkan stitching", "name"),
> +        CONST_STRING("stitchcl",  "OpenCL stitching",          "name"),
> +        CONST_STRING("fisheye",   "fisheye calibration",       "name"),
> +        CONST_STRING("3dnr",      "3d denoising",              "name"),
> +        CONST_STRING("waveletnr", "wavelet denoising",         "name"),
> +        CONST_STRING("dvs",       "digital video stabilizer",  "name"),
> +        CONST_STRING("defog",     "fog removal",               "name"),
> +    { "params", "private parameters for each handle, usage: params=help=1
> field0=value0 field1=value1 ...",
> +        OFFSET(params), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
> +    { NULL }
> +};
> +
> +AVFILTER_DEFINE_CLASS(xcam);
> +
> +static const AVFilterPad xcam_outputs[] = {
> +    {
> +        .name         = "default",
> +        .type         = AVMEDIA_TYPE_VIDEO,
> +        .config_props = xcam_config_output
> +    },
> +    { NULL }
> +};
> +
> +AVFilter ff_vf_xcam = {
> +    .name          = "xcam",
> +    .description   = NULL_IF_CONFIG_SMALL("Apply image processing using libXCam"),
> +    .priv_size     = sizeof(XCAMContext),
> +    .priv_class    = &xcam_class,
> +    .init          = xcam_init,
> +    .query_formats = xcam_query_formats,
> +    .outputs       = xcam_outputs,
> +    .activate      = xcam_activate,
> +    .uninit        = xcam_uninit,
> +    .flags         = AVFILTER_FLAG_DYNAMIC_INPUTS
> +};
> --
> 2.17.1


