[FFmpeg-devel] [PATCH] lavfi: psnr filter
Stefano Sabatini
stefasab at gmail.com
Mon Jul 8 15:39:14 CEST 2013
On date Saturday 2013-07-06 11:54:36 +0000, Paul B Mahol encoded:
> Signed-off-by: Paul B Mahol <onemda at gmail.com>
> ---
> doc/filters.texi | 73 +++++++++++
> libavfilter/Makefile | 1 +
> libavfilter/allfilters.c | 1 +
> libavfilter/vf_psnr.c | 310 +++++++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 385 insertions(+)
> create mode 100644 libavfilter/vf_psnr.c
>
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 2ac0c46..145ba8c 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -5837,6 +5837,79 @@ pp=hb|y/vb|a
> @end example
> @end itemize
>
> + at section psnr
> +
> +Obtain the average, maximum and minimum PSNR (Peak Signal to Noise
> +Ratio) between two input videos.
> +
> +This filter takes two input videos; the first input is
> +considered the "main" source and is passed unchanged to the
> +output. The second input is used as a "reference" video for computing
> +the PSNR.
> +
> +Both video inputs must have the same resolution and pixel format for
> +this filter to work correctly. Also it assumes that both input video
Typo: "both input video" should be "both input videos".
> +have the same number of frames, which are compared one by one.
> +
> +The obtained average PSNR is printed through the logging system.
This could also be exported through per-frame metadata.
> +
> +The filter stores the accumulated MSE (mean squared error) of each
> +frame, and at the end of the processing it is averaged across all frames
> +equally, and the following formula is applied to obtain the PSNR:
> +
> + at example
> +PSNR = 10*log10(MAX^2/MSE)
> + at end example
> +
> +Where MAX is the average of the maximum values of each component of the
> +image.
> +
> +The filter accepts parameters as a list of @var{key}=@var{value} pairs,
> +separated by ":".
> +
> +The description of the accepted parameters follows.
> +
> + at table @option
> + at item stats_file, f
> +If specified the filter will use the named file to save the PSNR of
> +each individual frame.
> + at end table
> +
> +The file printed if @var{stats_file} is selected contains a sequence
> +of key/value pairs of the form @var{key}:@var{value} for each compared
> +pair of frames.
> +
> +The shown line contains .
This sentence seems truncated.
> +
> +A description of each shown parameter follows:
> +
> + at table @option
> + at item n
> +sequential number of the input frame, starting from 1
Any specific reason why we start from 1?
> + at item mse_average
> +Mean Square Error pixel-by-pixel average difference of the compared
> +frames, averaged over all the image components.
> +
> + at item mse_y, mse_u, mse_v, mse_r, mse_g, mse_b, mse_a
> +Mean Square Error pixel-by-pixel average difference of the compared
> +frames for the component specified by the suffix.
> +
> + at item psnr_y, psnr_u, psnr_v, psnr_r, psnr_g, psnr_b, psnr_a
> +Peak Signal to Noise ratio of the compared frames for the component
> +specified by the suffix.
> + at end table
> +
> +For example:
> + at example
> +movie=ref_movie.mpg, setpts=PTS-STARTPTS [ref];
> +[main][ref] psnr="stats_file=stats.log" [out]
> + at end example
> +
> +In this example the input file being processed is compared with the
> +reference file @file{ref_movie.mpg}. The PSNR of each individual frame
> +is stored in @file{stats.log}.
> +
> @section removelogo
>
> Suppress a TV station logo, using an image file to determine which
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 66509c5..18dbc03 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -167,6 +167,7 @@ OBJS-$(CONFIG_PAD_FILTER) += vf_pad.o
> OBJS-$(CONFIG_PERMS_FILTER) += f_perms.o
> OBJS-$(CONFIG_PIXDESCTEST_FILTER) += vf_pixdesctest.o
> OBJS-$(CONFIG_PP_FILTER) += vf_pp.o
> +OBJS-$(CONFIG_PSNR_FILTER) += vf_psnr.o
> OBJS-$(CONFIG_REMOVELOGO_FILTER) += bbox.o lswsutils.o lavfutils.o vf_removelogo.o
> OBJS-$(CONFIG_ROTATE_FILTER) += vf_rotate.o
> OBJS-$(CONFIG_SEPARATEFIELDS_FILTER) += vf_separatefields.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 85a793f..9a11feb 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -162,6 +162,7 @@ void avfilter_register_all(void)
> REGISTER_FILTER(PERMS, perms, vf);
> REGISTER_FILTER(PIXDESCTEST, pixdesctest, vf);
> REGISTER_FILTER(PP, pp, vf);
> + REGISTER_FILTER(PSNR, psnr, vf);
> REGISTER_FILTER(REMOVELOGO, removelogo, vf);
> REGISTER_FILTER(ROTATE, rotate, vf);
> REGISTER_FILTER(SAB, sab, vf);
> diff --git a/libavfilter/vf_psnr.c b/libavfilter/vf_psnr.c
> new file mode 100644
> index 0000000..9b87a61
> --- /dev/null
> +++ b/libavfilter/vf_psnr.c
> @@ -0,0 +1,310 @@
> +/*
> + * Copyright (c) 2011 Roger Pau Monn? <roger.pau at entel.upc.edu>
The "?" in the name looks like a broken non-ASCII character; please make sure the file is stored as UTF-8 (the name is presumably "Monné").
> + * Copyright (c) 2011 Stefano Sabatini
> + * Copyright (c) 2013 Paul B Mahol
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Calculate the PSNR between two input videos.
> + */
> +
> +#include "libavutil/opt.h"
> +#include "libavutil/pixdesc.h"
> +#include "avfilter.h"
> +#include "dualinput.h"
> +#include "drawutils.h"
> +#include "formats.h"
> +#include "internal.h"
> +#include "video.h"
> +
> +typedef struct PSNRContext {
> + const AVClass *class;
> + FFDualInputContext dinput;
> + double mse, min_mse, max_mse;
> + int nb_frames;
> + FILE *stats_file;
> + char *stats_file_str;
> + int max[4], average_max;
> + int is_rgb;
> + uint8_t rgba_map[4];
> + char comps[4];
> + const AVPixFmtDescriptor *desc;
> +} PSNRContext;
> +
> +#define OFFSET(x) offsetof(PSNRContext, x)
> +#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
> +
> +static const AVOption psnr_options[] = {
> + {"stats_file", "set file where to store per-frame difference information", OFFSET(stats_file_str), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
> + {"f", "set file where to store per-frame difference information", OFFSET(stats_file_str), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, FLAGS },
> + { NULL },
> +};
> +
> +AVFILTER_DEFINE_CLASS(psnr);
> +
> +static inline int pow2(int base)
> +{
> + return base*base;
> +}
> +
> +static inline double get_psnr(double mse, int nb_frames, int max)
> +{
> + return 10.0*log((pow2(max))/(mse/nb_frames))/log(10.0);
> +}
> +
> +static inline
> +void compute_images_mse(const uint8_t *main_data[4], const int main_linesizes[4],
> + const uint8_t *ref_data[4], const int ref_linesizes[4],
> + int w, int h, const AVPixFmtDescriptor *desc,
> + double mse[4])
> +{
> + int i, c, j;
> +
> + for (c = 0; c < desc->nb_components; c++) {
> + int hsub = c == 1 || c == 2 ? desc->log2_chroma_w : 0;
> + int vsub = c == 1 || c == 2 ? desc->log2_chroma_h : 0;
> + const int outw = FF_CEIL_RSHIFT(w, hsub);
> + const int outh = FF_CEIL_RSHIFT(h, vsub);
> + const uint8_t *main_line = main_data[c];
> + const uint8_t *ref_line = ref_data[c];
> + const int ref_linesize = ref_linesizes[c];
> + const int main_linesize = main_linesizes[c];
> + int m = 0;
> +
> + for (i = 0; i < outh; i++) {
> + for (j = 0; j < outw; j++)
> + m += pow2(main_line[j] - ref_line[j]);
> + ref_line += ref_linesize;
> + main_line += main_linesize;
> + }
> + mse[c] = m / (outw * outh);
> + }
> +}
> +
> +static AVFrame *do_psnr(AVFilterContext *ctx, AVFrame *main,
> + const AVFrame *ref)
> +{
> + PSNRContext *s = ctx->priv;
> + double comp_mse[4], mse = 0;
> + int j, c;
> +
> + compute_images_mse((const uint8_t **)main->data, main->linesize,
> + (const uint8_t **)ref->data, ref->linesize,
> + main->width, main->height, s->desc, comp_mse);
> +
> + for (j = 0; j < s->desc->nb_components; j++)
> + mse += comp_mse[j];
> + mse /= s->desc->nb_components;
> +
> + s->min_mse = FFMIN(s->min_mse, mse);
> + s->max_mse = FFMAX(s->max_mse, mse);
> +
> + s->mse += mse;
> + s->nb_frames++;
> +
> + if (s->stats_file) {
> + fprintf(s->stats_file, "n:%d mse_avg:%0.2f ", s->nb_frames, mse);
> + for (j = 0; j < s->desc->nb_components; j++) {
> + c = s->is_rgb ? s->rgba_map[j] : j;
> + fprintf(s->stats_file, "mse_%c:%0.2f ", s->comps[j], comp_mse[c]);
> + }
> + for (j = 0; j < s->desc->nb_components; j++) {
> + c = s->is_rgb ? s->rgba_map[j] : j;
> + fprintf(s->stats_file, "s%c:%0.2f ",
> + s->comps[j], get_psnr(comp_mse[c], 1, s->max[c]));
> + }
> + fprintf(s->stats_file, "\n");
> + }
Note: the documentation above lists "mse_average" and "psnr_y" etc., but the code prints "mse_avg" and "s%c" (e.g. "sy:"); please make the doc and the code consistent.
> +
> + return main;
> +}
> +
> +static av_cold int init(AVFilterContext *ctx)
> +{
> + PSNRContext *s = ctx->priv;
> +
> + s->mse = 0;
> + s->nb_frames = 0;
> + s->min_mse = +INFINITY;
> + s->max_mse = -INFINITY;
> +
> + if (s->stats_file_str) {
> + s->stats_file = fopen(s->stats_file_str, "w");
> + if (!s->stats_file) {
> + av_log(ctx, AV_LOG_ERROR,
> + "Could not open stats file %s: %s\n",
> + s->stats_file_str, strerror(errno));
Better to preserve the real error rather than hardcoding EINVAL:
    int err = AVERROR(errno);
    char buf[128];
    av_strerror(err, buf, sizeof(buf));
    ...
    return err;
> + return AVERROR(EINVAL);
> + }
> + }
> +
> + s->dinput.process = do_psnr;
> + return 0;
> +}
> +
> +static int config_input_ref(AVFilterLink *inlink)
> +{
> + AVFilterContext *ctx = inlink->dst;
> + PSNRContext *s = ctx->priv;
> + int j;
> +
> + s->desc = av_pix_fmt_desc_get(inlink->format);
> + if (ctx->inputs[0]->w != ctx->inputs[1]->w ||
> + ctx->inputs[0]->h != ctx->inputs[1]->h) {
> + av_log(ctx, AV_LOG_ERROR,
> + "Width and/or height of input videos are different, could not calculate PSNR\n");
> + return AVERROR(EINVAL);
> + }
> + if (ctx->inputs[0]->format != ctx->inputs[1]->format) {
> + av_log(ctx, AV_LOG_ERROR,
> + "Input filters have different pixel formats, could not calculate PSNR\n");
> + return AVERROR(EINVAL);
> + }
> +
> + switch (inlink->format) {
> + case AV_PIX_FMT_YUV410P:
> + case AV_PIX_FMT_YUV411P:
> + case AV_PIX_FMT_YUV420P:
> + case AV_PIX_FMT_YUV422P:
> + case AV_PIX_FMT_YUV440P:
> + case AV_PIX_FMT_YUV444P:
> + case AV_PIX_FMT_YUVA420P:
> + case AV_PIX_FMT_YUVA422P:
> + case AV_PIX_FMT_YUVA444P:
> + s->max[0] = 235;
> + s->max[3] = 255;
> + s->max[1] = s->max[2] = 240;
> + break;
> + default:
> + s->max[0] = s->max[1] = s->max[2] = s->max[3] = 255;
> + }
> +
> + s->is_rgb = ff_fill_rgba_map(s->rgba_map, inlink->format) >= 0;
> + s->comps[0] = s->is_rgb ? 'r' : 'y' ;
> + s->comps[1] = s->is_rgb ? 'g' : 'u' ;
> + s->comps[2] = s->is_rgb ? 'b' : 'v' ;
> + s->comps[3] = 'a';
> +
> + for (j = 0; j < s->desc->nb_components; j++)
> + s->average_max += s->max[j];
> + s->average_max /= s->desc->nb_components;
> +
> + return 0;
> +}
> +
> +static int query_formats(AVFilterContext *ctx)
> +{
> + static const enum AVPixelFormat pix_fmts[] = {
> + // AV_PIX_FMT_0RGB, AV_PIX_FMT_RGB0, AV_PIX_FMT_0BGR, AV_PIX_FMT_BGR0,
> + // AV_PIX_FMT_ARGB, AV_PIX_FMT_RGBA, AV_PIX_FMT_ABGR, AV_PIX_FMT_BGRA,
> + AV_PIX_FMT_GBRP, AV_PIX_FMT_GBRAP,
> + // AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
drop commented lines
> + AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV422P,
> + AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P,
> + AV_PIX_FMT_YUVJ444P, AV_PIX_FMT_YUVJ440P, AV_PIX_FMT_YUVJ422P,
> + AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_YUVJ411P,
> + AV_PIX_FMT_YUVA444P, AV_PIX_FMT_YUVA422P, AV_PIX_FMT_YUVA420P,
> + AV_PIX_FMT_GRAY8,
> + AV_PIX_FMT_NONE
> + };
> +
> + ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
> + return 0;
> +}
> +
> +static int filter_frame_main(AVFilterLink *inlink, AVFrame *inpicref)
> +{
> + PSNRContext *s = inlink->dst->priv;
> + return ff_dualinput_filter_frame_main(&s->dinput, inlink, inpicref);
> +}
> +
> +static int filter_frame_ref(AVFilterLink *inlink, AVFrame *inpicref)
> +{
> + PSNRContext *s = inlink->dst->priv;
> + return ff_dualinput_filter_frame_second(&s->dinput, inlink, inpicref);
> +}
You may add a warning in case the compared PTS don't match. Also you
may log the main and reference frame PTS (using the helpers in lavu/timestamp.h).
> +
> +static int config_output(AVFilterLink *outlink)
> +{
> + AVFilterContext *ctx = outlink->src;
> +
> + outlink->w = ctx->inputs[0]->w;
> + outlink->h = ctx->inputs[0]->h;
> + outlink->time_base = ctx->inputs[0]->time_base;
Is this required?
Also what about aspect ratio?
[...]
Should be fine assuming it has been tested.
Some TODOs:
- use generic av_read_image_line() to compute values for the generic case
- extend it in order to support more than two inputs. In general it
  may be worthwhile to extend the dual-input helpers; this way you
  avoid complex filtergraphs with multiple splits in case you want to
  compare several inputs.
--
FFmpeg = Fundamentalist & Furious Magical Pure Enchanting Gnome