[FFmpeg-devel] [PATCH] Port FFT domain filter.
Michael Niedermayer
michaelni at gmx.at
Wed Feb 25 11:19:25 CET 2015
On Wed, Feb 25, 2015 at 11:55:51AM +0530, arwa arif wrote:
> On Tue, Feb 24, 2015 at 4:12 PM, Michael Niedermayer <michaelni at gmx.at>
> wrote:
>
> > On Tue, Feb 24, 2015 at 02:27:01PM +0530, arwa arif wrote:
> > > Hello,
> > >
> > > I have written a very primitive code for porting FFT domain filter. It
> > > accepts only gray8 format images. The output should be a grayscale image,
> > > but the output image is coming out as a black and white image. Also, I am
> > > getting confused about when to do the vertical pass: after taking the irdft
> > > of the horizontal pass, or before it?
> >
> > you can do both rdft passes first and then the 2 irdft passes; this should
> > give more possibilities in filtering, but it could be done the other
> > way around too
> >
> >
> > >
> > > I have attached the patch.
> >
> > > Makefile | 1
> > > allfilters.c | 1
> > > vf_fftfilt.c | 139 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > 3 files changed, 141 insertions(+)
> > > d4b25d6a204534a66400f52c1f5312652e8208af  0001-Port-FFT-domain-filter.patch
> > > From 455a261d7e2b3afba767aac2e73448aeee02d159 Mon Sep 17 00:00:00 2001
> > > From: Arwa Arif <arwaarif1994 at gmail.com>
> > > Date: Tue, 24 Feb 2015 12:17:30 +0530
> > > Subject: [PATCH] Port FFT domain filter.
> > >
> > > ---
> > > libavfilter/Makefile | 1 +
> > > libavfilter/allfilters.c | 1 +
> > > libavfilter/vf_fftfilt.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++
> > > 3 files changed, 141 insertions(+)
> > > create mode 100644 libavfilter/vf_fftfilt.c
> > >
> > > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > > index 289c63b..b184f07 100644
> > > --- a/libavfilter/Makefile
> > > +++ b/libavfilter/Makefile
> > > @@ -120,6 +120,7 @@ OBJS-$(CONFIG_EDGEDETECT_FILTER) += vf_edgedetect.o
> > > OBJS-$(CONFIG_EQ_FILTER) += vf_eq.o
> > > OBJS-$(CONFIG_EXTRACTPLANES_FILTER) += vf_extractplanes.o
> > > OBJS-$(CONFIG_FADE_FILTER) += vf_fade.o
> > > +OBJS-$(CONFIG_FFTFILT_FILTER) += vf_fftfilt.o
> > > OBJS-$(CONFIG_FIELD_FILTER) += vf_field.o
> > > OBJS-$(CONFIG_FIELDMATCH_FILTER) += vf_fieldmatch.o
> > > OBJS-$(CONFIG_FIELDORDER_FILTER) += vf_fieldorder.o
> > > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> > > index 55de154..043ac56 100644
> > > --- a/libavfilter/allfilters.c
> > > +++ b/libavfilter/allfilters.c
> > > @@ -136,6 +136,7 @@ void avfilter_register_all(void)
> > > REGISTER_FILTER(EQ, eq, vf);
> > > REGISTER_FILTER(EXTRACTPLANES, extractplanes, vf);
> > > REGISTER_FILTER(FADE, fade, vf);
> > > + REGISTER_FILTER(FFTFILT, fftfilt, vf);
> > > REGISTER_FILTER(FIELD, field, vf);
> > > REGISTER_FILTER(FIELDMATCH, fieldmatch, vf);
> > > REGISTER_FILTER(FIELDORDER, fieldorder, vf);
> > > diff --git a/libavfilter/vf_fftfilt.c b/libavfilter/vf_fftfilt.c
> > > new file mode 100644
> > > index 0000000..753bc8e
> > > --- /dev/null
> > > +++ b/libavfilter/vf_fftfilt.c
> > > @@ -0,0 +1,139 @@
> > > +/*
> > > + * Copyright (c) 2015 Arwa Arif <arwaarif1994 at gmail.com>
> > > + *
> > > + * This file is part of FFmpeg.
> > > + *
> > > + * FFmpeg is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * FFmpeg is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * You should have received a copy of the GNU General Public License along
> > > + * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
> > > + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
> > > + */
> > > +
> > > +/**
> > > + * @file
> > > + * FFT domain filtering.
> > > + */
> > > +
> > > +#include "libavfilter/internal.h"
> > > +#include "libavutil/common.h"
> > > +#include "libavutil/imgutils.h"
> > > +#include "libavutil/opt.h"
> > > +#include "libavutil/pixdesc.h"
> > > +#include "libavcodec/avfft.h"
> > > +
> > > +typedef struct {
> > > + const AVClass *class;
> > > +
> > > + RDFTContext *rdft;
> > > + int rdft_bits;
> > > + FFTSample *rdft_data;
> > > +
> > > +} FFTFILTContext;
> > > +
> > > +static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> > > +{
> > > + AVFilterContext *ctx = inlink->dst;
> > > + AVFilterLink *outlink = inlink->dst->outputs[0];
> > > + FFTFILTContext *fftfilt = ctx->priv;
> > > + AVFrame *out;
> > > + int i, j, k, rdft_bits;
> > > + size_t rdft_len, w, h;
> > > +
> > > + w = inlink->w;
> > > + h = inlink->h;
> > > +
> > > + /* RDFT window size (precision) according to the requested output frame height */
> > > + for (rdft_bits = 1; 1 << rdft_bits < 2 * w; rdft_bits++);
> > > + rdft_len = 1 << rdft_bits;
> > > + fftfilt->rdft_data = av_malloc_array(h, rdft_len * sizeof(FFTSample));
> > > + memset(fftfilt->rdft_data, 0, rdft_len * h * sizeof(FFTSample));
> > > +
> > > + out = ff_get_video_buffer(outlink, inlink->w, inlink->h);
> > > + if (!out)
> > > + return AVERROR(ENOMEM);
> > > +
> > > + av_frame_copy_props(out, in);
> > > +
> > > + /*Horizontal pass - RDFT*/
> > > + fftfilt->rdft = av_rdft_init(rdft_bits, DFT_R2C);
> >
> > > + k = 0;
> > > + for (i = 0; i < h; i++)
> > > + for (j = 0; j < w; j++)
> > > + {
> > > + fftfilt->rdft_data[k] = *(in->data[0] + in->linesize[0] * i + j);
> > > + k++;
> > > + }
> >
> > the copied data must line up with the rdft applied to them.
> > Also, it's better to only clear the padding after each row's data rather
> > than the whole buffer, as that is faster, so the memset above is not
> > needed if the memset below is used:
> >     for (i = 0; i < h; i++) {
> >         for (j = 0; j < w; j++)
> >             fftfilt->rdft_data[i*rdft_len + j] = *(in->data[0] + in->linesize[0] * i + j);
> >         memset(fftfilt->rdft_data + i*rdft_len + j, 0, (rdft_len-w)*sizeof(*fftfilt->rdft_data));
> >     }
> >
> >
> > > +
> > > + for (i = 0; i < h; i++)
> > > + av_rdft_calc(fftfilt->rdft, fftfilt->rdft_data + i * rdft_len);
> > > +
> > > + av_rdft_end(fftfilt->rdft);
> > > +
> > > + /*Horizontal pass - IRDFT*/
> > > + fftfilt->rdft = av_rdft_init(rdft_bits, IDFT_C2R);
> > > +
> > > + for (i = 0; i < h; i++)
> > > + av_rdft_calc(fftfilt->rdft, fftfilt->rdft_data + i * rdft_len);
> > > +
> > > + k = 0;
> >
> > > + for (i = 0; i < h; i++)
> > > + for (j = 0; j < w; j++)
> > > + {
> > > + *(out->data[0] + out->linesize[0] * i + j) = fftfilt->rdft_data[k];
> > > + k++;
> > > + }
> >
> > this needs clipping and scaling:
> >     for (i = 0; i < h; i++)
> >         for (j = 0; j < w; j++)
> >             *(out->data[0] + out->linesize[0] * i + j) =
> >                 av_clip(fftfilt->rdft_data[i*rdft_len + j] * 2 / rdft_len, 0, 255);
> >
> >
> >
> Thanks a lot. It is working for the horizontal passes. I tried to introduce the
> vertical pass, but with that I am not getting back the exact same image. Should
> the array sizes of the vertical pass and the horizontal passes be the same?
The rdft we have and use works only with powers of 2, that is
n = 128, 256, 512, ...
Thus the output of the horizontal transform for a 300x300 pixel
image would be a 512x300 array, and with the vertical transform
that would become 512x512; after the vertical inverse transform it
would again be 512x300.
So no, the array sizes differ unless one allocates one to be bigger
than needed.
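For illustration, a minimal sketch (using the power-of-two loops from the patch,
without the factor of 2 that is discussed further down) of how those sizes come
out for a 300x300 frame:

    /* sketch: padded power-of-two transform lengths for a 300x300 frame */
    int    w = 300, h = 300;
    int    rdft_hbits, rdft_vbits;
    size_t rdft_hlen, rdft_vlen;

    for (rdft_hbits = 1; 1 << rdft_hbits < w; rdft_hbits++);
    rdft_hlen = 1 << rdft_hbits;                      /* 512 */

    for (rdft_vbits = 1; 1 << rdft_vbits < h; rdft_vbits++);
    rdft_vlen = 1 << rdft_vbits;                      /* 512 */

    /* horizontal pass output: rdft_hlen x h         = 512x300
     * vertical pass output:   rdft_hlen x rdft_vlen = 512x512 */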
> Also, on
> what basis did you decide the scaling factor?
FFTs and similar transforms generally either use a scaling factor of
1 or sqrt(n); the exact value depends on the
implementation of the transform.
This value should be documented; when it's not, it's easy to find simply
by trial and error (a sqrt(n) factor for each fft, rdft, ifft, ...).
If that still looks wrong, a *2 or /2 can be added until it looks
correct. Or one can print some value
to see by how much it changed from what it should be:
if the value of the first sample before all transforms is 100
and it's 400 after a set of transforms and matching inverse transforms,
then they introduce a scaling of 4, needing a /4 to compensate.
To calculate the scaling of a single fft pass one would compare the
dot products of the vectors before and after the fft, but we do not
need to know that value here as there are matching inverse transforms.
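A quick way to measure that overall factor is to push a single sample through a
matching forward/inverse pair and print the ratio; a self-contained sketch using
the avfft API from the patch (the transform size 256 is an arbitrary choice):

    #include <stdio.h>
    #include "libavcodec/avfft.h"

    int main(void)
    {
        /* measure the scaling a DFT_R2C + IDFT_C2R round trip introduces */
        FFTSample buf[256] = { 0 };
        RDFTContext *fwd = av_rdft_init(8, DFT_R2C);   /* 1 << 8 = 256 samples */
        RDFTContext *inv = av_rdft_init(8, IDFT_C2R);

        buf[0] = 100;
        av_rdft_calc(fwd, buf);
        av_rdft_calc(inv, buf);
        /* the printed ratio is the factor the output has to be divided by;
         * the *2/rdft_len compensation above implies n/2 per round trip */
        printf("scale = %f\n", buf[0] / 100.0);

        av_rdft_end(fwd);
        av_rdft_end(inv);
        return 0;
    }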
[...]
> +static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> +{
> + AVFilterContext *ctx = inlink->dst;
> + AVFilterLink *outlink = inlink->dst->outputs[0];
> + FFTFILTContext *fftfilt = ctx->priv;
> + AVFrame *out;
> + int i, j, rdft_hbits, rdft_vbits;
> + size_t rdft_hlen, rdft_vlen, w, h;
> +
> + w = inlink->w;
> + h = inlink->h;
> +
> + /* RDFT - Array initialization for Horizontal pass*/
> + for (rdft_hbits = 1; 1 << rdft_hbits < 2 * w; rdft_hbits++);
the 2 is unneeded, I think
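i.e. (just a sketch) padding only to the next power of two >= w:
    for (rdft_hbits = 1; 1 << rdft_hbits < w; rdft_hbits++);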
> + rdft_hlen = 1 << rdft_hbits;
> + fftfilt->rdft_hdata = av_malloc_array(h, rdft_hlen * sizeof(FFTSample));
> +
> + /* RDFT - Array initialization for Vertical pass*/
> + for (rdft_vbits = 1; 1 << rdft_vbits < 2 * h; rdft_vbits++);
the 2 is unneeded, I think
> + rdft_vlen = 1 << rdft_vbits;
> + fftfilt->rdft_vdata = av_malloc_array(w, rdft_vlen * sizeof(FFTSample));
as you already suspected, this needs to be bigger, that is
rdft_hlen * rdft_vlen * sizeof(FFTSample)
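e.g. (a sketch of the corrected allocation, keeping the names from the patch):
    fftfilt->rdft_vdata = av_malloc_array(rdft_hlen, rdft_vlen * sizeof(FFTSample));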
> +
> + out = ff_get_video_buffer(outlink, inlink->w, inlink->h);
> + if (!out)
> + return AVERROR(ENOMEM);
> +
> + av_frame_copy_props(out, in);
> +
> + /*Horizontal pass - RDFT*/
> + fftfilt->rdft = av_rdft_init(rdft_hbits, DFT_R2C);
> + for (i = 0; i < h; i++)
> + {
> + memset(fftfilt->rdft_hdata + i * rdft_hlen, 0, rdft_hlen * sizeof(*fftfilt->rdft_hdata));
> + for (j = 0; j < w; j++)
> + fftfilt->rdft_hdata[i * rdft_hlen + j] = *(in->data[0] + in->linesize[0] * i + j);
> + }
> +
> + for (i = 0; i < h; i++)
> + av_rdft_calc(fftfilt->rdft, fftfilt->rdft_hdata + i * rdft_hlen);
> +
> + av_rdft_end(fftfilt->rdft);
> +
> + /*Vertical pass - RDFT*/
> + fftfilt->rdft = av_rdft_init(rdft_vbits, DFT_R2C);
> + for (i = 0; i < w; i++)
as the output of the horizontal pass is rdft_hlen values wide and not w,
this loop needs to process rdft_hlen values instead of w
> + {
> + memset(fftfilt->rdft_vdata + i * rdft_vlen, 0, rdft_vlen * sizeof(*fftfilt->rdft_vdata));
> + for (j = 0; j < h; j++)
> + fftfilt->rdft_vdata[i * rdft_vlen + j] = av_clip(fftfilt->rdft_hdata[j * rdft_hlen + i] * 2 / rdft_hlen, 0, 255);
clipping should only be done at the very end, after all rdft and irdft
passes.
Scaling by sqrt(n/2) could be done after each pass, but it's faster and
simpler to scale by n*m/4 after all passes.
> + }
> +
> + for (i = 0; i < w; i++)
rdft_hlen instead of w
> + av_rdft_calc(fftfilt->rdft, fftfilt->rdft_vdata + i * rdft_vlen);
> +
> + av_rdft_end(fftfilt->rdft);
> +
> + /*Vertical pass - IRDFT*/
> + fftfilt->rdft = av_rdft_init(rdft_vbits, IDFT_C2R);
> +
> + for (i = 0; i < w; i++)
rdft_hlen instead of w
> + av_rdft_calc(fftfilt->rdft, fftfilt->rdft_vdata + i * rdft_vlen);
> +
> + for (i = 0; i < w; i++)
rdft_hlen instead of w
[...]
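Putting those comments together, a rough sketch (not the final patch; just the
corrected loop bounds, with clipping and scaling deferred) of the vertical
forward pass:

    /* vertical forward pass: iterate over rdft_hlen columns, not w, and copy
     * the horizontal-pass output transposed, without clipping or scaling */
    fftfilt->rdft = av_rdft_init(rdft_vbits, DFT_R2C);
    for (i = 0; i < rdft_hlen; i++) {
        memset(fftfilt->rdft_vdata + i * rdft_vlen, 0,
               rdft_vlen * sizeof(*fftfilt->rdft_vdata));
        for (j = 0; j < h; j++)
            fftfilt->rdft_vdata[i * rdft_vlen + j] =
                fftfilt->rdft_hdata[j * rdft_hlen + i];
    }

    for (i = 0; i < rdft_hlen; i++)
        av_rdft_calc(fftfilt->rdft, fftfilt->rdft_vdata + i * rdft_vlen);
    av_rdft_end(fftfilt->rdft);

    /* the same loops run again with IDFT_C2R for the inverse vertical pass;
     * av_clip(... * 4 / (rdft_hlen * rdft_vlen), 0, 255) is applied only when
     * writing the final pixels, after all inverse passes */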
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
Answering whether a program halts or runs forever is
on a Turing machine, in general, impossible (Turing's halting problem).
On any real computer, always possible, as a real computer has a finite number
of states N, and will either halt in less than N cycles or never halt.