[FFmpeg-devel] [PATCH v4 2/2] avfilter: Add tonemap vaapi filter

Xiang, Haihao haihao.xiang at intel.com
Wed Oct 23 07:48:28 EEST 2019


> On 11/09/2019 06:39, Zachary Zhou wrote:
> > It supports ICL platform.
> > H2H (HDR to HDR): P010 -> A2R10G10B10
> > H2S (HDR to SDR): P010 -> ARGB
> 
> The input format doesn't have any alpha so the output shouldn't either.  Not
> sure what the first case wants, but the second should be AV_PIX_FMT_0RGB (or
> some swizzle thereof).

Hi Mark,

Usually alpha defaults to 1.0 when it is not specified, so how about setting
alpha to 1.0 here as well when doing P010 -> A2R10G10B10/ARGB in this filter?
On the other hand, we need to send the tone-mapping output to an HDR/SDR
display, and the HW supports A2R10G10B10/ARGB rather than 0R10G10B10/0RGB.

Thanks
Haihao

> 
> > ---
> >  configure                      |   2 +
> >  doc/filters.texi               |  72 +++++
> >  libavfilter/Makefile           |   1 +
> >  libavfilter/allfilters.c       |   1 +
> >  libavfilter/vaapi_vpp.c        |   5 +
> >  libavfilter/vf_tonemap_vaapi.c | 575 +++++++++++++++++++++++++++++++++
> >  6 files changed, 656 insertions(+)
> >  create mode 100644 libavfilter/vf_tonemap_vaapi.c
> > 
> > diff --git a/configure b/configure
> > index 8413826f9e..c9bd4bfcd8 100755
> > --- a/configure
> > +++ b/configure
> > @@ -3551,6 +3551,7 @@ tinterlace_merge_test_deps="tinterlace_filter"
> >  tinterlace_pad_test_deps="tinterlace_filter"
> >  tonemap_filter_deps="const_nan"
> >  tonemap_opencl_filter_deps="opencl const_nan"
> > +tonemap_vaapi_filter_deps="vaapi VAProcPipelineParameterBuffer_output_hdr_metadata"
> >  transpose_opencl_filter_deps="opencl"
> >  transpose_vaapi_filter_deps="vaapi VAProcPipelineCaps_rotation_flags"
> >  unsharp_opencl_filter_deps="opencl"
> > @@ -6544,6 +6545,7 @@ if enabled vaapi; then
> >  
> >      check_type "va/va.h va/va_dec_hevc.h" "VAPictureParameterBufferHEVC"
> >      check_struct "va/va.h" "VADecPictureParameterBufferVP9" bit_depth
> > +    check_struct "va/va.h va/va_vpp.h" "VAProcPipelineParameterBuffer" output_hdr_metadata
> >      check_struct "va/va.h va/va_vpp.h" "VAProcPipelineCaps" rotation_flags
> >      check_type "va/va.h va/va_enc_hevc.h" "VAEncPictureParameterBufferHEVC"
> >      check_type "va/va.h va/va_enc_jpeg.h" "VAEncPictureParameterBufferJPEG"
> > diff --git a/doc/filters.texi b/doc/filters.texi
> > index 9d500e44a9..3a3e259f8d 100644
> > --- a/doc/filters.texi
> > +++ b/doc/filters.texi
> > @@ -20140,6 +20140,78 @@ Convert HDR(PQ/HLG) video to bt2020-transfer-characteristic p010 format using li
> >  @end example
> >  @end itemize
> >  
> > +@section tonemap_vaapi
> > +
> > +Perform HDR(High Dynamic Range) to HDR and HDR to SDR conversion with tone-mapping.
> > +It maps the dynamic range of HDR10 content to the dynamic range of the
> > +display panel.
> 
> Does this support anything other than HDR10?  (E.g. HLG.)
> 
> > +
> > +It accepts the following parameters:
> > +
> > +@table @option
> > +@item type
> > +Specify the tone-mapping operator to be used.
> > +
> > +Possible values are:
> > +@table @var
> > +@item h2h
> > +Perform H2H(HDR to HDR), convert from p010 to r10g10b10a2
> > +@item h2s
> > +Perform H2S(HDR to SDR), convert from p010 to argb
> 
> Mentioning the actual formats here is probably confusing.  It's 10-bit YUV
> 4:2:0 to RGB in 10-bit or 8-bit (with no alpha channel in either case).
> 
> > +@end table
> > +
> > +@item display
> > +Set mastering display metadata for H2H
> 
> I think this is setting the properties of the output for an HDR
> display?  Please make this clear.
> 
> > +
> > +Can assume the following values:
> > +@table @var
> > +@item G
> > +Green primary G(x|y).
> > +The value for x and y shall be in the range of 0 to 50000 inclusive.
> 
> Make it a fraction rather than using the H.26[45] SEI fixed-point
> representation.
> 
> > +@item B
> > +Blue primary B(x|y).
> > +The value for x and y shall be in the range of 0 to 50000 inclusive.
> > +@item R
> > +Red primary R(x|y).
> > +The value for x and y shall be in the range of 0 to 50000 inclusive.
> > +@item WP
> > +White point WP(x|y).
> > +The value for x and y shall be in the range of 0 to 50000 inclusive.
> > +@item L
> > +Display mastering luminance L(min|max).
> > +The value is in units of 0.0001 candelas per square metre.
> 
> Candelas per square metre (accepting a fractional value) would be clearer.
> 
> > +@end table
> > +
> > +@item light
> > +Set content light level for H2H
> > +
> > +Can assume the following values:
> > +@table @var
> > +@item CLL
> > +Max content light level.
> > +The value is in units of 0.0001 candelas per square metre.
> > +@item FALL
> > +Max average light level per frame.
> > +The value is in units of 0.0001 candelas per square metre.
> 
> Also clearer as a fraction.
> 
> > +@end table
> > +
> > +@end table
> 
> More generally, I think we probably want to agree on a uniform way to express
> these values for filters.  A string representation of
> AVMasteringDisplayMetadata and AVContentLightMetadata could be used in a
> number of different places.
> 
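
For reference, a rough sketch of how a fractional option value could be
handled; parse_display_coord() is a made-up helper name, only av_parse_ratio()
is an existing libavutil API:

    #include <limits.h>
    #include "libavutil/error.h"
    #include "libavutil/parseutils.h"
    #include "libavutil/rational.h"

    /* Hypothetical helper: accept "num:den" or a decimal such as "0.3127"
     * instead of the 0..50000 SEI fixed-point integers. */
    static int parse_display_coord(void *log_ctx, const char *str, AVRational *q)
    {
        int err = av_parse_ratio(q, str, INT_MAX, 0, log_ctx);
        if (err < 0 || q->num < 0 || q->den <= 0)
            return AVERROR(EINVAL);
        return 0;
    }
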
> > +
> > +@subsection Example
> > +
> > +@itemize
> > +@item
> > +Convert HDR video to HDR video from p010 format to r10g10b10a2 format.
> > +@example
> > +-i INPUT -vf "tonemap_vaapi=h2h:display=G(13250|34500)B(7500|3000)R(34000|16000)WP(15635|16450)L(2000|12000):light=CLL(10000)FALL(1000)" OUTPUT
> > +@end example
> > +@item
> > +Convert HDR video to SDR video from p010 format to argb format.
> > +@example
> > +-i INPUT -vf "tonemap_vaapi=h2s" OUTPUT
> > +@end example
> > +@end itemize
> > +
> >  @section unsharp_opencl
> >  
> >  Sharpen or blur the input video.
> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > index 3ef4191d9a..2d0151b182 100644
> > --- a/libavfilter/Makefile
> > +++ b/libavfilter/Makefile
> > @@ -401,6 +401,7 @@ OBJS-$(CONFIG_TMIX_FILTER)                   += vf_mix.o framesync.o
> >  OBJS-$(CONFIG_TONEMAP_FILTER)                += vf_tonemap.o colorspace.o
> >  OBJS-$(CONFIG_TONEMAP_OPENCL_FILTER)         += vf_tonemap_opencl.o colorspace.o opencl.o \
> >                                                  opencl/tonemap.o opencl/colorspace_common.o
> > +OBJS-$(CONFIG_TONEMAP_VAAPI_FILTER)          += vf_tonemap_vaapi.o vaapi_vpp.o
> >  OBJS-$(CONFIG_TPAD_FILTER)                   += vf_tpad.o
> >  OBJS-$(CONFIG_TRANSPOSE_FILTER)              += vf_transpose.o
> >  OBJS-$(CONFIG_TRANSPOSE_NPP_FILTER)          += vf_transpose_npp.o
> > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> > index b675c688ee..f0da9ac16e 100644
> > --- a/libavfilter/allfilters.c
> > +++ b/libavfilter/allfilters.c
> > @@ -381,6 +381,7 @@ extern AVFilter ff_vf_tlut2;
> >  extern AVFilter ff_vf_tmix;
> >  extern AVFilter ff_vf_tonemap;
> >  extern AVFilter ff_vf_tonemap_opencl;
> > +extern AVFilter ff_vf_tonemap_vaapi;
> >  extern AVFilter ff_vf_tpad;
> >  extern AVFilter ff_vf_transpose;
> >  extern AVFilter ff_vf_transpose_npp;
> > diff --git a/libavfilter/vaapi_vpp.c b/libavfilter/vaapi_vpp.c
> > index b5b245c8af..5776243fa0 100644
> > --- a/libavfilter/vaapi_vpp.c
> > +++ b/libavfilter/vaapi_vpp.c
> > @@ -257,6 +257,11 @@ static const VAAPIColourProperties vaapi_colour_standard_map[] = {
> >      { VAProcColorStandardSMPTE170M,   6,  6,  6 },
> >      { VAProcColorStandardSMPTE240M,   7,  7,  7 },
> >      { VAProcColorStandardGenericFilm, 8,  1,  1 },
> > +
> > +#if VA_CHECK_VERSION(2, 3, 0)
> > +    { VAProcColorStandardExplicit,    9,  16, AVCOL_SPC_BT2020_NCL},
> > +#endif
> 
> ?  If explicit values are provided then you don't need it in this map.
> 
> > +
> >  #if VA_CHECK_VERSION(1, 1, 0)
> >      { VAProcColorStandardSRGB,        1, 13,  0 },
> >      { VAProcColorStandardXVYCC601,    1, 11,  5 },
> > diff --git a/libavfilter/vf_tonemap_vaapi.c b/libavfilter/vf_tonemap_vaapi.c
> > new file mode 100644
> > index 0000000000..9b4ab4a365
> > --- /dev/null
> > +++ b/libavfilter/vf_tonemap_vaapi.c
> > @@ -0,0 +1,575 @@
> > +/*
> > + * This file is part of FFmpeg.
> > + *
> > + * FFmpeg is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * FFmpeg is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with FFmpeg; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> > + */
> > +#include <string.h>
> > +
> > +#include "libavutil/avassert.h"
> > +#include "libavutil/mem.h"
> > +#include "libavutil/opt.h"
> > +#include "libavutil/pixdesc.h"
> > +#include "libavutil/mastering_display_metadata.h"
> > +
> > +#include "avfilter.h"
> > +#include "formats.h"
> > +#include "internal.h"
> > +#include "vaapi_vpp.h"
> > +
> > +// ITU-T H.265 Table E.3: Colour Primaries
> > +#define COLOUR_PRIMARY_BT2020            9
> > +#define COLOUR_PRIMARY_BT709             1
> > +// ITU-T H.265 Table E.4 Transfer characteristics
> > +#define TRANSFER_CHARACTERISTICS_BT709   1
> > +#define TRANSFER_CHARACTERISTICS_ST2084  16
> 
> These constants already exist in libavutil/pixfmt.h.
> 
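
For illustration, a small sketch using the enums that already exist in
pixfmt.h; sketch_output_colour() is just an illustrative name:

    #include "libavutil/pixfmt.h"

    /* Sketch only: the enums in pixfmt.h follow ITU-T H.273, so
     * AVCOL_PRI_BT2020 == 9, AVCOL_TRC_SMPTE2084 == 16 and
     * AVCOL_PRI_BT709 == AVCOL_TRC_BT709 == 1 already match the
     * values hardcoded by the defines above. */
    static void sketch_output_colour(int is_h2h,
                                     enum AVColorPrimaries *pri,
                                     enum AVColorTransferCharacteristic *trc)
    {
        if (is_h2h) {
            *pri = AVCOL_PRI_BT2020;
            *trc = AVCOL_TRC_SMPTE2084;
        } else {
            *pri = AVCOL_PRI_BT709;
            *trc = AVCOL_TRC_BT709;
        }
    }
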
> > +
> > +typedef enum {
> > +    HDR_VAAPI_H2H,
> > +    HDR_VAAPI_H2S,
> > +} HDRType;
> > +
> > +typedef struct HDRVAAPIContext {
> > +    VAAPIVPPContext vpp_ctx; // must be the first field
> > +
> > +    int hdr_type;
> > +
> > +    char *master_display;
> > +    char *content_light;
> > +
> > +    VAHdrMetaDataHDR10  in_metadata;
> > +    VAHdrMetaDataHDR10  out_metadata;
> > +
> > +    AVFrameSideData    *src_display;
> > +    AVFrameSideData    *src_light;
> > +} HDRVAAPIContext;
> > +
> > +static int tonemap_vaapi_save_metadata(AVFilterContext *avctx, AVFrame *input_frame)
> > +{
> > +    HDRVAAPIContext *ctx = avctx->priv;
> > +    AVMasteringDisplayMetadata *hdr_meta;
> > +    AVContentLightMetadata *light_meta;
> > +
> > +    ctx->src_display = av_frame_get_side_data(input_frame,
> > +                                              AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
> > +    if (ctx->src_display) {
> > +        hdr_meta = (AVMasteringDisplayMetadata *)ctx->src_display->data;
> > +        if (!hdr_meta) {
> > +            av_log(avctx, AV_LOG_ERROR, "No mastering display data\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +
> > +        if (hdr_meta->has_luminance) {
> > +            const int luma_den = 10000;
> > +            ctx->in_metadata.max_display_mastering_luminance =
> > +                lrint(luma_den * av_q2d(hdr_meta->max_luminance));
> > +            ctx->in_metadata.min_display_mastering_luminance =
> > +                FFMIN(lrint(luma_den * av_q2d(hdr_meta->min_luminance)),
> > +                      ctx->in_metadata.max_display_mastering_luminance);
> > +
> > +            av_log(avctx, AV_LOG_DEBUG,
> > +                   "Mastering Display Metadata(in luminance):\n");
> > +            av_log(avctx, AV_LOG_DEBUG,
> > +                   "min_luminance=%u, max_luminance=%u\n",
> > +                   ctx->in_metadata.min_display_mastering_luminance,
> > +                   ctx->in_metadata.max_display_mastering_luminance);
> > +        }
> > +
> > +        if (hdr_meta->has_primaries) {
> > +            int i;
> > +            const int mapping[3] = {1, 2, 0};  //green, blue, red
> > +            const int chroma_den = 50000;
> > +
> > +            for (i = 0; i < 3; i++) {
> > +                const int j = mapping[i];
> > +                ctx->in_metadata.display_primaries_x[i] =
> > +                    FFMIN(lrint(chroma_den *
> > +                                av_q2d(hdr_meta->display_primaries[j][0])),
> > +                          chroma_den);
> > +                ctx->in_metadata.display_primaries_y[i] =
> > +                    FFMIN(lrint(chroma_den *
> > +                                av_q2d(hdr_meta->display_primaries[j][1])),
> > +                          chroma_den);
> > +            }
> > +
> > +            ctx->in_metadata.white_point_x =
> > +                FFMIN(lrint(chroma_den * av_q2d(hdr_meta->white_point[0])),
> > +                      chroma_den);
> > +            ctx->in_metadata.white_point_y =
> > +                FFMIN(lrint(chroma_den * av_q2d(hdr_meta->white_point[1])),
> > +                      chroma_den);
> > +
> > +            av_log(avctx, AV_LOG_DEBUG,
> > +                   "Mastering Display Metadata(in primaries):\n");
> > +            av_log(avctx, AV_LOG_DEBUG,
> > +                   "G(%u,%u) B(%u,%u) R(%u,%u) WP(%u,%u)\n",
> > +                   ctx->in_metadata.display_primaries_x[0],
> > +                   ctx->in_metadata.display_primaries_y[0],
> > +                   ctx->in_metadata.display_primaries_x[1],
> > +                   ctx->in_metadata.display_primaries_y[1],
> > +                   ctx->in_metadata.display_primaries_x[2],
> > +                   ctx->in_metadata.display_primaries_y[2],
> > +                   ctx->in_metadata.white_point_x,
> > +                   ctx->in_metadata.white_point_y);
> > +        }
> > +    } else {
> > +        av_log(avctx, AV_LOG_DEBUG, "No mastering display data from input\n");
> > +    }
> > +
> > +    ctx->src_light = av_frame_get_side_data(input_frame,
> > +                                            AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
> > +    if (ctx->src_light) {
> > +        light_meta = (AVContentLightMetadata *)ctx->src_light->data;
> > +        if (!light_meta) {
> > +            av_log(avctx, AV_LOG_ERROR, "No light meta data\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +
> > +        ctx->in_metadata.max_content_light_level = light_meta->MaxCLL;
> > +        ctx->in_metadata.max_pic_average_light_level = light_meta->MaxFALL;
> 
> Not the same units?
> 
> > +
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "Mastering Content Light Level (in):\n");
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "MaxCLL(%u) MaxFALL(%u)\n",
> > +               ctx->in_metadata.max_content_light_level,
> > +               ctx->in_metadata.max_pic_average_light_level);
> > +    } else {
> > +        av_log(avctx, AV_LOG_DEBUG, "No content light level from input\n");
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int tonemap_vaapi_set_filter_params(AVFilterContext *avctx, AVFrame *input_frame)
> > +{
> > +    VAAPIVPPContext *vpp_ctx   = avctx->priv;
> > +    HDRVAAPIContext *ctx       = avctx->priv;
> > +    VAStatus vas;
> > +    VAProcFilterParameterBufferHDRToneMapping *hdrtm_param;
> > +
> > +    vas = vaMapBuffer(vpp_ctx->hwctx->display, vpp_ctx->filter_buffers[0],
> > +                      (void**)&hdrtm_param);
> > +    if (vas != VA_STATUS_SUCCESS) {
> > +        av_log(avctx, AV_LOG_ERROR, "Failed to map "
> > +               "buffer (%d): %d (%s).\n",
> > +               vpp_ctx->filter_buffers[0], vas, vaErrorStr(vas));
> > +        return AVERROR(EIO);
> > +    }
> > +
> > +    memcpy(hdrtm_param->data.metadata, &ctx->in_metadata, sizeof(VAHdrMetaDataHDR10));
> > +
> > +    vas = vaUnmapBuffer(vpp_ctx->hwctx->display, vpp_ctx->filter_buffers[0]);
> > +    if (vas != VA_STATUS_SUCCESS) {
> > +        av_log(avctx, AV_LOG_ERROR, "Failed to unmap output buffers: "
> > +               "%d (%s).\n", vas, vaErrorStr(vas));
> > +        return AVERROR(EIO);
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int tonemap_vaapi_build_filter_params(AVFilterContext *avctx)
> > +{
> > +    VAAPIVPPContext *vpp_ctx   = avctx->priv;
> > +    HDRVAAPIContext *ctx       = avctx->priv;
> > +    VAStatus vas;
> > +    VAProcFilterCapHighDynamicRange hdr_cap;
> > +    int num_query_caps;
> > +    VAProcFilterParameterBufferHDRToneMapping hdrtm_param;
> > +
> > +    vas = vaQueryVideoProcFilterCaps(vpp_ctx->hwctx->display,
> > +                                     vpp_ctx->va_context,
> > +                                     VAProcFilterHighDynamicRangeToneMapping,
> > +                                     &hdr_cap, &num_query_caps);
> 
> num_query_caps must be set to the size of your array on entry.
> 
> > +    if (vas != VA_STATUS_SUCCESS) {
> > +        av_log(avctx, AV_LOG_ERROR, "Failed to query HDR caps "
> > +               "context: %d (%s).\n", vas, vaErrorStr(vas));
> > +        return AVERROR(EIO);
> > +    }
> > +
> > +    if (hdr_cap.metadata_type == VAProcHighDynamicRangeMetadataNone) {
> > +        av_log(avctx, AV_LOG_ERROR, "VAAPI driver doesn't support HDR\n");
> > +        return AVERROR(EINVAL);
> > +    }
> 
> Filter caps are an array.  You need to iterate through the array checking each
> entry.
> 
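
A rough sketch covering both points (capacity set on entry, then a scan of the
returned caps array) for the body of tonemap_vaapi_build_filter_params();
untested, and assuming libva >= 2.3 provides
VAProcHighDynamicRangeMetadataTypeCount in va_vpp.h:

    VAProcFilterCapHighDynamicRange hdr_caps[VAProcHighDynamicRangeMetadataTypeCount];
    unsigned int num_caps = FF_ARRAY_ELEMS(hdr_caps);
    unsigned int i;
    int found = 0;

    vas = vaQueryVideoProcFilterCaps(vpp_ctx->hwctx->display, vpp_ctx->va_context,
                                     VAProcFilterHighDynamicRangeToneMapping,
                                     hdr_caps, &num_caps);
    if (vas != VA_STATUS_SUCCESS)
        return AVERROR(EIO);

    /* Scan every returned entry for HDR10 metadata support in the
     * requested tone-mapping direction. */
    for (i = 0; i < num_caps; i++) {
        if (hdr_caps[i].metadata_type != VAProcHighDynamicRangeMetadataHDR10)
            continue;
        if (ctx->hdr_type == HDR_VAAPI_H2H &&
            (hdr_caps[i].caps_flag & VA_TONE_MAPPING_HDR_TO_HDR))
            found = 1;
        if (ctx->hdr_type == HDR_VAAPI_H2S &&
            (hdr_caps[i].caps_flag & VA_TONE_MAPPING_HDR_TO_SDR))
            found = 1;
    }
    if (!found)
        return AVERROR(EINVAL);
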
> > +
> > +    switch (ctx->hdr_type) {
> > +    case HDR_VAAPI_H2H:
> > +        if (!(VA_TONE_MAPPING_HDR_TO_HDR & hdr_cap.caps_flag)) {
> > +            av_log(avctx, AV_LOG_ERROR,
> > +                   "VAAPI driver doesn't support H2H\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +        break;
> > +    case HDR_VAAPI_H2S:
> > +        if (!(VA_TONE_MAPPING_HDR_TO_SDR & hdr_cap.caps_flag)) {
> > +            av_log(avctx, AV_LOG_ERROR,
> > +                   "VAAPI driver doesn't support H2S\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +        break;
> > +    default:
> > +        av_assert0(0);
> > +    }
> > +
> > +    memset(&hdrtm_param, 0, sizeof(hdrtm_param));
> > +    memset(&ctx->in_metadata, 0, sizeof(ctx->in_metadata));
> > +    hdrtm_param.type = VAProcFilterHighDynamicRangeToneMapping;
> > +    hdrtm_param.data.metadata_type = VAProcHighDynamicRangeMetadataHDR10;
> > +    hdrtm_param.data.metadata      = &ctx->in_metadata;
> > +    hdrtm_param.data.metadata_size = sizeof(VAHdrMetaDataHDR10);
> > +
> > +    ff_vaapi_vpp_make_param_buffers(avctx,
> > +                                    VAProcFilterParameterBufferType,
> > +                                    &hdrtm_param, sizeof(hdrtm_param), 1);
> 
> Unchecked.
> 
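
i.e. something along these lines (assuming an int err local in this function):

    err = ff_vaapi_vpp_make_param_buffers(avctx,
                                          VAProcFilterParameterBufferType,
                                          &hdrtm_param, sizeof(hdrtm_param), 1);
    if (err < 0)
        return err;
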
> > +
> > +    return 0;
> > +}
> > +
> > +static int tonemap_vaapi_update_sidedata(AVFilterContext *avctx, AVFrame *output_frame)
> > +{
> > +    HDRVAAPIContext *ctx = avctx->priv;
> > +    AVFrameSideData *metadata;
> > +    AVMasteringDisplayMetadata *hdr_meta;
> > +    AVFrameSideData *metadata_lt;
> > +    AVContentLightMetadata *hdr_meta_lt;
> > +
> > +    metadata = av_frame_get_side_data(output_frame,
> > +                                      AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
> > +    if (metadata) {
> > +        int i;
> > +        const int mapping[3] = {1, 2, 0};  //green, blue, red
> > +        const int chroma_den = 50000;
> > +        const int luma_den   = 10000;
> > +
> > +        hdr_meta = (AVMasteringDisplayMetadata *)metadata->data;
> > +        if (!hdr_meta) {
> > +            av_log(avctx, AV_LOG_ERROR, "No mastering display data\n");
> > +            return AVERROR(EINVAL);
> > +        }
> 
> Side-data is refcounted - you can't just blindly write to it.
> 
> I think you want remove_side_data() followed by new_side_data()?
> 
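
A rough sketch of the remove-and-recreate approach;
av_mastering_display_metadata_create_side_data() allocates fresh, writable
side data on the frame, and av_content_light_metadata_create_side_data() can
be used the same way for the content-light case:

    AVMasteringDisplayMetadata *hdr_meta;

    av_frame_remove_side_data(output_frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
    hdr_meta = av_mastering_display_metadata_create_side_data(output_frame);
    if (!hdr_meta)
        return AVERROR(ENOMEM);
    /* ... then fill hdr_meta from ctx->out_metadata as below ... */
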
> > +
> > +        for (i = 0; i < 3; i++) {
> > +            const int j = mapping[i];
> > +            hdr_meta->display_primaries[j][0].num = ctx->out_metadata.display_primaries_x[i];
> > +            hdr_meta->display_primaries[j][0].den = chroma_den;
> > +
> > +            hdr_meta->display_primaries[j][1].num = ctx->out_metadata.display_primaries_y[i];
> > +            hdr_meta->display_primaries[j][1].den = chroma_den;
> > +        }
> > +
> > +        hdr_meta->white_point[0].num = ctx->out_metadata.white_point_x;
> > +        hdr_meta->white_point[0].den = chroma_den;
> > +
> > +        hdr_meta->white_point[1].num = ctx->out_metadata.white_point_y;
> > +        hdr_meta->white_point[1].den = chroma_den;
> > +        hdr_meta->has_primaries = 1;
> > +
> > +        hdr_meta->max_luminance.num = ctx->out_metadata.max_display_mastering_luminance;
> > +        hdr_meta->max_luminance.den = luma_den;
> > +
> > +        hdr_meta->min_luminance.num = ctx->out_metadata.min_display_mastering_luminance;
> > +        hdr_meta->min_luminance.den = luma_den;
> > +        hdr_meta->has_luminance = 1;
> > +
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "Mastering Display Metadata(out luminance):\n");
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "min_luminance=%u, max_luminance=%u\n",
> > +               ctx->out_metadata.min_display_mastering_luminance,
> > +               ctx->out_metadata.max_display_mastering_luminance);
> > +
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "Mastering Display Metadata(out primaries):\n");
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "G(%u,%u) B(%u,%u) R(%u,%u) WP(%u,%u)\n",
> > +               ctx->out_metadata.display_primaries_x[0],
> > +               ctx->out_metadata.display_primaries_y[0],
> > +               ctx->out_metadata.display_primaries_x[1],
> > +               ctx->out_metadata.display_primaries_y[1],
> > +               ctx->out_metadata.display_primaries_x[2],
> > +               ctx->out_metadata.display_primaries_y[2],
> > +               ctx->out_metadata.white_point_x,
> > +               ctx->out_metadata.white_point_y);
> > +    } else {
> > +        av_log(avctx, AV_LOG_DEBUG, "No mastering display data for output\n");
> > +    }
> > +
> > +    metadata_lt = av_frame_get_side_data(output_frame,
> > +                                         AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
> > +    if (metadata_lt) {
> > +        hdr_meta_lt = (AVContentLightMetadata *)metadata_lt->data;
> > +        if (!hdr_meta_lt) {
> > +            av_log(avctx, AV_LOG_ERROR, "No light meta data\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +
> > +        hdr_meta_lt->MaxCLL = FFMIN(ctx->out_metadata.max_content_light_level, 65535);
> > +        hdr_meta_lt->MaxFALL = FFMIN(ctx->out_metadata.max_pic_average_light_level, 65535);
> > +
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "Mastering Content Light Level (out):\n");
> > +        av_log(avctx, AV_LOG_DEBUG,
> > +               "MaxCLL(%u) MaxFALL(%u)\n",
> > +               ctx->out_metadata.max_content_light_level,
> > +               ctx->out_metadata.max_pic_average_light_level);
> > +
> > +    } else {
> > +        av_log(avctx, AV_LOG_DEBUG, "No content light level for output\n");
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int tonemap_vaapi_filter_frame(AVFilterLink *inlink, AVFrame *input_frame)
> > +{
> > +    AVFilterContext *avctx     = inlink->dst;
> > +    AVFilterLink *outlink      = avctx->outputs[0];
> > +    VAAPIVPPContext *vpp_ctx   = avctx->priv;
> > +    HDRVAAPIContext *ctx       = avctx->priv;
> > +    AVFrame *output_frame      = NULL;
> > +    VASurfaceID input_surface, output_surface;
> > +    VARectangle input_region;
> > +
> > +    VAProcPipelineParameterBuffer params;
> > +    int err;
> > +
> > +    VAHdrMetaData              out_hdr_metadata;
> > +
> > +    av_log(avctx, AV_LOG_DEBUG, "Filter input: %s, %ux%u (%"PRId64").\n",
> > +           av_get_pix_fmt_name(input_frame->format),
> > +           input_frame->width, input_frame->height, input_frame->pts);
> > +
> > +    if (vpp_ctx->va_context == VA_INVALID_ID)
> > +        return AVERROR(EINVAL);
> > +
> > +    err = tonemap_vaapi_save_metadata(avctx, input_frame);
> > +    if (err < 0)
> > +        goto fail;
> > +
> > +    err = tonemap_vaapi_set_filter_params(avctx, input_frame);
> > +    if (err < 0)
> > +        goto fail;
> > +
> > +    input_surface = (VASurfaceID)(uintptr_t)input_frame->data[3];
> > +    av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for tonemap vpp input.\n",
> > +           input_surface);
> > +
> > +    output_frame = ff_get_video_buffer(outlink, vpp_ctx->output_width,
> > +                                       vpp_ctx->output_height);
> > +    if (!output_frame) {
> > +        err = AVERROR(ENOMEM);
> > +        goto fail;
> > +    }
> > +
> > +    output_surface = (VASurfaceID)(uintptr_t)output_frame->data[3];
> > +    av_log(avctx, AV_LOG_DEBUG, "Using surface %#x for tonemap vpp output.\n",
> > +           output_surface);
> > +    memset(&params, 0, sizeof(params));
> > +    input_region = (VARectangle) {
> > +        .x      = 0,
> > +        .y      = 0,
> > +        .width  = input_frame->width,
> > +        .height = input_frame->height,
> > +    };
> > +
> > +    params.filters     = &vpp_ctx->filter_buffers[0];
> > +    params.num_filters = vpp_ctx->nb_filter_buffers;
> > +
> > +    err = ff_vaapi_vpp_init_params(avctx, &params,
> > +                                   input_frame, output_frame);
> > +    if (err < 0)
> > +        goto fail;
> > +
> > +    switch (ctx->hdr_type)
> > +    {
> > +    case HDR_VAAPI_H2H:
> > +        params.output_color_standard = VAProcColorStandardExplicit;
> > +        params.output_color_properties.colour_primaries = COLOUR_PRIMARY_BT2020;
> > +        params.output_color_properties.transfer_characteristics = TRANSFER_CHARACTERISTICS_ST2084;
> > +        break;
> > +    case HDR_VAAPI_H2S:
> > +        params.output_color_standard = VAProcColorStandardBT709;
> > +        params.output_color_properties.colour_primaries = COLOUR_PRIMARY_BT709;
> > +        params.output_color_properties.transfer_characteristics = TRANSFER_CHARACTERISTICS_BT709;
> > +        break;
> > +    default:
> > +        av_assert0(0);
> > +    }
> 
> You want to get these by setting the colour properties on the output frame
> before calling vpp_init_params().  See vf_scale_vaapi.c for an example of
> setting the properties for colourspace conversion in the non-tonemapping case.
> 
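
For illustration, a sketch of that pattern (untested): set the colour
properties on the output frame and let ff_vaapi_vpp_init_params() derive the
VAAPI settings from them, instead of filling params directly here:

    if (ctx->hdr_type == HDR_VAAPI_H2H) {
        output_frame->color_primaries = AVCOL_PRI_BT2020;
        output_frame->color_trc       = AVCOL_TRC_SMPTE2084;
    } else {
        output_frame->color_primaries = AVCOL_PRI_BT709;
        output_frame->color_trc       = AVCOL_TRC_BT709;
    }

    err = ff_vaapi_vpp_init_params(avctx, &params, input_frame, output_frame);
    if (err < 0)
        goto fail;
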
> > +
> > +    if (ctx->hdr_type == HDR_VAAPI_H2H) {
> > +        memset(&out_hdr_metadata, 0, sizeof(out_hdr_metadata));
> > +        if (!ctx->master_display) {
> > +            av_log(avctx, AV_LOG_ERROR,
> > +                   "Option mastering-display input invalid\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +
> > +        if (10 != sscanf(ctx->master_display,
> > +                         "G(%hu|%hu)B(%hu|%hu)R(%hu|%hu)WP(%hu|%hu)L(%u|%u)",
> > +                         &ctx->out_metadata.display_primaries_x[0],
> > +                         &ctx->out_metadata.display_primaries_y[0],
> > +                         &ctx->out_metadata.display_primaries_x[1],
> > +                         &ctx->out_metadata.display_primaries_y[1],
> > +                         &ctx->out_metadata.display_primaries_x[2],
> > +                         &ctx->out_metadata.display_primaries_y[2],
> > +                         &ctx->out_metadata.white_point_x,
> > +                         &ctx->out_metadata.white_point_y,
> > +                         &ctx->out_metadata.min_display_mastering_luminance,
> > +                         &ctx->out_metadata.max_display_mastering_luminance)) {
> > +            av_log(avctx, AV_LOG_ERROR,
> > +                   "Option mastering-display input invalid\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +
> > +        if (!ctx->content_light) {
> > +            av_log(avctx, AV_LOG_ERROR,
> > +                   "Option content-light input invalid\n");
> > +            return AVERROR(EINVAL);
> > +        }
> > +
> > +        if (2 != sscanf(ctx->content_light,
> > +                        "CLL(%hu)FALL(%hu)",
> > +                        &ctx->out_metadata.max_content_light_level,
> > +                        &ctx->out_metadata.max_pic_average_light_level)) {
> > +            av_log(avctx, AV_LOG_ERROR,
> > +                   "Option content-light input invalid\n");
> > +            return AVERROR(EINVAL);
> > +        }
> 
> The sscanf() operations should probably happen once at the beginning rather
> than for every frame.
> 
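
A sketch of doing the parsing once, e.g. from a helper called in
tonemap_vaapi_init() for the h2h case (parse_h2h_metadata() is a made-up
name, the format strings are the ones from the patch):

    static int parse_h2h_metadata(AVFilterContext *avctx)
    {
        HDRVAAPIContext *ctx = avctx->priv;

        if (!ctx->master_display || !ctx->content_light)
            return AVERROR(EINVAL);

        if (sscanf(ctx->master_display,
                   "G(%hu|%hu)B(%hu|%hu)R(%hu|%hu)WP(%hu|%hu)L(%u|%u)",
                   &ctx->out_metadata.display_primaries_x[0],
                   &ctx->out_metadata.display_primaries_y[0],
                   &ctx->out_metadata.display_primaries_x[1],
                   &ctx->out_metadata.display_primaries_y[1],
                   &ctx->out_metadata.display_primaries_x[2],
                   &ctx->out_metadata.display_primaries_y[2],
                   &ctx->out_metadata.white_point_x,
                   &ctx->out_metadata.white_point_y,
                   &ctx->out_metadata.min_display_mastering_luminance,
                   &ctx->out_metadata.max_display_mastering_luminance) != 10)
            return AVERROR(EINVAL);

        if (sscanf(ctx->content_light, "CLL(%hu)FALL(%hu)",
                   &ctx->out_metadata.max_content_light_level,
                   &ctx->out_metadata.max_pic_average_light_level) != 2)
            return AVERROR(EINVAL);

        return 0;
    }
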
> > +
> > +        out_hdr_metadata.metadata_type = VAProcHighDynamicRangeMetadataHDR10;
> > +        out_hdr_metadata.metadata      = &ctx->out_metadata;
> > +        out_hdr_metadata.metadata_size = sizeof(VAHdrMetaDataHDR10);
> > +
> > +        params.output_hdr_metadata = &out_hdr_metadata;
> > +    }
> > +
> > +    err = ff_vaapi_vpp_render_picture(avctx, &params, output_frame);
> > +    if (err < 0)
> > +        goto fail;
> > +
> > +    err = av_frame_copy_props(output_frame, input_frame);
> > +    if (err < 0)
> > +        goto fail;
> 
> This overwrites all of the colour properties set by vpp_init_params().  See
> other filters for the right ordering here.
> 
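
For reference, the ordering used by vf_scale_vaapi.c and the other VAAPI
filters is roughly:

    /* copy the input properties first ... */
    err = av_frame_copy_props(output_frame, input_frame);
    if (err < 0)
        goto fail;

    /* ... then override whatever the filter changes (colour properties,
     * HDR side data), and only then build the pipeline parameters ... */
    err = ff_vaapi_vpp_init_params(avctx, &params, input_frame, output_frame);
    if (err < 0)
        goto fail;

    /* ... and render last. */
    err = ff_vaapi_vpp_render_picture(avctx, &params, output_frame);
    if (err < 0)
        goto fail;
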
> > +
> > +    if (ctx->hdr_type == HDR_VAAPI_H2H) {
> > +        err = tonemap_vaapi_update_sidedata(avctx, output_frame);
> > +        if (err < 0)
> > +            goto fail;
> > +    }
> > +
> > +    av_frame_free(&input_frame);
> > +
> > +    av_log(avctx, AV_LOG_DEBUG, "Filter output: %s, %ux%u (%"PRId64").\n",
> > +           av_get_pix_fmt_name(output_frame->format),
> > +           output_frame->width, output_frame->height, output_frame->pts);
> > +
> > +    return ff_filter_frame(outlink, output_frame);
> > +
> > +fail:
> > +    av_frame_free(&input_frame);
> > +    av_frame_free(&output_frame);
> > +    return err;
> > +}
> > +
> > +static av_cold int tonemap_vaapi_init(AVFilterContext *avctx)
> > +{
> > +    VAAPIVPPContext *vpp_ctx = avctx->priv;
> > +    HDRVAAPIContext *ctx     = avctx->priv;
> > +
> > +    ff_vaapi_vpp_ctx_init(avctx);
> > +    vpp_ctx->build_filter_params = tonemap_vaapi_build_filter_params;
> > +    vpp_ctx->pipeline_uninit = ff_vaapi_vpp_pipeline_uninit;
> > +
> > +    if (ctx->hdr_type == HDR_VAAPI_H2H) {
> > +        vpp_ctx->output_format = AV_PIX_FMT_A2R10G10B10;
> > +    } else if (ctx->hdr_type == HDR_VAAPI_H2S) {
> > +        vpp_ctx->output_format = AV_PIX_FMT_ARGB;
> > +    } else {
> > +        av_assert0(0);
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int tonemap_vaapi_vpp_query_formats(AVFilterContext *avctx)
> > +{
> > +    int err;
> > +
> > +    enum AVPixelFormat pix_in_fmts[] = {
> > +        AV_PIX_FMT_P010,     //Input
> > +    };
> > +
> > +    enum AVPixelFormat pix_out_fmts[] = {
> > +        AV_PIX_FMT_A2R10G10B10,   //H2H RGB10
> > +        AV_PIX_FMT_ARGB,          //H2S RGB8
> > +    };
> > +
> > +    err = ff_formats_ref(ff_make_format_list(pix_in_fmts),
> > +                         &avctx->inputs[0]->out_formats);
> > +    if (err < 0)
> > +        return err;
> > +
> > +    err = ff_formats_ref(ff_make_format_list(pix_out_fmts),
> > +                         &avctx->outputs[0]->in_formats);
> > +    if (err < 0)
> > +        return err;
> 
> What is this trying to do?  Everything done here is overwritten by the
> following vpp_query_formats() as far as I can tell.
> 
> > +
> > +    return ff_vaapi_vpp_query_formats(avctx);
> > +}
> > +
> > +#define OFFSET(x) offsetof(HDRVAAPIContext, x)
> > +#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_FILTERING_PARAM)
> > +static const AVOption tonemap_vaapi_options[] = {
> > +    { "type",    "hdr type",            OFFSET(hdr_type), AV_OPT_TYPE_INT, { .i64 = HDR_VAAPI_H2H }, 0, 2, FLAGS, "type" },
> > +        { "h2h", "vaapi P010 to A2R10G10B10", 0, AV_OPT_TYPE_CONST, {.i64=HDR_VAAPI_H2H}, INT_MIN, INT_MAX, FLAGS, "type" },
> > +        { "h2s", "vaapi P010 to ARGB",        0, AV_OPT_TYPE_CONST, {.i64=HDR_VAAPI_H2S}, INT_MIN, INT_MAX, FLAGS, "type" },
> > +    { "display", "set master display",  OFFSET(master_display), AV_OPT_TYPE_STRING, {.str=NULL}, CHAR_MIN, CHAR_MAX, FLAGS },
> > +    { "light",   "set content light",   OFFSET(content_light),  AV_OPT_TYPE_STRING, {.str=NULL}, CHAR_MIN, CHAR_MAX, FLAGS },
> 
> There should probably be an option to select the output colourspace properties
> as well (rather than hardcoding them above).
> 
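
Something along these lines could work; the option and field names below are
hypothetical (out_primaries/out_transfer would be new int fields in
HDRVAAPIContext), only the AVCOL_* enums are existing API:

    { "primaries", "output colour primaries",
      OFFSET(out_primaries), AV_OPT_TYPE_INT, { .i64 = AVCOL_PRI_UNSPECIFIED },
      0, AVCOL_PRI_NB - 1, FLAGS, "primaries" },
    { "transfer",  "output transfer characteristics",
      OFFSET(out_transfer),  AV_OPT_TYPE_INT, { .i64 = AVCOL_TRC_UNSPECIFIED },
      0, AVCOL_TRC_NB - 1, FLAGS, "transfer" },
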
> > +    { NULL }
> > +};
> > +
> > +
> > +AVFILTER_DEFINE_CLASS(tonemap_vaapi);
> > +
> > +static const AVFilterPad tonemap_vaapi_inputs[] = {
> > +    {
> > +        .name         = "default",
> > +        .type         = AVMEDIA_TYPE_VIDEO,
> > +        .filter_frame = &tonemap_vaapi_filter_frame,
> > +        .config_props = &ff_vaapi_vpp_config_input,
> > +    },
> > +    { NULL }
> > +};
> > +
> > +static const AVFilterPad tonemap_vaapi_outputs[] = {
> > +    {
> > +        .name = "default",
> > +        .type = AVMEDIA_TYPE_VIDEO,
> > +        .config_props = &ff_vaapi_vpp_config_output,
> > +    },
> > +    { NULL }
> > +};
> > +
> > +AVFilter ff_vf_tonemap_vaapi = {
> > +    .name           = "tonemap_vaapi",
> > +    .description    = NULL_IF_CONFIG_SMALL("VAAPI VPP for tonemap"),
> > +    .priv_size      = sizeof(HDRVAAPIContext),
> > +    .init           = &tonemap_vaapi_init,
> > +    .uninit         = &ff_vaapi_vpp_ctx_uninit,
> > +    .query_formats  = &tonemap_vaapi_vpp_query_formats,
> > +    .inputs         = tonemap_vaapi_inputs,
> > +    .outputs        = tonemap_vaapi_outputs,
> > +    .priv_class     = &tonemap_vaapi_class,
> > +    .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
> > +};
> > 
> 
> - Mark
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-request at ffmpeg.org with subject "unsubscribe".


More information about the ffmpeg-devel mailing list