[FFmpeg-devel] SDR->HDR tone mapping algorithm?

Niklas Haas ffmpeg at haasn.xyz
Wed Feb 13 10:53:11 EET 2019


Hi Harish,

On Tue, 12 Feb 2019 19:43:33 +0530, Harish Krupo <harish.krupo.kps at intel.com> wrote:
> Hi Niklas,
> 
> Thanks a lot for your comments. Please find my reply inline.
> 
> Niklas Haas <ffmpeg at haasn.xyz> writes:
> 
> > Hi,
> >
> > The important thing to consider is what constraints we are trying to
> > solve. And I think the expected behavior is that an SDR signal in SDR
> > mode should look identical to an SDR signal in HDR mode, to the end
> > user.
> >
> > This is, of course, an impossible constraint to solve, since we don't
> > know anything about the display, either in HDR or in SDR mode. At best,
> > in the absence of this knowledge, we could make a guess (e.g. it's
> > roughly described by sRGB in SDR mode, and for HDR mode it roughly
> > follows the techniques outlined in ITU-R Report BT.2390). Better yet
> > would be to actually obtain this information from somewhere, but where?
> > (The user? ICC profile? EDID?).
> 
> Being the compositor, we already have access to the EDID, which means we can
> make intelligent decisions based on the capabilities of the display. Another
> benefit of being the compositor is having complete knowledge of all the
> buffers to be displayed, so we can make informed decisions about the optimal
> output for the display.

The problem remains that the EDID information is really lacking. It
doesn't give us specifics about what curve the display implements. The
only realistic way of getting that information is with an ICC profile.

Now, the average user will obviously not have an ICC profile for their
display, but the average user will also most likely not care about
colorimetric accuracy. So we can use an approximation based on EDID for
the average case and consult an ICC profile instead when one is
available.
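
For completeness: essentially the only transfer-related information in the
base EDID block is a single "display gamma" byte, so the EDID-based
approximation can hardly be more than a pure power curve. A minimal sketch of
such a fallback (the helper names are mine, not taken from any existing code):

    #include <math.h>
    #include <stdint.h>

    /* EDID byte 0x17 stores (gamma * 100) - 100; 0xFF means the gamma is
     * not defined in the base block. */
    static double edid_display_gamma(const uint8_t *edid)
    {
        uint8_t g = edid[0x17];
        if (g == 0xFF)
            return 2.2;               /* fall back to a common guess */
        return (g + 100) / 100.0;     /* e.g. 120 -> gamma 2.2 */
    }

    /* Fallback SDR EOTF for the "no ICC profile" case: a pure power curve
     * derived from the EDID gamma byte. v is a non-linear signal value in
     * [0,1]; returns relative linear light in [0,1]. */
    static double sdr_eotf_from_edid(const uint8_t *edid, double v)
    {
        return pow(v, edid_display_gamma(edid));
    }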

> > But the bottom line is that to solve the "make SDR in HDR mode appear
> > identical to SDR in SDR mode" constraint, the curve you are trying to
> > invert is not your own tone mapping operator, but the tone mapping
> > operator implemented by the display (in HDR mode), which definitely
> > depends on what brightness level the display is targeting (in both SDR
> > and HDR modes).
> >
> 
> To explain our implementation better: we decide on the target
> HDR metadata and eotf and use this both for tone mapping and for
> setting the output display configuration (like setting HDMI AVI infoframes),
> which means the display and the compositor are in sync about the eotf curve.

There's no strict definition for how this HDR metadata is to be parsed,
so the display is still free to do whatever it wants. Actually, the
existence of HDR metadata further complicates matters, because a display
that does metadata-dependent tone mapping can't even easily be
characterized by an ICC profile (without making a separate profile for
every possible metadata value you want to send it).

This means that, when using an ICC profile, we must force the display
into the exact set of metadata that was used when generating that ICC profile.

Most likely, rather than just having two modes "SDR" and "HDR", we
would end up having a list of possible display modes, each with
different associated metadata parameters (curve, peak, gamut, ...). If
one of these modes has an attached ICC profile, that ICC profile is only
valid for exactly that mode.
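
To make that concrete, the per-mode state could end up looking something like
the sketch below. The names are hypothetical and only meant to illustrate that
an ICC profile, if present, is bound to one fixed set of metadata:

    /* Hypothetical description of one mode the compositor can drive the
     * display in. Purely illustrative; not from any existing codebase. */
    struct display_mode {
        /* Metadata sent to the display for this mode (HDMI infoframes etc.) */
        enum { EOTF_SDR_GAMMA, EOTF_PQ, EOTF_HLG } eotf;
        double peak_luminance_nits;   /* target/mastering peak luminance   */
        double primaries[8];          /* R, G, B, W chromaticities (x, y)  */

        /* Optional characterization. Only valid when the display is driven
         * with exactly the metadata above; NULL means "no profile, fall
         * back to built-in approximations". */
        struct icc_profile *icc;      /* hypothetical ICC profile handle   */
    };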

> > For an ideal HDR display, this would simply be the PQ curve's exact
> > definition (i.e. noop tone mapping). But in practice, the display will
> > almost surely not be capable of displaying up to 10,000 nits, so it will
> > implement a tone mapping operator of some kind (even if it's as simple
> > as clipping the extra range). Some colorimetric/reference displays
> > actually do the latter, since they prefer clipping out-of-range signals
> > over distorting in-range ones. But most consumer displays will probably
> > do something similar to the hable curve, most likely in per-channel
> > modes.
> >
> 
> I agree. This is something we thought of, but as these
> implementations are internal to the display, we don't have any
> control over them anyway.

Indeed. We should at least try to cover the most common cases, though.
For more specific use cases, we should support ICC profiles as discussed
above.

There is a fairly obvious distinction here between an "ICC mode" and a
"non-ICC mode". More specifically:

ICC mode:
- cannot set metadata dynamically, must set the exact values the ICC
  profile was generated for (but we can still dynamically pick the best
  mode based on the content)
- more complicated handling, probably needs at least 1D LUTs, worst case
  scenario is a 3D LUT

non-ICC mode:
- can set metadata dynamically based on the contents
- instead of an ICC profile, we need to implement our own EOTF functions
  based on the metadata values

Most likely, in terms of the UI, the user will be able to provide a set
of ICC profiles + metadata for the display, as well as choose a default
EOTF/gamut to assume in the absence of an ICC profile.
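
For the non-ICC path, the HDR EOTFs themselves are at least well defined. As
an illustration of what "implement our own EOTF functions" would mean in
practice, here is a minimal, untested sketch of the SMPTE ST 2084 (PQ) EOTF
in C (constants as defined in the spec):

    #include <math.h>

    /* SMPTE ST 2084 (PQ) EOTF: maps a non-linear signal value in [0,1] to
     * absolute luminance in cd/m^2 (up to 10,000 nits). */
    static double pq_eotf(double v)
    {
        const double m1 = 2610.0 / 16384.0;
        const double m2 = 2523.0 / 4096.0 * 128.0;
        const double c1 = 3424.0 / 4096.0;
        const double c2 = 2413.0 / 4096.0 * 32.0;
        const double c3 = 2392.0 / 4096.0 * 32.0;

        double p   = pow(v, 1.0 / m2);
        double num = fmax(p - c1, 0.0);
        double den = c2 - c3 * p;

        return 10000.0 * pow(num / den, 1.0 / m1);
    }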

> > For an ideal SDR display, it depends on who you ask (w.r.t what "ideal"
> > means). In the ITU-R world, an ideal SDR reference display implements
> > the BT.1886 transfer function. In practice, it's probably closer to a
> > pure power gamma 2.2 curve. Or maybe sRGB. We really have nothing else
> > to do here except either consult an ICC profile or just stick our head
> > in the sand and guess randomly.
> >
> 
> In our implementation:
> - When we have a combination of HDR and SDR content to be displayed, we
> apply the proper degamma/eotf to each buffer, convert its colorspace to
> the target gamut, blend the buffers, apply the output inverse_eotf of
> the HDR content and then send the result to the display.
> - If there are only SDR buffer(s), we do not touch them and send them to
> the display, setting the right (SDR) AVI infoframe.

Yes, that's the general approach. The question is more about what
"proper degamma/eotf" and "inverse_eotf" mean. If you assume "SDR EOTF"
is "whatever the display implements in SDR mode", and "HDR EOTF" is
"whatever the display implements in HDR mode", then we should try to
approximate these functions as best we can.

The easy way out would be to just hard-code something like "sRGB" and
"PQ", but then stuff will look (noticeably) wrong in practice. And users
will notice the transition between SDR and HDR modes. Which is why I
think it's important to make this transition as smooth as possible.
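
As a middle ground between hard-coding sRGB and requiring a full ICC profile,
the SDR-mode approximation could at least take the display's white and black
levels into account via the BT.1886 EOTF mentioned in the quote above. A
minimal, untested sketch, assuming we know (or guess) the display's white
luminance Lw and black luminance Lb in cd/m^2:

    #include <math.h>

    /* ITU-R BT.1886 EOTF: non-linear signal v in [0,1] -> luminance in
     * cd/m^2, parameterized by the display's white (Lw) and black (Lb)
     * luminance. With Lb = 0 this degenerates to a pure 2.4 power curve. */
    static double bt1886_eotf(double v, double Lw, double Lb)
    {
        const double gamma = 2.4;
        double lw_g = pow(Lw, 1.0 / gamma);
        double lb_g = pow(Lb, 1.0 / gamma);
        double a = pow(lw_g - lb_g, gamma);
        double b = lb_g / (lw_g - lb_g);

        return a * pow(fmax(v + b, 0.0), gamma);
    }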

> Do you think this is good enough?
> More details here [1].

> 
> > --------------------------------------------------------------------------
> >
> > I'd also like to comment on your compositor design proposal. A few notes:
> >
> > 1. It's always beneficial to do as few color conversion steps as
> >    possible, to minimize cumulative errors and optimize performance. If
> >    you use a 3DLUT as any step (e.g. for implementing an ICC-profile
> >    based mapping), the 3DLUT should be as "wide" as possible and cover
> >    as many operations as possible, so that the 3DLUT can be end-to-end
> >    optimized (by the CMM).
> >
> >    If you insist on doing compositing in linear light, then I would
> >    probably composite in display-referred linear light and convert it to
> >    non-linear light during scanout (either by implementing the needed
> >    OETF + linear tone mapping operator via the VCGTs, or by doing a
> >    non-linear tone mapping pass). But I would recommend trying to avoid
> >    any second gamut conversion step (e.g. from BT.2020 to the display's
> >    space after compositing).
> >
> >    Otherwise, I would composite directly in the target color space
> >    (saving us one final conversion step), which would obviously be
> >    preferable if there are no transparency effects to worry about.
> >    Maybe we could even switch dynamically between the two depending on
> >    whether any blending needs to occur? Assuming we can update the VCGTs
> >    atomically and without meaningful latency.
> >
> 
> We agree, and that's why, while deciding the target color space in the
> compositor, we consider the display's supported colorspaces. This means
> we will only apply one gamut conversion step per buffer, namely into the
> target colorspace.

Seems reasonable. The second half of my comment was more about how to
possibly avoid needing the second inverse EOTF step.
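
To sketch what folding that step into scanout could look like: if the
composited framebuffer is kept in display-referred linear light, the
per-channel inverse EOTF can be baked into the CRTC gamma ramp / VCGT instead
of a separate shader pass. The helper below is hypothetical and assumes the
display EOTF is approximated by a pure power curve:

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Fill a 1D LUT mapping display-referred linear light back to the
     * non-linear signal the display expects, so blending can stay linear
     * and the conversion happens at scanout via the gamma ramp. */
    static void fill_inverse_eotf_lut(uint16_t *lut, size_t size, double gamma)
    {
        for (size_t i = 0; i < size; i++) {
            double linear = (double)i / (double)(size - 1);
            double signal = pow(linear, 1.0 / gamma);  /* inverse of v^gamma */
            lut[i] = (uint16_t)(signal * 65535.0 + 0.5);
        }
    }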

> > 2. Rec 2020 is not (inherently) HDR. Also, the choice of color gamut has
> >    nothing to do with the choice of transfer function. I might have Rec
> >    709 HDR content. In general, when ingesting a buffer, the user should
> >    be responsible for tagging both its color primaries and its transfer
> >    function.
> >
> 
> We are adding a few protocols to provide exactly this information.
> 
> > 3. If you're compositing in linear light, then you most likely want to
> >    be using at least 16-bit per channel floating point buffers, with 1.0
> >    mapping to "SDR white", and HDR values being treated as above 1.0.
> >
> >    This is also a good color space to use for ingesting buffers, since
> >    it allows treating SDR and HDR inputs "identically", but extreme
> >    caution must be applied due to the fact that with floating point
> >    buffers, we're left at the mercy of what the client wants to put into
> >    them (10^20? NaN? Negative values?). Extra metadata must still be
> >    communicated between the client and the compositor to ensure both
> >    sides agree on the signal range of the floating point buffer
> >    contents.
> >
> > 4. Applications need a way to bypass the color pipeline in the
> >    compositor, i.e. applications need a way to tag their buffers as
> >    "this buffer is in display N's native (SDR|HDR) color space". This of
> >    course only makes sense if applications both have a way of knowing
> >    what display N's native SDR/HDR color space is, as well as which
> >    display N they're being displayed (more) on. Such buffers should be
> >    preserved as much as possible end-to-end, ideally being just directly
> >    scanned out as-is.
> >
> 
> The compositor has good enough information about the system state: it
> takes a system-wide view of all the buffers from all the applications
> and comes up with a target colorspace and HDR metadata. This means that
> the application need not bypass, or even be concerned about, the output
> colorspace. Applications should just send the buffers' original
> colorspace and metadata information and trust that the compositor will
> take care of the rest. :)

Yes, this works fine for most programs out there, but some applications
will either want or need to bypass the compositor regardless. This is
relevant whenever the application either needs to guarantee exact
monitor pixel values (e.g. for calibration or reference purposes), or
(more commonly) can do a better job of tone mapping / gamut mapping than
the compositor can, due to having access to additional information (e.g.
soft proofing for printers, or dynamic tone mapping for movies).

Incidentally, even just considering the needs of a display calibration
program, that means that clients also need the (additional) ability to
force the display into a certain mode (i.e. by providing hard-coded
metadata).

> > 5. Implementing a "good" HDR-to-SDR tone mapping operator; and even the
> >    question of whether to use the display's HDR or SDR mode, requires
> >    knowledge of what brightness range your composited buffer contains.
> >    Crucially, I think applications should be allowed to tag their
> >    buffers with the brightest value that they "can" contain. If they
> >    fail to do so, we should assume the highest possible value permitted
> >    by the transfer function specified (e.g. 10,000 nits for PQ). Putting
> >    this metadata into the protocol early would allow us to explore
> >    better tone mapping functions later on.
> >
> > Some final words of advice,
> >
> > 1. The protocol suggestions for color management in Wayland have all
> >    seemed terribly over-engineered compared to the problem they are
> >    trying to solve. I have had some short discussions with Link Mauve on
> >    the topic of how to design a protocol that's as simple as possible
> >    while still fulfilling its purpose, and we started drafting our own
> >    protocol for this, but it's sitting in a WIP state somewhere.
> >
> > 2. I see that Graeme Gill has posted a bit in at least some of these
> >    threads. I recommend listening to his advice as much as possible.

