[FFmpeg-user] Does converting to yuv444p by default make sense?
jiachielee at live.com
Thu Aug 1 13:00:27 CEST 2013
At first, reading the PDF written by Poynton seems to lead us to conclude
that color space conversion alone, before any chroma subsampling is done,
would reduce the number of possible colors, thus reducing color information.
If that were true, it would be what we call “lossy encoding”, i.e. an
encoding process where the original information is irrecoverable.
Hm… Seems like I’ve given out a lot of information without arranging it.
Let me try to make it simpler.
I first tried to define what YCbCr really means, and what “absolute color
space” and “relative color space” really mean.
As I said before, YCbCr alone cannot define colors; it needs a well-defined
mapping to an absolute color space.
The question then naturally becomes: how does such a mapping work? Is it
one-to-one or many-to-one?
That’s why I specifically chose a conversion matrix and derived its inverse:
to show that the relationship is truly one-to-one, at least mathematically.
However, a mathematically one-to-one function does not guarantee that the
actual encoding process is lossless.
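The one-to-one claim is easy to check numerically. Here is a minimal sketch
in Python, assuming full-range BT.601 coefficients (the post does not say
which matrix was actually used; the argument holds for any invertible one):

```python
def rgb_to_ycbcr(r, g, b):
    # Full-range BT.601 coefficients (an assumption; the post names no matrix)
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Inverse of the affine map above, derived analytically
    r = y + 1.402    * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772    * cb
    return r, g, b

rgb = (0.25, 0.5, 0.75)                       # an arbitrary RGB triple in [0, 1]
back = ycbcr_to_rgb(*rgb_to_ycbcr(*rgb))      # forward, then inverse
print(all(abs(a - b) < 1e-6 for a, b in zip(rgb, back)))  # True: round trip recovers RGB
```

In exact (or sufficiently precise floating-point) arithmetic the round trip
recovers the input, which is all that “one-to-one” promises — nothing more.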
As I said before, it’s very common for studios or producers to use a lossy
encoding scheme; in that case, using YCbCr 4:4:4 would have no effect on
preserving the color information. To put it simply, once the bits for color
information have been reduced, there are of course fewer possible colors.
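To see why subsampled chroma is irrecoverable, consider a toy sketch of
4:2:0-style subsampling on one row of hypothetical 8-bit Cb samples (the
values are made up for illustration): average each pair, then duplicate the
average back on decode. The map is many-to-one, so no inverse exists.

```python
cb_row = [100, 104, 98, 120]   # hypothetical 8-bit Cb samples

# Subsample: keep one averaged chroma value per pair of pixels
subsampled = [(cb_row[i] + cb_row[i + 1]) // 2 for i in range(0, len(cb_row), 2)]

# Upsample on decode: duplicate each stored value back to two pixels
upsampled = [v for v in subsampled for _ in range(2)]

print(subsampled)  # [102, 109]
print(upsampled)   # [102, 102, 109, 109] — not the original row
```

Many different original rows collapse to the same subsampled row, which is
exactly the bit reduction described above.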
For a computer or software to read a YCbCr signal correctly, it has to be
decoded to an absolute color space, of which sRGB, co-developed by Microsoft
and HP, is the most common.
Due to the widespread popularity of sRGB, the color profile or color space
assumed by lossless image files is almost always sRGB, though software is
able to convert between absolute color spaces. But YCbCr is not an absolute
color space; it’s just a question of rearranging the information and doing
the right mapping.
In practice, the encoding process is very complex, often with noise,
rounding errors, and other issues. That’s why codec design is important.
Speaking of lossless video creation using the H.264 High 4:4:4 Predictive
Profile, how do you think such a video is created?
It’s created from individual (presumably lossless) images encoded in an
absolute color space (presumably sRGB). In fact, you can do that with ffmpeg.
The simple command (written for a Windows batch file, hence the doubled %%)
is something like this, with out.mkv standing in for whatever output file
you choose:
ffmpeg -r "24" -i "frames\f_%%1d.png" -vcodec "libx264" -crf "0" out.mkv
Here the lossless H.264 video is created from individual frames, which are
PNG files named “f_1.png, f_2.png, …”.
Of course, for this operation to work, the image files have to share the
same resolution and bit depth.
If color space conversion somehow caused the loss of color information,
such an encoding process could not be called “lossless”.
Besides, the affine map from which I derived the inverse affine map does
not, mathematically, show any loss of information. Mathematically, when
rounding occurs, or when the function itself is not one-to-one, it’s
generally not possible to find an inverse function; that’s what we call an
irreversible conversion. That’s why trigonometric functions and the
exponential function, strictly speaking, do not have inverse functions when
defined over the complex numbers: there is simply no way to know what the
original input was. One way to make such an “inverse” work is to simply
choose one of the many possibilities; again, that choice does not
necessarily equal the original input. That’s why we specify a restricted
range of possible values for such an “inverse” function, instead of the
whole field of complex numbers.
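The rounding case is the one that matters for codecs: quantizing a value to
8 bits is a many-to-one map, so distinct inputs collide and no inverse can
tell them apart. A minimal sketch (the sample values are arbitrary):

```python
def quantize(y):
    # Map a luma value in [0, 1] to an 8-bit integer — a many-to-one step
    return round(y * 255)

a, b = 0.500, 0.501            # two distinct luma values
print(quantize(a), quantize(b))  # both quantize to 128 — the difference is gone
```

This is why a mathematically invertible matrix can still sit inside a lossy
pipeline: the loss comes from the rounding around it, not from the map itself.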
Based on a well-defined mapping, the color information in your videos,
despite being encoded in YCbCr, is defined by the sRGB color space on your
Windows PC. Keep that in mind. Although re-encoding from sRGB to YCbCr is
commonly called “color space conversion” or “pixel format conversion”, it’s
entirely different from “absolute color space conversion”, where the colors
themselves are defined differently. For example, in Adobe RGB the set of
possible colors differs from what sRGB defines; if a “color space
conversion” occurs between them, there would of course be a noticeable
difference in the “perceptual colors” we see after the encoding is done.
But YCbCr does not define “colors”. It’s up to the codec implementation
whether its design and mapping work flawlessly. It’s also very common to
call sRGB “4:4:4”, and that is for a reason. If the mathematical operation
being used is buggy and the codec design is poor, there is no denying that
re-encoding the information would lead to information loss.
View this message in context: http://ffmpeg-users.933282.n4.nabble.com/Does-converting-to-yuv444p-by-default-make-sense-tp4660219p4660382.html
Sent from the FFmpeg-users mailing list archive at Nabble.com.