[FFmpeg-user] Another odd colourspace issue
krueger at lesspain.de
Wed Jul 11 11:30:26 CEST 2012
On Wed, Jul 11, 2012 at 10:37 AM, Tim Nicholson <nichot20 at yahoo.com> wrote:
> On 11/07/12 05:27, Robert Krüger wrote:
>> On Tue, Jul 10, 2012 at 7:44 PM, Phil Rhodes <phil_rhodes at rocketmail.com> wrote:
>>>> I believe that the colour space is generally decided solely on the frame size
>>> I'm pretty sure you're right; at least, I've never been able to exactly
>>> figure it out. Coders?
>> I guess his assumption was concerning how FCP and Kdenlive do it. I
>> could not find a line of code in ffmpeg that sets a stream's color
>> space based on frame size but I might have missed something.
> Right on both counts, I couldn't find anything either but wondered if
> there was something in the container (mov) I had also missed. The real
> problem is lack of a readily accessible "waveform monitor" for files
> that can be relied upon. Both FCP and Kdenlive have them, but clearly
> have limitations.
> Since Kdenlive is built on libav* I was somewhat surprised that it
> didn't stick with ffmpeg's defaults.
>> In general, there are only three options:
>> 1) the information is in the container for containers that support
>> this type of information (e.g. MOV or MXF)
>> 2) the information is in the stream for bitstream formats that support
>> this kind of information (e.g. MPEG-2 or H.264)
>> 3) the information is not in the file so AFAIK the application has no
>> choice but to guess
>> in case 3) frame size is probably the best heuristic you have if you
>> don't make the user specify it, which some applications like 5D2RGB do.
>>> It is of course cack.[...]
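For what it's worth, that frame-size heuristic could be sketched like this (a hypothetical helper, not FFmpeg code; the 1280x720 cutoff is just the common rule of thumb, not anything the libraries mandate):

```c
/* Hypothetical helper illustrating the frame-size guess discussed above:
 * when a file carries no colour-space metadata, assume BT.709 for
 * HD-sized frames and BT.601 for SD. Names are mine, not FFmpeg's. */
enum guessed_colorspace { CS_BT601, CS_BT709 };

static enum guessed_colorspace guess_colorspace(int width, int height)
{
    /* common rule of thumb: anything at least 1280x720 is treated as HD */
    if (width >= 1280 || height >= 720)
        return CS_BT709;
    return CS_BT601;
}
```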
>> But back to the issue. AFAICS most things are basically in place as
>> far as API is concerned but may not be implemented for all
>> formats/codecs for which color information might be available, e.g.
>> for MPEG-2 and H.264 color information is read from the bitstream if
>> it is there. It is however not used when setting up libswscale for
>> scaling and pixel format conversions AFAICS, which might require some
>> API changes because I cannot see how the scale filter (where the
>> actual pixel format conversions happen) can currently access the color
>> information to set up swscale correctly but that should probably go to
>> the dev list after some more research.
> This is pretty much what I had reckoned.
> Swscale seems to have the basic capability, but there seems to be no
> way, either from stream/container interrogation, or option setting, to
> invoke it. However clearly third party apps that use the libs (like
> Kdenlive) are making use of that capability.
yes, they most likely use
* @param inv_table the yuv2rgb coefficients, normally ff_yuv2rgb_coeffs[x]
* @return -1 if not supported
int sws_setColorspaceDetails(struct SwsContext *c, const int inv_table[4],
                             int srcRange, const int table[4], int dstRange,
                             int brightness, int contrast, int saturation);
before the actual scale is invoked. As Mark noted, in ffmpeg (more
precisely, in the scale filter) the coefficients are hardcoded.
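For reference, an application selecting the coefficients explicitly might do something along these lines (a sketch against the libswscale API; `set_matrix` is my own name, not an FFmpeg function):

```c
#include <libswscale/swscale.h>

/* Sketch (not FFmpeg code): pick the YUV<->RGB coefficient table
 * explicitly instead of relying on swscale's hardcoded default. */
static int set_matrix(struct SwsContext *sws, int use_bt709,
                      int src_full_range, int dst_full_range)
{
    /* sws_getCoefficients() returns the 4-entry table for a SWS_CS_*
     * constant; brightness/contrast/saturation are 16.16 fixed point,
     * and these are the neutral defaults. */
    const int *coeffs = sws_getCoefficients(use_bt709 ? SWS_CS_ITU709
                                                      : SWS_CS_ITU601);
    return sws_setColorspaceDetails(sws, coeffs, src_full_range,
                                    coeffs, dst_full_range,
                                    0, 1 << 16, 1 << 16);
}
```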
> As a first stab it would be good to be able to specify it the way Mark
> suggests. I am not sufficiently familiar with ffmpeg internals to see
> what that would take, but am having a poke around, in between other things.
Yes, the thing is that IMHO it would have to be done in the scale
filter. One option would be to set it manually there. That should be
rather easy to add (just another filter parameter that is then used to
select the coefficients for the sws_setColorspaceDetails invocation).
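Such a filter parameter would essentially boil down to something like this (purely illustrative; the option strings are hypothetical, only the numeric ids mirror libswscale's SWS_CS_* constants):

```c
#include <string.h>

/* Sketch of the parsing a hypothetical scale-filter colour-space option
 * might do (option values are made up, not an existing ffmpeg option):
 * map a user-supplied string to a coefficient-set id. The ids match
 * libswscale's SWS_CS_ITU709 (1) and SWS_CS_ITU601 (5); -1 means
 * "keep the current hardcoded default". */
static int parse_colorspace_opt(const char *val)
{
    if (!strcmp(val, "bt709"))
        return 1;  /* SWS_CS_ITU709 */
    if (!strcmp(val, "bt601"))
        return 5;  /* SWS_CS_ITU601 */
    return -1;     /* unknown: fall back to the default */
}
```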
However, to be able to set this to sensible defaults from values
recovered by parsing container info or bitstreams (i.e. to use the
already existing and in many cases populated fields
AVCodecContext.color_primaries|color_trc|colorspace), IIRC the API
would have to be extended, as I see no way to access that information
from a filter: neither AVFilterLink nor AVFilterBufferRefVideoProps
contains it. Furthermore, ffmpeg (the program) needs the information
further down the chain so it can correctly write it into the bitstream
or container format (by setting the respective fields of the encoded
stream's AVCodecContext), so that the file produced also carries that
information and doesn't leave other applications guessing.
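Propagating those fields would essentially be (a sketch; it assumes decoder and encoder contexts are already set up, and the field names are the real AVCodecContext ones):

```c
#include <libavcodec/avcodec.h>

/* Sketch: carry the colour description from the decoded stream over to
 * the encoder so the output file gets tagged as well. */
static void copy_color_info(const AVCodecContext *dec, AVCodecContext *enc)
{
    enc->color_primaries = dec->color_primaries;
    enc->color_trc       = dec->color_trc;
    enc->colorspace      = dec->colorspace;
    enc->color_range     = dec->color_range;
}
```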
It would be nice to have that confirmed or corrected by one of the
developers as I am by no means an expert as far as ffmpeg development
is concerned. As I wrote earlier, this is probably the point where
this has to be taken to ffmpeg-devel with a specific request.