[Libav-user] Mapping / converting QTCapture's to FFmpeg's pixel format

salsaman salsaman at gmail.com
Wed Feb 6 04:14:38 CET 2013


On Tue, Feb 5, 2013 at 10:58 PM, Brad O'Hearne
<brado at bighillsoftware.com> wrote:
>
> On Feb 5, 2013, at 2:47 AM, Alex Cohn <alexcohn at netvision.net.il> wrote:
>
> Unfortunately, most video codecs support only a few (usually one) pixel
> formats - both as input for encoding and as output for decoding. When you
> use the ffmpeg command line and specify an incompatible pixel format,
> ffmpeg will perform the format conversion for you. But when you use the
> libavcodec library, you must take care of this conversion yourself.
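As an illustration of that point, here is a rough sketch (not from the original message; it assumes the plain libavcodec C API, where AVCodec advertises the formats it accepts in its pix_fmts list) of checking whether an encoder takes a given pixel format before deciding whether a conversion step is needed:

#include <libavcodec/avcodec.h>

/* Return 1 if the encoder lists fmt among its supported pixel formats. */
static int encoder_supports(const AVCodec *codec, enum AVPixelFormat fmt)
{
    const enum AVPixelFormat *p;

    if (!codec || !codec->pix_fmts)
        return 0;                       /* list unknown: assume unsupported */
    for (p = codec->pix_fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == fmt)
            return 1;
    return 0;
}

/* Usage idea: if (!encoder_supports(codec, capture_fmt)), run the frame
 * through swscale first, as described below. */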
>
> It works (well, fails miserably) as expected when you specify
> AV_PIX_FMT_NV12 (which, as far as I can tell, is the ffmpeg equivalent
> for kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) for a codec that
> does not support this format. You can use swscale to convert the
> images to AV_PIX_FMT_YUV420P, which is supported by the vast majority of
> ffmpeg video encoders.
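For what it's worth, a minimal sketch of that swscale step (my own illustration, not code from this thread; the source pointers/linesizes are placeholders for whatever the capture callback hands you, and the destination is assumed to be allocated separately, e.g. with av_image_alloc()):

#include <stdint.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
#include <libavutil/pixfmt.h>

/* Convert one NV12 frame to YUV420P using libswscale. */
static int nv12_to_yuv420p(const uint8_t *src_data[4], const int src_linesize[4],
                           uint8_t *dst_data[4], int dst_linesize[4],
                           int w, int h)
{
    struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_NV12,
                                            w, h, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    /* dst_data/dst_linesize are assumed to be allocated already, e.g. with
     * av_image_alloc(dst_data, dst_linesize, w, h, AV_PIX_FMT_YUV420P, 32). */
    sws_scale(sws, src_data, src_linesize, 0, h, dst_data, dst_linesize);
    sws_freeContext(sws);
    return 0;
}

In a real capture loop the SwsContext would of course be created once and reused for every frame rather than rebuilt per call.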
>
> It does puzzle me, though, that you experience a crash when you
> configure your capture with kCVPixelFormatType_420YpCbCr8Planar. How
> do you pass it to avpicture_fill()? If I understand correctly,
> kCVPixelFormatType_420YpCbCr8Planar exposes three fields:
> ComponentInfoY, ComponentInfoCb, and ComponentInfoCr. But ffmpeg
> expects a contiguous byte array of Y pixels (w*h), followed by Cb
> (w/2 * h/2), followed by Cr (w/2 * h/2).
>
> Therefore, even for conversion from
> kCVPixelFormatType_420YpCbCr8Planar to AV_PIX_FMT_YUV420P you need
> an extra step: create a byte array of size w*h*3/2 and copy the
> three planes to this array. If the source has line padding, you should
> take care of this, too.
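To make that concrete, a rough sketch of such a repacking step (mine, not from the thread; the source pointers and row strides stand in for whatever the planar CoreVideo buffer reports, e.g. via CVPixelBufferGetBaseAddressOfPlane() and CVPixelBufferGetBytesPerRowOfPlane()):

#include <string.h>
#include <stdint.h>

/* Copy three separately-strided 4:2:0 planes into the single contiguous
 * Y/Cb/Cr buffer of w*h*3/2 bytes that AV_PIX_FMT_YUV420P /
 * avpicture_fill() expects. */
static void pack_planar_420(uint8_t *dst,
                            const uint8_t *src_y,  int stride_y,
                            const uint8_t *src_cb, int stride_cb,
                            const uint8_t *src_cr, int stride_cr,
                            int w, int h)
{
    int row;

    for (row = 0; row < h; row++)            /* luma: w bytes per row   */
        memcpy(dst + row * w, src_y + row * stride_y, w);
    dst += w * h;

    for (row = 0; row < h / 2; row++)        /* Cb: w/2 bytes per row   */
        memcpy(dst + row * (w / 2), src_cb + row * stride_cb, w / 2);
    dst += (w / 2) * (h / 2);

    for (row = 0; row < h / 2; row++)        /* Cr: w/2 bytes per row   */
        memcpy(dst + row * (w / 2), src_cr + row * stride_cr, w / 2);
}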
>
>
> Thanks for the response. I am in the process of trying to work out, on the
> cocoa-dev and QuickTime API lists, what output formats I can actually get
> cleanly from QTKit. I suspect all was not actually well with the decompressed
> kCVPixelFormatType_420YpCbCr8Planar frame configuration -- the other pixel
> formats I tried were, I believe, simply wrong, but my suspicion is that if I
> can get the aforementioned pixel format output properly (meaning that
> QuickTime performs any necessary conversion), it will work once sent to
> FFmpeg. There is no guarantee that will work, however.
>
> In the meantime, I decided to output the native device format descriptions
> of the two webcams I have, the first being my internal MacBook Pro camera
> and the second an external USB webcam. Both of them support the same output
> format:
>
> Component Y'CbCr 8-bit 4:2:2 ordered Y'0 Cb Y'1 Cr, 160 × 120
>
> Reading some of the QuickTime documentation, it isn't clear whether the
> camera completely dictates the output format at the point where my program
> is called back and handed the decompressed frame; in other words, there is
> some verbiage to suggest that QuickTime may either output this format by
> default regardless of the camera, or that compatibility with QuickTime may
> demand this format be available. For my question at the moment that is
> irrelevant, as this is now a concrete point of reference from which I can
> work.
>
> So -- to the FFmpeg / video format gurus out there, can you give some
> guidance on how to convert a frame in the format mentioned above to
> AV_PIX_FMT_YUV420P?
>
> Thanks!
>
> Cheers,
>
> Brad
>

Take a look here:

http://svn.code.sf.net/p/lives/code/trunk/src/colourspace.c

(search for convert_yuyv_to_yuv420_frame).
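For reference, the idea in that function boils down to something like the following simplified sketch (mine, not the LiVES code itself; it assumes even width and height, no line padding, and simply takes the chroma of every other row rather than averaging adjacent rows):

#include <stdint.h>

/* Packed 4:2:2 "Y0 Cb Y1 Cr" (YUYV, i.e. AV_PIX_FMT_YUYV422) to planar
 * 4:2:0: copy all luma samples, keep chroma from even rows only. */
static void yuyv_to_yuv420p(const uint8_t *src, uint8_t *dst_y,
                            uint8_t *dst_cb, uint8_t *dst_cr,
                            int w, int h)
{
    int x, y;

    for (y = 0; y < h; y++) {
        const uint8_t *line = src + y * w * 2;   /* 2 bytes per pixel      */
        for (x = 0; x < w; x += 2) {
            *dst_y++ = line[x * 2];              /* Y0                     */
            *dst_y++ = line[x * 2 + 2];          /* Y1                     */
            if ((y & 1) == 0) {                  /* chroma from even rows  */
                *dst_cb++ = line[x * 2 + 1];     /* Cb                     */
                *dst_cr++ = line[x * 2 + 3];     /* Cr                     */
            }
        }
    }
}

The same conversion can also be done through libswscale by treating the capture buffer as AV_PIX_FMT_YUYV422 and scaling to AV_PIX_FMT_YUV420P, along the lines of the NV12 example earlier in the thread.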

Salsaman.

(apologies for the earlier top-posting; it's the default in Gmail...)

http://lives.sourceforge.net
https://www.ohloh.net/accounts/salsaman

