[FFmpeg-devel] Add mpeg user_data CEA-608/708 extraction.
Ronald S. Bultje
Sun Apr 11 21:49:20 CEST 2010
On Sun, Apr 11, 2010 at 3:17 PM, Daniel Kristjansson
<danielk at cuymedia.net> wrote:
> On Sun, 2010-04-11 at 14:20 -0400, Ronald S. Bultje wrote:
>> On Sun, Apr 11, 2010 at 2:15 PM, Daniel Kristjansson
>> <danielk at cuymedia.net> wrote:
>> > This patch pulls CEA-608 and 708 data from the user data and puts it
>> > in an atsc_cc_buf buffer attached to each frame. The data passed in
>> > atsc_cc_buf consists of CEA-708-D caption packets, covering data carried
>> > in both the ATSC user data and the SCTE-20 user data encodings. An
>> > example of how to decode this for display is present in MythTV,
>> > CCExtractor, etc.
>> Shouldn't that go into a separate captions (=text) AVStream?
> CEA-708/608 are not really text streams any more than MPEG-2 layer 2
> audio is a text stream. CEA-708 especially: it specifies windows, fonts,
> glyphs, colors, and precise anchoring and flow alignment for the
> specific frame it accompanies. It could be put in a separate specialized
> stream, but then the player would need additional layers of buffering
> to remux that stream with the video stream, remarrying each packet
> with the video frame it belongs to so it can be parsed correctly.
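[For context, the extraction under discussion works on the ATSC A/53 user_data payload that MPEG-2 encoders attach to coded pictures. A minimal sketch, assuming the start code has already been stripped so the buffer begins at the "GA94" identifier; the helper name `find_atsc_cc` is hypothetical, not the patch's actual function:]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper, not the patch's code: given an MPEG-2 user_data
 * payload (start code already stripped), locate the ATSC A/53 cc_data
 * block and return a pointer to the cc triplets.  Each 3-byte triplet
 * carries a marker/cc_valid/cc_type byte plus two data bytes: cc_type
 * 0/1 holds CEA-608 byte pairs, cc_type 2/3 holds CEA-708 DTVCC
 * caption channel packet bytes. */
static const uint8_t *find_atsc_cc(const uint8_t *buf, size_t size,
                                   int *cc_count)
{
    if (size < 7 ||
        buf[0] != 'G' || buf[1] != 'A' ||      /* user_data_identifier */
        buf[2] != '9' || buf[3] != '4' ||      /* "GA94"               */
        buf[4] != 0x03)                        /* user_data_type: cc   */
        return NULL;
    if (!(buf[5] & 0x40))                      /* process_cc_data_flag */
        return NULL;
    *cc_count = buf[5] & 0x1f;                 /* 3-byte triplet count */
    if (size < 7 + (size_t)(*cc_count) * 3)
        return NULL;
    return buf + 7;                            /* skip em_data byte    */
}
```

[SCTE-20 picture user data uses a different bit-level layout, which is presumably why the patch normalizes both encodings into one CEA-708-D packet buffer per frame.]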
But is it part of a generic video frame then? I mean, you appear to
put this "thing" in AVPicture, suggesting that it can be part of any
picture, be that Sorenson-3, MPEG-4, Theora or raw YUV video.
My suggestion, again, would be to put it in its own stream, so that
packets have timestamps and the stream has a media type ("CEA-708"),
which would be a subtitle stream with fonts, glyphs, windows etc. The
MPEG-2 demuxer would need to split it out, and the MPEG-2 muxer would
re-join it. Your application wouldn't know any better than if it were
a Matroska stream with SSA subtitles.
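[A sketch of what that split might look like on the demuxer side, using hypothetical structures rather than libavformat's actual API: the caption bytes leave the frame and become an ordinary timestamped packet, so remarrying them to a picture is a PTS match rather than a physical attachment.]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical types, not libavformat's API: a demuxed caption packet
 * carrying the CEA-708-D bytes for exactly one coded picture. */
typedef struct CcPacket {
    int64_t        pts;   /* copied from the carrying video frame */
    const uint8_t *data;  /* CEA-708-D caption packet bytes       */
    size_t         size;
} CcPacket;

/* Split the per-frame caption buffer into its own stream's packet.
 * The demuxer stamps it with the video frame's PTS; a player (or the
 * MPEG-2 muxer on the way back) re-joins caption and picture by
 * matching timestamps. */
static CcPacket split_cc_packet(int64_t frame_pts,
                                const uint8_t *cc_buf, size_t cc_size)
{
    CcPacket pkt = { frame_pts, cc_buf, cc_size };
    return pkt;
}
```

[The cost Daniel points out lands here: a player consuming two streams needs its own buffering to pair each caption packet back with its frame, which the per-frame attachment avoids.]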