[FFmpeg-soc] BFI

Mike Melanson mike at multimedia.cx
Sun Mar 23 20:36:50 CET 2008


Sisir Koppaka wrote:
> I have finished coding the BFI subsystem by now; the problem is that it's
> not playing anything at runtime (it just hangs). I'm not sure where exactly
> it's hanging; I'm going to investigate that later today. I assume this is
> because of my confusion with some of the functions, but the structure and
> most of the code is there. Once I'm clear about the unclear parts, I'll
> send over the patch.
> 
> 1) Right now, the BFI specs in the wiki are not clear about the header of
> the chunks. Using hexedit, the header appears to always be IVAS, but I
> haven't relied on that; instead I coded a version of the decompression
> algorithm within the demuxer to count the number of pixels, and once the
> required number of pixels is reached, it locates the next chunk and
> proceeds again.

Good idea. Indeed, it always seems to be IVAS.
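
If you ever want to double-check a chunk boundary against that header, a
plain byte scan is enough (just a sketch, independent of the demuxer API):

    #include <string.h>
    #include <stdint.h>

    /* Return the offset of the next "IVAS" tag in buf, or -1 if absent. */
    static int find_ivas(const uint8_t *buf, int size)
    {
        int i;
        for (i = 0; i + 4 <= size; i++)
            if (!memcmp(buf + i, "IVAS", 4))
                return i;
        return -1;
    }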

> 2) Also, in the decoder, when we send the uncompressed bytes to the
> output, do we have to keep track of each byte's actual position in the
> frame, or will FFmpeg decide automatically that after the last pixel of the
> first line is completed, decoding moves on to the first pixel of the second
> line, and so on?

I think you're talking about frame width vs. stride here. Width !=
stride. Width is the width of a decoded video frame. Stride is the width
of the video buffer that you are outputting to. This is the linesize[]
array in the AVFrame.
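
Roughly like this, inside your decode function (just a sketch;
decoded_pixels is a placeholder for whatever buffer holds your decompressed
palette indices):

    uint8_t *dst       = frame->data[0];   /* top-left of the output buffer */
    const uint8_t *src = decoded_pixels;   /* tightly packed, width*height  */
    int y;

    for (y = 0; y < avctx->height; y++) {
        memcpy(dst, src, avctx->width);    /* copy exactly 'width' pixels   */
        src += avctx->width;               /* source lines are width apart  */
        dst += frame->linesize[0];         /* output lines are stride apart */
    }

The point is that the two pointers advance by different amounts: the output
buffer may be padded, so linesize[0] can be larger than the width.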

> 3) I'm setting my palette in the demuxer, but some of the other codecs
> I've seen set theirs in the decoder. The specs of some of the codecs I've
> checked say that the palette may be put anywhere, and there may even be a
> palette for every frame. The BFI specs seem to imply that there's only one
> palette for the whole file, so I'm setting it in the demuxer instead of in
> the decoder. Please clarify whether this is the right way to proceed.

Transport the palette from the demuxer -> decoder and then set the
palette for the video frame in the decoder.
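
The decoder side boils down to something like this, assuming you output
PIX_FMT_PAL8 and have built a 256-entry array of packed 0x00RRGGBB values
from whatever the demuxer hands over (how you transport it, e.g. through
extradata, is up to you):

    /* data[1] of a PAL8 frame holds the 1024-byte (256 * 4) palette */
    memcpy(frame->data[1], pal, 256 * 4);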

> 4) I'm in complete confusion about bytestream_get_buffer (and related),
> memcpy (and related, like memset), and reget_buffer (and related
> functions), mainly about their usage and the form of the parameters that
> must be passed (like dst or &dst, where dst is already a pointer).

Not sure about this question. All I can recommend is following the model
laid out in other decoders. I would recommend libavcodec/smacker.c since
(I think) it is probably doing things as expected.
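
To illustrate the parameter-passing part (just a sketch along the lines of
what smacker.c and similar decoders do; s->frame, buf, dst and len are
placeholders):

    /* reget_buffer() has to modify the AVFrame itself, so it takes its
     * address: */
    if (avctx->reget_buffer(avctx, &s->frame) < 0) {
        av_log(avctx, AV_LOG_ERROR, "reget_buffer() failed\n");
        return -1;
    }

    /* bytestream_get_buffer() advances the input pointer, so that pointer
     * is passed by address; the destination is a plain pointer: */
    bytestream_get_buffer(&buf, dst, len);

    /* memcpy() just copies bytes and moves nothing, so both arguments are
     * plain pointers: */
    memcpy(dst, src, len);

Pass &x whenever the function has to change x for the caller; pass x when it
only needs to read from or write through it.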

> Also, Mike said previously on this list:
> Before you pass the RGB components to FFmpeg, you will need to scale
> them to be 8-bit components.
> 
> But where and why do we do that? In the decoder, as of now, I'm just
> passing the decompressed bytes one by one (assuming each one stands for a
> pixel); I'm not sure how to interpret and implement Mike's statement in
> this context...

I need to do a blog post on this matter, with big pictures.

When you are done decompressing a video frame, you will have an array of
bytes. This will double as an array of 8-bit palette indices. Each index
references into the 256-entry palette table. Each palette table entry
has 3 elements: 6-bit R, 6-bit G, and 6-bit B. If you pass those 6-bit
components along, the final image will be too dark. That's why those
6-bit components need to be scaled to 8-bit components.
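
A common way to do the scaling (just a sketch; the exact rounding is a
matter of taste):

    #include <stdint.h>

    /* Expand a 6-bit VGA palette component to 8 bits.  Shifting left by two
     * and replicating the top bits maps 0x3F to 0xFF, so full intensity
     * stays full intensity. */
    static uint8_t scale_6to8(uint8_t v)
    {
        return (v << 2) | (v >> 4);
    }

Do this once per component when you build the 256-entry table that goes out
with the frame.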

-- 
	-Mike Melanson


