[FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder

u-9iep at aetey.se u-9iep at aetey.se
Fri Feb 3 18:36:48 EET 2017

Hello Ronald,

On Fri, Feb 03, 2017 at 08:52:53AM -0500, Ronald S. Bultje wrote:
> > I thought about generating the bodies of the functions from something
> > like a template but it did not feel like this would make the code more
> > understandable aka maintainable. So I wonder if there is any point in
> > doing this, given the same binary result.
> wm4 has tried to make this point several times, it's about maintainability.

I really do understand.

> Let me explain, otherwise we'll keep going back and forth.

Thanks for the constructive discussion, hopefully I can make my point
clear here, and help you see the different perspective:

> Let's assume you have a function that does a, b and c, where b depends on
> the pix fmt. You're currently writing it as such:
> function_565() {
> a
> b_565
> c
> }
> function_24() {
> a
> b_24
> c
> }
> function_32() {
> a
> b_32
> c
> }

Rather (or even more complicated):

function_565() {
a; x; b; y; c; z; d;
}

function_24() {
a; p; b; q; c; r; d;
}

function_32() {
a; i; b; j; c; k; d;
}

Now, a small change in any of "a", "b", "c" would not necessarily have
the same consequence for all the functions, so templating would have
made it _harder_ to make safe changes, if we used something like

TEMPLATE(){ a; X; b; Y; c; Z; d; }

according to your suggestion.

Do you follow me? The "common" code happens to be the same _now_ but
may have to differ after a change in any of a,b,c,d,x,y,z,p,q,r,i,j,k.

It would also be harder to follow the code flow.

> It should be pretty obvious that a and c are triplicated in this example.

Only in this static situation, which is the opposite of maintainability.

> Now compare it with this:


> It might look larger, but that's merely because we're not writing out a and

It is not the size but the readability and the safety of changes.

> Conclusion: better to maintain, identical performance. Only advantages, no
> disadvantages. Should be easy to accept, right?

Hope you see from the above: this is exactly why the code is structured
the way it is.

> So, what I don't
> understand then, is why below you're claiming that get_format() doesn't do
> this. this is exactly what get_format() does. Why do you believe
> get_format() isn't capable of helping you accomplish this?

get_format() apparently returns to the caller a suitable format from
the supplied list. I have read the documentation as carefully as I
can, and my interpretation is that it is the application that is to
define the function, while it is the ffmpeg framework that supplies
the list of formats supported by the codec. I would appreciate it if
somebody corrected me and/or improved the documentation.

But actually this does not matter in this particular situation.
None of the parties (the decoder, the framework, the application)
has all the knowledge necessary to make the optimal choice.
It is the human operator/administrator who may know. The choice
depends, among other things, on how fast swscale is on the particular
hardware (use it or not?), how sensitive the content is to color
depth, and so on.
How can get_format() help answer these questions?

So get_format() is not a solution, no matter how good or misleading
its documentation is.

> > I did my best to look for a better way but it does not seem to be existing.

> Look into private options, for one. But really, get_format() solves this. I

We must have been thinking about different problems. The problem of
choosing the optimal format to decode to is not solvable with
get_format() (unless get_format() asks the human, or e.g. checks an
environment variable or some other out-of-band channel).

> can elaborate more if you really want me to, as above, but I feel you
> haven't really looked into it. I know this feeling, sometimes you've got
> something that works for you and you just want to commit it and be done
> with it.

:) I know this too, but I am quite confident in having done my homework.

You try to explain your point and educate "the newcomer", which is very
helpful and appreciated.

Hope you are also prepared to see cases where your assumptions
do not hold.

> But that's not going to happen. The env variable will never be committed.
> Never. I guarantee it with 100% certainty. If you'd like this general

Then it is a pity for ffmpeg, rejecting a useful feature without having
any comparable functionality.

Why not reserve a namespace in envvars? This would (of course) be much
more general and useful than sticking with the name I picked. Some
other codec could then easily and coherently take advantage of the
same method as well.

There are possibly other (not necessarily pixel format related) scenarios
where corresponding namespaced envvars would help as out-of-band channels.

> feature to be picked up in the main codebase, you'll need to change at the
> very, very least how it is exposed as a selectable option. Env opts are not
> acceptable, they have never been and never will. A private codec option
> (av_opt_*()) might be acceptable depending on how special/non-generic it
> is, but I'm still convinced that get_format() can help also.

Apparently, no one has ever tried to make regular use of the
advantages of letting the decoder convert pixel formats internally,
because it hardly makes sense outside of VQ codecs, which are a
minority.

Regrettably, as a result ffmpeg lacks the corresponding framework,
and this strong optimization remains generally unknown.

> I recommend that you play around with how get_format() is used in a few
> places, or you try implementing an application that supports the
> get_format() callback and use that to recommend a format to the decoder
> here, then pick that up in the decoder to select the actual pix_fmt, and
> use that for decoding. If you need help with that, we're obviously happy to
> help. But we know this code, we know it can do this, don't tell us it's not
> possible and an env variable is the only way. That's crazy.

You are thinking of a different problem than I do.
You know the code. I know the problem and have researched the solutions.

> Then make the recommended changes to prevent code duplication, and there's
> an actual chance the patch will be picked up in the main codebase. That
> sounds pretty straightforward, doesn't it?

Then I think the patch is ready for inclusion, modulo commenting out
the info messages. The environment variable support is off by default,
and the maintainability of the highly similar code pieces is safe.

(No one complained about the lack of documentation! Amazing.)

> (Aside, I agree with other reviewers that colorspace conversion doesn't
> belong in a decoder; I'd just remove yuv420p in the codebook conversion,
> the native format seems to be RGB, so yuv420p output makes no sense at all.

Again, you do not see the problem I am solving:

If a videobuffer accepts yuv420p and speed is critical, then I want
to be able to make it fast. (When you say "native", what do you mean?
Internally Cinepak uses something yuv420-like. For the videobuffer,
"native" varies from one device to another.)

So how can I get the same speed if we removed yuv420p?

> It appears you're primarily concerned about selecting between rgb565, 24
> and 32 anyway. If you really need yuv, we need some way to import the
> actual coefficients from other places, they don't belong in the decoder.)

Such an import can easily be done later, if it would indeed offer an
improvement and no regression. Apparently there is no framework in
place for such things, and in the end, _what_ would we gain?

Now I have made an honest effort to explain that I know what I am doing
and why it makes sense.

If the decision makers here do not want the contribution, it is not my fault.

Best regards and good luck,
