[FFmpeg-devel] request for feedback on video codec idea

Christophe Gisquet christophe.gisquet at gmail.com
Wed Oct 14 21:12:40 CEST 2015

2015-10-14 20:08 GMT+02:00 Roger Pack <rogerdpack2 at gmail.com>:
> I have become aware of some "fast" compression tools like LZO, LZ4,
> density, etc.  It seems like they all basically compress "the first
> 64KB then the next 64KB" or something like that [1].

It's generally the size of a window or dictionary within which the
decoder can refer back to past strings.
There are of course far better dictionary-based methods; the details
come down to how well you can predict which strings are going to be
useful for a particular part of the image.

Not exactly like blocks, though, since looking back to another image
already reaches farther than many of the window sizes in use.
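To illustrate the window idea (a hypothetical sketch, not how any
particular codec implements it): an LZ77-style matcher only ever
searches the last WINDOW bytes for a repeated string, so anything
older than the window, such as a whole previous frame, is out of
reach.

```python
# Hypothetical LZ77-style sketch, for illustration only: for each
# position, search the last WINDOW bytes for the longest earlier
# match, which a real codec would emit as (offset, length).

WINDOW = 64 * 1024  # 64 KiB window, as in LZ4/LZO-class codecs

def find_match(data: bytes, pos: int, min_len: int = 4):
    """Return (offset, length) of the longest match starting at pos
    whose source lies inside the sliding window, or None."""
    start = max(0, pos - WINDOW)
    best = None
    for cand in range(start, pos):
        length = 0
        while (pos + length < len(data)
               and data[cand + length] == data[pos + length]):
            length += 1
        if length >= min_len and (best is None or length > best[1]):
            best = (pos - cand, length)
    return best
```

E.g. find_match(b"abcdabcd", 4) finds the 4-byte repeat at offset 4;
a repeat more than WINDOW bytes back would simply never be found.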

> My idea is to basically put pixels of the same position, from multiple
> frames, "together" in a stream, then apply normal (fast) compression
> algorithms to the stream.  The hope being that if the pixels are the
> "same" between frames (presumed to be so because of not much dynamic
> content), the compression will be able to detect the similarity and
> compress it well.

That's Zip Motion Blocks Video, see zmbv. There's also an encoder.

Or something equivalent. That images in screen content are largely
static is well known. The same holds for natural content, but it
gets somewhat different handling.
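In spirit it amounts to something like this minimal sketch
(hypothetical; real ZMBV adds block-based motion compensation on top
of the XOR, this keeps only the XOR-and-deflate part, with frames as
raw byte strings):

```python
import zlib

def encode_frame(prev, cur):
    """ZMBV-like sketch (simplified, hypothetical): XOR the current
    frame against the previous one so that unchanged pixels become
    zero bytes, then hand the residual to a generic compressor."""
    if prev is None:
        residual = cur  # keyframe: no temporal prediction
    else:
        residual = bytes(a ^ b for a, b in zip(cur, prev))
    return zlib.compress(residual, level=1)  # fast setting
```

On static screen content the residual is almost entirely zeros,
which deflate compresses down to next to nothing.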

>  And also the egotistical desire to create the
> "fastest video codec in existence" in case the same were useful in
> other situations (i.e. use very little cpu--even huffyuv uses quite a
> bit of cpu) :)

It's also a matter of parallelism and quality of implementation.

Otherwise, huffyuv, lagarith, and a lot of lossless codecs (ffv1?)
are aimed mostly at natural images (where the difference from one
pixel to the next is small, or at least smaller than in screen
content). There's a GSoC project to add inter-frame compression to
FFV1 (I don't know its status).
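That "small difference between neighbouring pixels" is what the
spatial predictors in these codecs exploit: each pixel is predicted
from its left/top/top-left neighbours and only the residual is
entropy-coded. A sketch of the median predictor (the residuals()
helper and its row layout are my own illustrative naming):

```python
def predict(left, top, topleft):
    """Median predictor as used by huffyuv/ffv1-class codecs:
    the median of left, top, and the gradient left + top - topleft."""
    grad = left + top - topleft
    return max(min(left, top), min(max(left, top), grad))

def residuals(row, prev_row):
    """Prediction residuals for one row of 8-bit samples
    (prev_row is None on the first row); out-of-frame
    neighbours are treated as 0."""
    out = []
    for x, cur in enumerate(row):
        left = row[x - 1] if x > 0 else 0
        top = prev_row[x] if prev_row else 0
        topleft = prev_row[x - 1] if prev_row and x > 0 else 0
        out.append((cur - predict(left, top, topleft)) & 0xFF)
    return out
```

On flat or smoothly varying natural images the residuals cluster
near zero, which is exactly what the entropy coder wants; the noisy
high-contrast edges of screen content fit this model less well.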

> Also does anything similar to this already exist?
> (though should I
> create my new codec, it would be open source of course, which is
> already different than many [probably efficient] screen capture codecs
> out there).

Well, practically, I think it's a dead end. I don't know whether
screen content encoders have been updated to handle multithreading
(using slices of the image like ffv1 & co), but that might be a
start, since speed is a concern for you.
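The slice idea, roughly (a toy sketch with zlib standing in for the
real per-slice coder; the band splitting and names are hypothetical):
each slice is self-contained, so slices can be compressed, and later
decoded, independently on separate cores.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_sliced(frame, n_slices=4):
    """Slice-parallel sketch: split the frame into independent
    horizontal bands and compress each band in its own task, the
    way ffv1-style slice threading spreads work across cores."""
    step = (len(frame) + n_slices - 1) // n_slices
    bands = [frame[i:i + step] for i in range(0, len(frame), step)]
    with ThreadPoolExecutor(max_workers=n_slices) as pool:
        return list(pool.map(zlib.compress, bands))
```

Plain threads suffice here because CPython's zlib releases the GIL
while compressing; a real codec would do the same with its own
per-slice entropy coder.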

Now, there's value for yourself in discovering what all this entails.

