[FFmpeg-devel] imagepipe filter (was [PATCH] avfilter: add dynoverlay filter.)

Paul B Mahol onemda at gmail.com
Wed Sep 28 00:20:14 EEST 2016

On 9/27/16, Priebe, Jason <jpriebe at cbcnewmedia.com> wrote:
> On 9/23/16, Paul B Mahol <onemda at gmail.com> wrote:
>> The named pipe approach would implement a video source that reads images
>> from a named pipe. It would read from the pipe until it decodes a single
>> frame, and then would use that frame as input to the next filter, for
>> example the overlay filter.
>> If it encounters EOF on the named pipe it would not abort, but would
>> instead keep sending the last frame it got, for example a completely
>> transparent frame.
>> If it suddenly gets more data from the pipe, it would update its internal
>> frame and output it as input to the next filter in the chain.
>> So the command would look like this:
>> imagepipe=named_pipe:rate=30[o],[0:v][o]overlay=x=0:y=0 ...
>> And then, in another terminal, you would use commands like this:
>> cat random_image.format > named_pipe
> Paul:  this is a really good idea (when you first mentioned pipes, I
> thought you meant to use pipes as a standard ffmpeg input, which doesn't
> really work in the way I'm trying to make it work here).  But a purpose-
> built filter that reads from a pipe is another story.
> I built an imagepipe filter that I'd like to submit as a patch, but
> I have some questions before I do that:
> - it outputs YUVA420P.  Does it need to output other pixel formats to
>   be accepted?

Not necessary, as long as adding other formats later is easy.
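For reference, single-format negotiation in a filter usually follows the pattern below; this is only a sketch using the standard libavfilter internal helpers, and extra pixel formats can simply be appended to the list later:

```c
/* Sketch of the filter's format negotiation: advertise YUVA420P only for
 * now. Assumes the usual libavfilter internal helpers; more entries can
 * be added to pix_fmts[] later without touching the rest of the filter. */
static const enum AVPixelFormat pix_fmts[] = {
    AV_PIX_FMT_YUVA420P,
    AV_PIX_FMT_NONE
};

static int query_formats(AVFilterContext *ctx)
{
    return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
}
```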

> - it uses a slightly inelegant technique to read the images; it writes
>   the image data to a temp file so it can call ff_load_image().  I didn't
>   see a function that can load an image directly from an in-memory
>   byte array.

AVFrame stores all the decoded data from an image. Using temp files is ridiculous.
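One way to avoid the temp file, sketched under the assumption that the filter links against libavformat/libavcodec: wrap the in-memory bytes in a custom AVIOContext read callback and let the normal demux/decode path produce the AVFrame. Error handling and cleanup are trimmed for brevity:

```c
/* Sketch: decode one image straight from an in-memory byte array into an
 * AVFrame via a custom AVIOContext, so no temp file is needed.
 * Error handling and cleanup are deliberately trimmed. */
#include <string.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

struct mem_reader { const uint8_t *data; size_t size, pos; };

/* AVIOContext read callback: serve bytes from the in-memory buffer. */
static int mem_read(void *opaque, uint8_t *buf, int buf_size)
{
    struct mem_reader *r = opaque;
    size_t left = r->size - r->pos;
    if (!left)
        return AVERROR_EOF;
    if ((size_t)buf_size > left)
        buf_size = left;
    memcpy(buf, r->data + r->pos, buf_size);
    r->pos += buf_size;
    return buf_size;
}

static AVFrame *decode_image_from_memory(const uint8_t *data, size_t size)
{
    struct mem_reader r = { data, size, 0 };
    uint8_t *iobuf = av_malloc(4096);
    AVIOContext *pb = avio_alloc_context(iobuf, 4096, 0, &r,
                                         mem_read, NULL, NULL);
    AVFormatContext *fmt = avformat_alloc_context();
    AVFrame *frame = av_frame_alloc();
    AVCodecContext *dec;
    AVPacket pkt;

    fmt->pb = pb;                       /* custom I/O instead of a file */
    if (avformat_open_input(&fmt, NULL, NULL, NULL) < 0 ||
        avformat_find_stream_info(fmt, NULL) < 0)
        return NULL;

    dec = avcodec_alloc_context3(
        avcodec_find_decoder(fmt->streams[0]->codecpar->codec_id));
    avcodec_parameters_to_context(dec, fmt->streams[0]->codecpar);
    if (avcodec_open2(dec, NULL, NULL) < 0)
        return NULL;

    if (av_read_frame(fmt, &pkt) < 0 ||
        avcodec_send_packet(dec, &pkt) < 0 ||
        avcodec_receive_frame(dec, frame) < 0)
        return NULL;
    return frame; /* caller converts to YUVA420P as needed */
}
```

The essential point is that the AVIOContext read callback replaces the temp-file round trip; the rest is the ordinary open/decode sequence.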

> - I'm not 100% sure how to write the test.  I added a block=1 option to
>   the filter so that it will block on each frame to read in an image from
>   the pipe; this is designed for testing only (normally, you want
>   non-blocking reads).  But I don't know how to leverage FATE to build a
>   test that runs ffmpeg and, in another process, writes files to the pipe.
>   I think I can do it if I add a new function to fate-run.sh, but I don't
>   know if that is discouraged.

A test can be added later.

> - Portability - I'm worried this is the big one.  mkfifo isn't readily
>   available on Windows without compatibility libraries, and even then,
>   I'm not sure whether they would work the same way they do under *nix.
>   Generally speaking, how does the ffmpeg team tackle cross-platform
>   issues like this?

IIRC, named pipes are available on Windows.
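They are, though the Win32 semantics differ from POSIX FIFOs (names live under \\.\pipe\, there is no filesystem node, and a server/client handshake is involved), so the filter would likely need two code paths. A minimal sketch of pipe creation, using the standard platform APIs (the buffer sizes and mode flags below are just example choices):

```c
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/types.h>
#include <sys/stat.h>
#endif

/* Create a named pipe for the filter to read from; returns 0 on success.
 * On Windows the name must look like \\.\pipe\something; on POSIX it is
 * an ordinary filesystem path. */
int create_image_pipe(const char *name)
{
#ifdef _WIN32
    HANDLE h = CreateNamedPipeA(name, PIPE_ACCESS_INBOUND,
                                PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                                1,        /* single instance */
                                0, 65536, /* out/in buffer sizes */
                                0, NULL);
    return h == INVALID_HANDLE_VALUE ? -1 : 0;
#else
    /* POSIX FIFO: writers then just do e.g. `cat image.png > name` */
    return mkfifo(name, 0666);
#endif
}
```

On POSIX the writer side stays `cat random_image.format > named_pipe` as above; on Windows a writer would instead open the `\\.\pipe\...` name as a file.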
