[MPlayer-users] Can I create an "mplayer | filter | mencoder" pipeline?

noah at noah.org
Fri Mar 16 19:45:09 CET 2007


Hello,

I'm writing a video stabilization filter. You input shaky video and it outputs
steady video. The filter is written in Python with some C extensions for speed.
So far the stabilization results have been very good. Now I need to scale up
my proof of concept to filter an entire 20-minute video.

For development, I used mplayer to output 100 frames to JPEG files. My filter
then processed the sequence of 100 JPEG images. Finally, I used mencoder to
convert the sequence back to video. This isn't very practical for processing an
entire video. Does anyone have any ideas for how to handle this problem? My
naive approach, roughly sketched in the script below the list:

  1. script mplayer to dump N frames as JPEGs, starting from some offset
         mplayer -nosound -ss $offset_seconds -frames $N -vo jpeg:outdir=$dir shaky_video.avi
  2. process the N frames with my filter
  3. mencoder to append the N frames to a Motion JPEG video file
  4. advance the offset by N frames' worth of seconds (-ss takes seconds)
  5. repeat until no more frames
  6. convert the Motion JPEG video to MPEG.
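
In script form, the loop I have in mind looks roughly like this. It is an
untested sketch: the option names are written from memory, the directory-based
command-line interface to stabilo_filter.py is invented for illustration, and
instead of appending to one growing file it writes one Motion JPEG chunk per
pass and joins the chunks at the end, which seemed easier than true appending.

  #!/usr/bin/env python
  # chunk_driver.py -- untested sketch of the loop above.  Option names are
  # from memory; the per-directory interface to stabilo_filter.py is made up.
  import glob
  import os
  import subprocess

  SRC = 'shaky_video.avi'
  N   = 250      # frames per chunk
  FPS = 25.0     # must match the source, because -ss takes seconds, not frames

  def run(cmd):
      subprocess.check_call(cmd)

  chunk = 0
  while True:
      outdir = 'chunk_%04d' % chunk
      os.mkdir(outdir)
      # 1. dump the next N frames as JPEGs (assumes -ss lands where I want it)
      run(['mplayer', '-nosound', '-ss', str(chunk * N / FPS),
           '-frames', str(N), '-vo', 'jpeg:outdir=%s' % outdir, SRC])
      if not glob.glob(os.path.join(outdir, '*.jpg')):
          break                                # 5. no more frames
      # 2. stabilize the chunk in place (hypothetical filter interface)
      run(['./stabilo_filter.py', outdir])
      # 3. wrap the processed JPEGs into a Motion JPEG chunk, no re-encoding
      run(['mencoder', 'mf://%s/*.jpg' % outdir,
           '-mf', 'type=jpg:fps=%s' % FPS,
           '-ovc', 'copy', '-nosound', '-o', '%s.avi' % outdir])
      chunk += 1                               # 4. advance by N frames

  # 6. concatenate the chunks and convert the result to MPEG-4
  run(['mencoder'] + sorted(glob.glob('chunk_*.avi')) +
      ['-ovc', 'lavc', '-lavcopts', 'vcodec=mpeg4', '-nosound',
       '-o', 'steady_video.avi'])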

That seems clunky. I'm not even sure that will work correctly because mplayer
can only seek to key frames. I would have to make sure that my offset aligns
with the key frames in the source video. That could be a source of trouble. 
I think this approach is a bad idea, but it's the only way I know.

Is there a way to do this with a command-line pipeline?
I can modify the filter to use other still image formats besides JPEG.
For example, I would like to do something like this:

  $ mplayer shaky_video.avi | stabilo_filter.py | mencoder steady_video.avi

That of course omits the magic options that would make this work. If someone
can point me to some examples I would appreciate it. The mplayer man pages show
mencoder encoding from a pipe, so I am hopeful a filter pipeline is possible.
If this approach seems wrong, I'm happy to try another workflow.
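
To be concrete about what I am imagining: mplayer's -vo yuv4mpeg output looks
like a natural fit for the pipe. I am assuming, but have not verified, that its
file= suboption can be pointed at stdout (or at a named FIFO) and that
mencoder's y4m demuxer can read the same stream back from stdin. The skeleton
below shows the filter end of such a pipe, with my real stabilization code
hidden behind a placeholder stabilize() call:

  #!/usr/bin/env python
  # stabilo_filter.py -- skeleton: copy YUV4MPEG2 frames from stdin to stdout.
  # Intended (untested) usage, assuming the options below exist as I remember:
  #   mplayer -nosound -benchmark -vo yuv4mpeg:file=/dev/stdout shaky_video.avi \
  #     | ./stabilo_filter.py \
  #     | mencoder - -demuxer y4m -nosound -ovc lavc \
  #                -lavcopts vcodec=mpeg4 -o steady_video.avi
  import sys

  def read_line(f):
      # Read one newline-terminated header line, without the newline.
      line = ''
      while True:
          c = f.read(1)
          if not c or c == '\n':
              return line
          line += c

  def stabilize(frame, width, height):
      # Placeholder for the real motion-compensation code.
      return frame

  # Stream header: "YUV4MPEG2 W<w> H<h> F<n>:<d> ..." -- pass it through as-is.
  header = read_line(sys.stdin)
  params = dict((p[0], p[1:]) for p in header.split()[1:])
  width, height = int(params['W']), int(params['H'])
  frame_size = width * height * 3 // 2    # assumes 4:2:0 planar frames
  sys.stdout.write(header + '\n')

  while True:
      # Each frame is a "FRAME..." line followed by raw planar YUV data.
      frame_header = read_line(sys.stdin)
      if not frame_header:
          break
      data = sys.stdin.read(frame_size)
      sys.stdout.write(frame_header + '\n')
      sys.stdout.write(stabilize(data, width, height))
  sys.stdout.flush()

I like YUV4MPEG2 here because the stream header carries the frame size down
the pipe, so the filter needs no extra command-line arguments. If my guesses
about the mplayer and mencoder ends are wrong, any other frame format that
survives a pipe would suit me just as well.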

Yours,
Noah




