[FFmpeg-devel] A few filter questions
Gerion Entrup
gerion.entrup at t-online.de
Thu Jul 17 12:33:41 CEST 2014
Good day,
I'm currently working on a video signature filter for FFmpeg, which allows you to
fingerprint videos. The fingerprint is a bitstream of roughly 9 mb/s, or 2-3 mb/s
compressed. In this context, a few questions come to mind:
- Should I print the whole bitstream to stdout/stderr at the end? Or would it be
a better choice to turn it into a stream of its own? If so, what kind of stream
would that be?
(By the way, the video signature algorithm needs 90 consecutive frames, so I
could theoretically write something out every 90 frames; see the sketch after
these questions.)
- If I print the whole bitstream to stdout/stderr (my current implementation),
is there a way to use it later in an external program? The only other globally
analyzing filter I found is volumedetect, which at the end prints the calculated
results to the console via print_stats. Is there a way within the API for an
external program to use these values, or do I have to grep the output?
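
For illustration, here is a rough sketch of the two output paths I have in mind
(all names are made up, this is not my actual implementation): (a) append one
binary signature chunk to a file every 90 frames, and (b) attach per-frame values
as frame metadata, so an external program can read them through the AVFrame API
instead of grepping log output.

#include <stdio.h>
#include <stdint.h>

#include "libavutil/dict.h"
#include "libavutil/opt.h"
#include "avfilter.h"
#include "internal.h"

typedef struct SignatureContext {
    const AVClass *class;
    char    *filename;      /* option: where to dump the fingerprint     */
    FILE    *out;           /* opened in init(), closed in uninit()      */
    uint8_t  chunk[512];    /* hypothetical signature data for 90 frames */
    int      frame_count;
} SignatureContext;

static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
    AVFilterContext *ctx = inlink->dst;
    SignatureContext *s  = ctx->priv;
    char value[32];

    /* ... analyze the frame and update s->chunk here ... */

    /* (b) export a per-frame result as frame metadata */
    snprintf(value, sizeof(value), "%d", s->frame_count);
    av_dict_set(&in->metadata, "lavfi.signature.frame", value, 0);

    /* (a) flush one chunk to the output file every 90 frames */
    if (++s->frame_count % 90 == 0 && s->out)
        fwrite(s->chunk, 1, sizeof(s->chunk), s->out);

    /* analysis only, so pass the frame through unchanged */
    return ff_filter_frame(ctx->outputs[0], in);
}

With (b), the values also show up in ffprobe (e.g. ffprobe -f lavfi
"movie=input.mkv,signature" -show_frames), so nothing has to be grepped from
stderr.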
A similar example is AcoustID (a fingerprinting technique for audio). Currently
chromaprint (the AcoustID library) provides an executable (fpcalc) to calculate
the fingerprint. It uses FFmpeg to decode the audio and then its own library to
calculate the fingerprint. The better way, I think, would be to have an FFmpeg
filter for this. But is it possible to use the calculated number in an external
program without grepping the output?
Another thing that came to mind: can a filter force other filters into the
filterchain? I have noticed that when I restrict my filter to GRAY8 only, the
scale filter is automatically enabled as well.
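
The relevant part of my filter is essentially just the usual query_formats()
idiom, simplified here (error handling omitted):

#include "libavutil/pixfmt.h"
#include "avfilter.h"
#include "formats.h"
#include "internal.h"

static int query_formats(AVFilterContext *ctx)
{
    static const enum AVPixelFormat pix_fmts[] = {
        AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE
    };
    /* only GRAY8 is offered, for both input and output of this filter */
    return ff_set_common_formats(ctx, ff_make_format_list(pix_fmts));
}

As far as I understand it, format negotiation then sees that only GRAY8 is
offered and auto-inserts a scale filter on the link to convert into it.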
The reason I'm asking is the lookup step for my filter. Currently my filter
analyzes a video and produces a lot of numbers. To compare two videos and decide
whether they match or not, these numbers have to be compared. I see three
possibilities:
1. Write a VV->V filter: reimplement (copy) the code from the V->V signature
filter and output a boolean (match or no match).
2. Take the V->V filter and write a Python (or whatever) script that fetches the
output and then calculates the rest.
3. Write a VV->V filter, but enforce that the normal signature filter is applied
to both streams first, use its results, and then calculate the matching type.
Unfortunately I have no idea how to do this, or whether it is possible at all.
Can you give me any advice?
The last possibility would also allow something like two-pass volume
normalization. Currently there are a volumedetect and a volume filter. To
normalize, one could run volumedetect, fetch its output, and put the values into
the volume filter, but I currently don't see a way to do this automatically,
directly in ffmpeg.
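
The manual variant can at least be scripted against the API. A rough sketch
(function and variable names invented): take the peak that volumedetect reports
in the first pass and simply write it into the filtergraph description for the
second pass.

#include <stdio.h>
#include <libavfilter/avfilter.h>

/* Build the second-pass graph from the peak measured by volumedetect,
 * max_volume_db (e.g. -5.2 dB); negating it raises the peak to 0 dBFS. */
static int build_second_pass(AVFilterGraph *graph,
                             AVFilterInOut **inputs, AVFilterInOut **outputs,
                             double max_volume_db)
{
    char desc[64];
    snprintf(desc, sizeof(desc), "volume=%.1fdB", -max_volume_db);
    return avfilter_graph_parse_ptr(graph, desc, inputs, outputs, NULL);
}

But that still means parsing volumedetect's log output by hand, which is exactly
what I would like to avoid.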
(Once the filter is in a good state, I will try to bring it upstream.)
Best,
Gerion