[FFmpeg-devel] [PATCH] Whisper audio filter
Michael Niedermayer
michael at niedermayer.cc
Sat Jul 12 03:03:30 EEST 2025
Hi Vittorio
On Fri, Jul 11, 2025 at 10:41:04AM +0200, Vittorio Palmisano wrote:
> > > +
> > > + memcpy(wctx->audio_buffer, wctx->audio_buffer + end_pos,
> > > + end_pos * sizeof(float));
> >
> > sizeof(*wctx->audio_buffer) is more robust than float
>
> But end_pos is not necessarily equal to the audio_buffer size, it
> could be lower.
you misunderstood:
sizeof(*wctx->audio_buffer) == sizeof(float)
I was just suggesting to use the "type of the array" so as not to repeat
the type in the source.
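For example (a sketch only; it keeps the quoted count unchanged and assumes
audio_buffer is declared as float * in the filter context, as in the patch):

    /* only the element-size expression changes: sizeof(*wctx->audio_buffer)
     * stays correct even if the sample type of audio_buffer changes later */
    memcpy(wctx->audio_buffer, wctx->audio_buffer + end_pos,
           end_pos * sizeof(*wctx->audio_buffer));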
>
> >
> > not sure how others think of this, but i would ignore the 80 char limit and format this like:
> >
> > static const AVOption whisper_options[] = {
> > { "model" , "Path to the whisper.cpp model file" , OFFSET(model_path), AV_OPT_TYPE_STRING,.flags = FLAGS },
> > { "language", "Language for transcription ('auto' for auto-detect)", OFFSET(language) , AV_OPT_TYPE_STRING, {.str = "auto"}, .flags = FLAGS },
>
> I've used `indent -i4 -kr -nut` to format the code.
Human-formatted code looks better than what indent generates.
We are not literally using indent to format code;
the docs also say "The presentation is one inspired by 'indent -i4 -kr -nut'."
A human will add a space here or an empty line there, or align things, so that
everything is neatly formatted and readable.
indent is not a human and not AI.
AI produces this (I didn't verify that this is still correct, but it should
show that it's more readable):
static const AVOption whisper_options[] = {
    { "model",                    "Path to the whisper.cpp model file",                   OFFSET(model_path),               AV_OPT_TYPE_STRING, {.str = NULL},   0,   0,       FLAGS },
    { "language",                 "Language for transcription ('auto' for auto-detect)",  OFFSET(language),                 AV_OPT_TYPE_STRING, {.str = "auto"}, 0,   0,       FLAGS },
    { "queue",                    "Audio queue size in milliseconds",                     OFFSET(queue),                    AV_OPT_TYPE_INT,    {.i64 = 3000},   20,  INT_MAX, FLAGS },
    { "use_gpu",                  "Use GPU for processing",                               OFFSET(use_gpu),                  AV_OPT_TYPE_BOOL,   {.i64 = 1},      0,   1,       FLAGS },
    { "gpu_device",               "GPU device to use",                                    OFFSET(gpu_device),               AV_OPT_TYPE_INT,    {.i64 = 0},      0,   INT_MAX, FLAGS },
    { "threads",                  "Number of threads to use",                             OFFSET(threads),                  AV_OPT_TYPE_INT,    {.i64 = 4},      0,   INT_MAX, FLAGS },
    { "destination",              "Output destination",                                   OFFSET(destination),              AV_OPT_TYPE_STRING, {.str = ""},     0,   0,       FLAGS },
    { "format",                   "Output format (text|srt|json)",                        OFFSET(format),                   AV_OPT_TYPE_STRING, {.str = "text"}, 0,   0,       FLAGS },
    { "vad_model",                "Path to the VAD model file",                           OFFSET(vad_model_path),           AV_OPT_TYPE_STRING, {.str = NULL},   0,   0,       FLAGS },
    { "vad_threshold",            "VAD threshold",                                        OFFSET(vad_threshold),            AV_OPT_TYPE_FLOAT,  {.dbl = 0.5},    0.0, 1.0,     FLAGS },
    { "vad_min_speech_duration",  "Minimum speech duration in milliseconds for VAD",      OFFSET(vad_min_speech_duration),  AV_OPT_TYPE_INT,    {.i64 = 50},     20,  INT_MAX, FLAGS },
    { "vad_min_silence_duration", "Minimum silence duration in milliseconds for VAD",     OFFSET(vad_min_silence_duration), AV_OPT_TYPE_INT,    {.i64 = 500},    0,   INT_MAX, FLAGS },
    { NULL }
};
>
> >
> > Also it seems this is a lot slower than whisper-cli
> >
> > time whisper-cli matrix.wav -m ~/whisper.cpp/models/ggml-base.en.bin --output-srt
> > real 0m16,283s
> > user 1m3,644s
> > sys 0m0,581s
> >
> >
> > time ./ffmpeg -v 99 -i matrix.wav -af "aformat=sample_rates=16000:channel_layouts=mono,whisper=model=/home/michael/whisper.cpp/models/ggml-base.en.bin:language=en:queue=3000:destination=output.srt:format=srt" -f null - 2> /tmp/log
> > real 1m30,827s
> > user 6m0,590s
> > sys 0m0,756s
> >
>
> Tested with: https://github.com/vpalmisano/webrtcperf/releases/download/videos-1.0/kt.mp4
> (and you need to increase the queue param to obtain a fair
> comparison):
This should be explained better in the documentation.
It just says:
@item queue
The maximum size in milliseconds that will be queued into the filter before
processing the audio with whisper
Default value: @code{"3000"}
From reading that I have no idea that its value affects speed;
I might guess it affects latency.
Please make this a bit more elaborate, so the user has enough information
to select a queue value.
ATM she just has an example value, which seemed slow.
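Something along these lines would already help (just a sketch; it assumes the
slowdown comes from whisper running one full inference pass per queued chunk,
which you should double-check against the code):

@item queue
The maximum size that will be queued into the filter before processing the
audio with whisper, in milliseconds. Smaller values give lower latency, but
cost more CPU overall, since each queued chunk is transcribed in a separate
whisper run; larger values (for example 10000-20000) reduce that overhead at
the cost of latency.
Default value: @code{"3000"}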
thx
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
Complexity theory is the science of finding the exact solution to an
approximation. Benchmarking OTOH is finding an approximation of the exact
solution.