[FFmpeg-devel] [PATCH] Whisper audio filter
Michael Niedermayer
michael at niedermayer.cc
Tue Jul 15 00:47:13 EEST 2025
Hi Vittorio
On Mon, Jul 14, 2025 at 12:34:24PM +0200, Vittorio Palmisano wrote:
> Hi, I've added some changes to improve the VAD mechanism.
> You can find the changes here too:
> https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/17/files
>
>
> Signed-off-by: Vittorio Palmisano <vpalmisano at gmail.com>
> ---
> configure | 5 +
> doc/filters.texi | 106 +++++++++
> libavfilter/Makefile | 2 +
> libavfilter/af_whisper.c | 452 +++++++++++++++++++++++++++++++++++++++
> libavfilter/allfilters.c | 2 +
> 5 files changed, 567 insertions(+)
> create mode 100644 libavfilter/af_whisper.c
>
> diff --git a/configure b/configure
> index 6df8fa4deb..fe32bd542c 100755
> --- a/configure
> +++ b/configure
> @@ -337,6 +337,7 @@ External library support:
> --enable-vapoursynth enable VapourSynth demuxer [no]
> --disable-xlib disable xlib [autodetect]
> --disable-zlib disable zlib [autodetect]
> + --enable-whisper enable whisper filter [no]
> The following libraries provide various hardware acceleration features:
> --disable-amf disable AMF video encoding code [autodetect]
> @@ -2003,6 +2004,7 @@ EXTERNAL_LIBRARY_LIST="
> pocketsphinx
> vapoursynth
> vulkan_static
> + whisper
> "
> HWACCEL_AUTODETECT_LIBRARY_LIST="
> @@ -4059,6 +4061,7 @@ xstack_qsv_filter_deps="libmfx"
> xstack_qsv_filter_select="qsvvpp"
> pad_vaapi_filter_deps="vaapi_1"
> drawbox_vaapi_filter_deps="vaapi_1"
> +whisper_filter_deps="whisper"
> # examples
> avio_http_serve_files_deps="avformat avutil fork"
> @@ -7108,6 +7111,8 @@ enabled libvo_amrwbenc && require libvo_amrwbenc vo-amrwbenc/enc_if.h E_IF_in
> enabled libvorbis && require_pkg_config libvorbis vorbis vorbis/codec.h vorbis_info_init &&
> require_pkg_config libvorbisenc vorbisenc vorbis/vorbisenc.h vorbis_encode_init
> +enabled whisper && require_pkg_config whisper "whisper >= 1.7.5" whisper.h whisper_init_from_file_with_params
> +
> enabled libvpx && {
> enabled libvpx_vp8_decoder && {
> check_pkg_config libvpx_vp8_decoder "vpx >= 1.4.0" "vpx/vpx_decoder.h vpx/vp8dx.h" vpx_codec_vp8_dx ||
> diff --git a/doc/filters.texi b/doc/filters.texi
> index ed2956fe75..7cf7c9af51 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -7682,6 +7682,112 @@ There are 6 samples at -4 dB, 62 at -5 dB, 286 at -6 dB, etc.
> In other words, raising the volume by +4 dB does not cause any clipping,
> raising it by +5 dB causes clipping for 6 samples, etc.
> +@anchor{whisper}
> +@section whisper
> +
> +This filter runs automatic speech recognition using OpenAI's Whisper model.
> +
> +It requires the whisper.cpp library (https://github.com/ggml-org/whisper.cpp)
> +as a prerequisite. After installing the library it can be enabled using:
> +@code{./configure --enable-whisper}.
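as a side note, the end-to-end setup could look roughly like this (a sketch; it
assumes whisper.cpp's cmake install puts its pkg-config file somewhere
pkg-config already searches):

```shell
# Build and install whisper.cpp (steps are illustrative, see its README)
git clone https://github.com/ggml-org/whisper.cpp
cmake -B whisper.cpp/build whisper.cpp
cmake --build whisper.cpp/build --config Release
sudo cmake --install whisper.cpp/build

# Then enable the filter when configuring FFmpeg
./configure --enable-whisper
make -j"$(nproc)"
```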
> +
> +The filter accepts the following options:
> +
> +@table @option
> +@item model
> +The file path of the downloaded whisper.cpp model (mandatory).
> +
> +@item language
> +The language to use for transcription ('auto' for auto-detect).
> +Default value: @code{"auto"}
> +
> +@item queue
> +The maximum duration of audio that will be queued into the filter before it
> +is processed with whisper. With a small value the audio stream is processed
> +more often, but the transcription quality is lower and the required
> +processing power is higher. With a large value (e.g. 10-20 seconds) the
> +results are more accurate and less CPU is used (comparable to the
> +whisper-cli tool), but the transcription latency is higher, making it
> +unsuitable for real-time streams. Consider using the vad_model option
> +together with a large queue value.
> +Default value: @code{"3"}
> +
> +@item use_gpu
> +Whether to enable GPU support.
> +Default value: @code{"true"}
> +
> +@item gpu_device
> +The GPU device to use.
> +Default value: @code{"0"}
is this always a number ?
if so the documentation could say that
> +
> +@item destination
> +If set, the transcription output will be sent to the specified file or URL
> +(use one of the FFmpeg AVIO protocols); otherwise, the output will be logged as
> +info messages.
> +The output will also be set in the "lavfi.whisper.text" frame metadata.
the documentation should elaborate on what happens if the destination already
exists
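as a usage illustration, a transcription run combining these options could
look like this (all file names are placeholders; the format option appears
further down in the patch):

```shell
# Transcribe speech from input.wav into out.srt; the decoded audio is
# discarded through the null muxer. File names are placeholders.
ffmpeg -i input.wav \
    -af "whisper=model=ggml-base.en.bin:language=en:queue=10:destination=out.srt:format=srt" \
    -f null -
```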
[...]
> diff --git a/libavfilter/af_whisper.c b/libavfilter/af_whisper.c
> new file mode 100644
> index 0000000000..cdc6e1e839
> --- /dev/null
> +++ b/libavfilter/af_whisper.c
> @@ -0,0 +1,452 @@
> +/*
> + * Copyright (c) 2025 Vittorio Palmisano
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public License
> + * as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public License
> + * along with FFmpeg; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +
> +#include <whisper.h>
> +
> +#include "libavutil/avutil.h"
> +#include "libavutil/opt.h"
> +#include "libavutil/channel_layout.h"
> +#include "libavutil/samplefmt.h"
> +#include "libavfilter/avfilter.h"
> +#include "libavfilter/audio.h"
> +#include "libavutil/mem.h"
> +#include "libavutil/avstring.h"
> +#include "libavutil/internal.h"
> +#include "libavformat/avio.h"
> +#include "libavutil/thread.h"
> +
> +#include "formats.h"
> +
> +typedef struct WhisperContext {
> + const AVClass *class;
> + char *model_path;
> + char *language;
> + bool use_gpu;
> + int gpu_device;
> + char *vad_model_path;
> + float vad_threshold;
> + int64_t vad_min_speech_duration;
> + int64_t vad_min_silence_duration;
> +
> + int64_t queue;
> + char *destination;
> + char *format;
> +
> + struct whisper_context *ctx_wsp;
> + struct whisper_vad_context *ctx_vad;
> + struct whisper_vad_params vad_params;
> +
> + float *audio_buffer;
> + int audio_buffer_queue_size;
> + int audio_buffer_fill_size;
> + int audio_buffer_vad_size;
> +
> + int eof;
> + int64_t next_pts;
> +
> + AVIOContext *avio_context;
> + int index;
> + int64_t timestamp;
> +} WhisperContext;
> +
> +static void cb_log(enum ggml_log_level level, const char *text, void *user_data)
> +{
> + AVFilterContext *ctx = (AVFilterContext *) user_data;
> + switch (level) {
> + case GGML_LOG_LEVEL_ERROR:
> + av_log(ctx, AV_LOG_ERROR, "%s", text);
> + break;
> + case GGML_LOG_LEVEL_WARN:
> + av_log(ctx, AV_LOG_WARNING, "%s", text);
> + break;
> + case GGML_LOG_LEVEL_INFO:
> + case GGML_LOG_LEVEL_DEBUG:
> + av_log(ctx, AV_LOG_DEBUG, "%s", text);
> + break;
> + }
> +}
static void cb_log(enum ggml_log_level level, const char *text, void *user_data)
{
AVFilterContext *ctx = user_data;
switch (level) {
case GGML_LOG_LEVEL_ERROR: level = AV_LOG_ERROR ; break;
case GGML_LOG_LEVEL_WARN : level = AV_LOG_WARNING; break;
// case GGML_LOG_LEVEL_INFO : level = AV_LOG_INFO; break;
default : level = AV_LOG_DEBUG ; break;
}
av_log(ctx, level, "%s", text);
}
[...]
> + const int n_segments = whisper_full_n_segments(wctx->ctx_wsp);
> + char *segments_text = NULL;
> +
> + for (int i = 0; i < n_segments; ++i) {
> + const bool turn = whisper_full_get_segment_speaker_turn_next(wctx->ctx_wsp, i);
> + const int64_t t0 = whisper_full_get_segment_t0(wctx->ctx_wsp, i) * 10;
> + const int64_t t1 = whisper_full_get_segment_t1(wctx->ctx_wsp, i) * 10;
> + const char *text = whisper_full_get_segment_text(wctx->ctx_wsp, i);
> + char *text_cleaned = av_strireplace(text + 1, "[BLANK_AUDIO]", "");
> +
> + if (av_strnlen(text_cleaned, 1) == 0) {
> + av_freep(&text_cleaned);
> + continue;
> + }
> + av_log(ctx, AV_LOG_INFO, " [%ld-%ld%s]: \"%s\"\n",
> + wctx->timestamp + t0, wctx->timestamp + t1, turn ? " (turn)" : "", text_cleaned);
> +
> + if (segments_text) {
> + char *new_text = av_asprintf("%s%s", segments_text, text_cleaned);
> + av_freep(&segments_text);
> + segments_text = new_text;
> + } else
> + segments_text = av_strdup(text_cleaned);
> +
> + if (wctx->avio_context) {
> + const int64_t start_t = wctx->timestamp + t0;
> + const int64_t end_t = wctx->timestamp + t1;
> + char *buf = NULL;
> +
> + if (!av_strcasecmp(wctx->format, "srt")) {
> + buf = av_asprintf("%d\n%02ld:%02ld:%02ld.%03ld --> %02ld:%02ld:%02ld.%03ld\n%s\n\n",
> + wctx->index, start_t / 3600000,
> + (start_t / 60000) % 60, (start_t / 1000) % 60,
> + start_t % 1000, end_t / 3600000, (end_t / 60000) % 60,
> + (end_t / 1000) % 60, end_t % 1000, text_cleaned);
> + } else if (!av_strcasecmp(wctx->format, "json")) {
> + buf = av_asprintf("{\"start\":%ld,\"end\":%ld,\"text\":\"%s\"}\n",
> + start_t, end_t, text_cleaned);
> + } else
> + buf = av_strdup(text_cleaned);
Do you think it would make sense to use avcodec_encode_subtitle() ?
It would avoid hardcoding these "writers" and could use any we support
also please make sure to attach the next patch in a way that doesn't corrupt it.
(i used the forgejo pr to test and read most of this but i think my reply
is not very readable as i replied to the mail)
thx
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
"You are 36 times more likely to die in a bathtub than at the hands of a
terrorist. Also, you are 2.5 times more likely to become a president and
2 times more likely to become an astronaut, than to die in a terrorist
attack." -- Thoughty2