[FFmpeg-devel] [PATCH] libavcodec/cuviddec.c: increase CUVID_DEFAULT_NUM_SURFACES

Scott Theisen scott.the.elm at gmail.com
Thu Feb 20 22:37:24 EET 2025


Commit 402d98c9d467dff6931d906ebb732b9a00334e0b reduced the default value of
CuvidContext::nb_surfaces from 25 to 5, i.e. (CUVID_MAX_DISPLAY_DELAY + 1).

In cuvid_is_buffer_full(), delay can be as large as 2 * CUVID_MAX_DISPLAY_DELAY
when double-rate deinterlacing is enabled.  ctx->nb_surfaces defaults to
CUVID_DEFAULT_NUM_SURFACES = (CUVID_MAX_DISPLAY_DELAY + 1), in which case
cuvid_is_buffer_full() always returns true, so cuvid_output_frame() never calls
ff_decode_get_packet() and therefore never reads any input data.
---

I think part of the problem is that cuvid_is_buffer_full() does not know how
many frames are actually in the driver's queue and assumes it holds the
maximum, even if none have been added yet; see the sketch below.
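
For reference, the check is essentially the following (a paraphrased sketch of
cuvid_is_buffer_full(); names are taken from cuviddec.c, but details may differ
between versions):

    static int cuvid_is_buffer_full(AVCodecContext *avctx)
    {
        CuvidContext *ctx = avctx->priv_data;

        int delay = ctx->cuparseinfo.ulMaxDisplayDelay;

        // Double-rate deinterlacing emits two output frames per decoded
        // frame, so the assumed worst-case delay doubles: 2 * 4 = 8.
        if (ctx->deint_mode != cudaVideoDeinterlaceMode_Weave &&
            !ctx->drop_second_field)
            delay *= 2;

        // With the old default nb_surfaces = 5, even an empty frame queue
        // gives 0 + 8 >= 5, so this is always true.
        return av_fifo_can_read(ctx->frame_queue) + delay >= ctx->nb_surfaces;
    }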

For some streams, this prevented MythTV from decoding any frames at all via
NVDEC.  See https://github.com/MythTV/mythtv/issues/1039
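
With the proposed default of ((2 * CUVID_MAX_DISPLAY_DELAY) + 1) = 9 surfaces,
the same worst case evaluates to 0 + 8 >= 9, which is false, so
cuvid_output_frame() can call ff_decode_get_packet() and decoding can make
progress.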

---
 libavcodec/cuviddec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavcodec/cuviddec.c b/libavcodec/cuviddec.c
index 67076a1752..05dcafab6e 100644
--- a/libavcodec/cuviddec.c
+++ b/libavcodec/cuviddec.c
@@ -120,7 +120,7 @@ typedef struct CuvidParsedFrame
 #define CUVID_MAX_DISPLAY_DELAY (4)
 
 // Actual pool size will be determined by parser.
-#define CUVID_DEFAULT_NUM_SURFACES (CUVID_MAX_DISPLAY_DELAY + 1)
+#define CUVID_DEFAULT_NUM_SURFACES ((2 * CUVID_MAX_DISPLAY_DELAY) + 1)
 
 static int CUDAAPI cuvid_handle_video_sequence(void *opaque, CUVIDEOFORMAT* format)
 {
-- 
2.43.0


