[FFmpeg-devel] [PATCH 2/2] avfilter/dnn_processing: Add TensorRT backend

Xiaowei Wang xiaoweiw at nvidia.com
Fri Aug 20 20:03:44 EEST 2021


On 2021/7/25 21:04, James Almer wrote:
> On 7/25/2021 8:58 AM, Xiaowei Wang wrote:
>> The backend can be called as:
>> -vf dnn_processing=dnn_backend=tensorrt:model="model":input=:output=
>>
>> As TensorRT provides a C++ API rather than C, the TensorRT implementation
>> is separated into a wrapper.
>> The wrapper is placed in https://github.com/DutchPiPi/nv-tensorrt-wrapper
>> Please build & install the wrapper before compiling ffmpeg.
>> Please see https://github.com/DutchPiPi/FFmpeg-trt-backend-test for how
>> to configure ffmpeg and generate a TensorRT engine for tests.
>>
>> Signed-off-by: Xiaowei Wang <xiaoweiw at nvidia.com>
>> ---
>>   libavfilter/dnn/Makefile               |   2 +-
>>   libavfilter/dnn/dnn_backend_tensorrt.c |  97 +++-
>>   libavfilter/dnn/dnn_backend_tensorrt.h |  40 +-
>>   libavfilter/dnn/dnn_io_proc_trt.cu     |  55 --
>>   libavfilter/dnn/trt_class_wrapper.cpp  | 731 -------------------------
>>   libavfilter/dnn/trt_class_wrapper.h    |  49 --
>>   6 files changed, 109 insertions(+), 865 deletions(-)
>>   delete mode 100644 libavfilter/dnn/dnn_io_proc_trt.cu
>>   delete mode 100644 libavfilter/dnn/trt_class_wrapper.cpp
>>   delete mode 100644 libavfilter/dnn/trt_class_wrapper.h
>>
>> diff --git a/libavfilter/dnn/Makefile b/libavfilter/dnn/Makefile
>> index f9ea7ca386..4661d3b2cb 100644
>> --- a/libavfilter/dnn/Makefile
>> +++ b/libavfilter/dnn/Makefile
>> @@ -16,6 +16,6 @@ OBJS-$(CONFIG_DNN)                           += dnn/dnn_backend_native_layer_mat
>>
>>   DNN-OBJS-$(CONFIG_LIBTENSORFLOW)             += dnn/dnn_backend_tf.o
>>   DNN-OBJS-$(CONFIG_LIBOPENVINO)               += dnn/dnn_backend_openvino.o
>> -DNN-OBJS-$(CONFIG_LIBTENSORRT)               += dnn/dnn_backend_tensorrt.o dnn/trt_class_wrapper.o dnn/dnn_io_proc_trt.ptx.o
>> +DNN-OBJS-$(CONFIG_LIBTENSORRT)               += dnn/dnn_backend_tensorrt.o
>>
>>   OBJS-$(CONFIG_DNN)                           += $(DNN-OBJS-yes)
>> diff --git a/libavfilter/dnn/dnn_backend_tensorrt.c b/libavfilter/dnn/dnn_backend_tensorrt.c
>> index b45b770a77..e50ebc6c99 100644
>> --- a/libavfilter/dnn/dnn_backend_tensorrt.c
>> +++ b/libavfilter/dnn/dnn_backend_tensorrt.c
>> @@ -25,45 +25,119 @@
>>    * DNN TensorRT backend implementation.
>>    */
>>
>> -#include "trt_class_wrapper.h"
>>   #include "dnn_backend_tensorrt.h"
>>
>> -#include "libavutil/mem.h"
>>   #include "libavformat/avio.h"
>> +#include "libavutil/mem.h"
>>   #include "libavutil/avassert.h"
>>   #include "libavutil/opt.h"
>>   #include "libavutil/avstring.h"
>> +#include "libavutil/buffer.h"
>> +#include "libavutil/pixfmt.h"
>> +#include "libavutil/pixdesc.h"
>> +
>>   #include "dnn_io_proc.h"
>>   #include "../internal.h"
>> -#include "libavutil/buffer.h"
>> +#include "trt_class_wrapper.h"
>> +
>> +#include <stdio.h>
>> +#include <dlfcn.h>
>> +#include <libavutil/log.h>
>>   #include <stdint.h>
>>
>>   #define OFFSET(x) offsetof(TRTContext, x)
>>   #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
>>   static const AVOption dnn_tensorrt_options[] = {
>> -    { "device", "index of the GPU to run model", 
>> OFFSET(options.device), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, 
>> FLAGS },
>> +    { "device", "index of the GPU to run model", 
>> OFFSET(options.device),    AV_OPT_TYPE_INT,    { .i64 = 0 }, 0, 
>> INT_MAX, FLAGS },
>> +    { "plugin", "path to the plugin so",         
>> OFFSET(options.plugin_so), AV_OPT_TYPE_STRING, { .str = NULL}, 0, 
>> 0,     FLAGS },
>>       { NULL }
>>   };
>>   AVFILTER_DEFINE_CLASS(dnn_tensorrt);
>>
>> -DNNModel *ff_dnn_load_model_trt(const char *model_filename,DNNFunctionType func_type,
>> +static TRTWrapper *wrapper = NULL;
>> +
>> +static int load_trt_backend_lib(TRTWrapper *w, const char *so_path, int mode)
>> +{
>> +    w->so_handle = dlopen("libnvtensorrt.so", mode);
> 
> No, dlopen() is not allowed for this kind of thing. Linking must be
> added at build time.
> 
> For that matter, you apparently add support for build-time linking in
> patch 1, then attempt to remove it in this one, leaving cruft in the
> configure script. Why?
I haven't received any responses, so I'm re-sending this.

As TensorRT only provides C++ APIs, the backend implementation
inevitably contains C++ code, as in patch 1. After patch 1 was
finished, I heard that it would be better to avoid submitting C++
code, so I moved the C++ code into a C wrapper (libnvtensorrt.so). I
found that ffmpeg already uses dlopen() to call CUDA and the codec
SDK, so I thought dlopen() might be the preferable approach here as
well.
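
For clarity, here is a rough sketch of the wrapper's C surface. Only
the three function names are taken from the dlsym() lookups in the
patch; the header name, the opaque handle and the parameter lists are
illustrative placeholders, not the wrapper's actual API:

/* Illustrative sketch of the C shim exported by libnvtensorrt.so.
 * Only the three symbol names come from the patch; the header name,
 * types and parameters below are placeholders. */
#ifndef NV_TENSORRT_H
#define NV_TENSORRT_H

#ifdef __cplusplus
extern "C" {   /* the C++ TensorRT code hides behind these entry points */
#endif

typedef struct TRTModel TRTModel;  /* opaque handle owned by the wrapper */

/* Deserialize a TensorRT engine from a file onto the given GPU. */
TRTModel *load_model_trt(const char *model_filename, int device);

/* Run inference on buffers prepared by the caller. */
int execute_model_trt(TRTModel *model, const void *input, void *output);

/* Destroy the engine and execution context and release GPU resources. */
void free_model_trt(TRTModel *model);

#ifdef __cplusplus
}
#endif

#endif /* NV_TENSORRT_H */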

If dlopen() is not allowed, I can keep the C++ code in the wrapper but
link it at build time. I will also update the configure script and
change the dependency to libnvtensorrt rather than libnvinfer.
(libnvinfer is part of TensorRT; libnvtensorrt is the C wrapper around
my C++ code.)
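
With build-time linking, the dlopen()/dlsym() indirection would
disappear: the backend would call the wrapper's symbols directly and
the linker would resolve them via -lnvtensorrt. A minimal sketch,
reusing the hypothetical interface above (the model field on
TRTContext and the helper name are also hypothetical):

/* Sketch only: build-time linking removes load_trt_backend_lib() and
 * the TRTWrapper function pointers; the linker resolves the symbols
 * from -lnvtensorrt instead of dlsym(). */
#include "nv_tensorrt.h"   /* hypothetical wrapper header, see above */

static int trt_load_model(TRTContext *ctx, const char *model_filename)
{
    /* direct call instead of w->load_model_func(...) through dlsym() */
    ctx->model = load_model_trt(model_filename, ctx->options.device);
    if (!ctx->model)
        return AVERROR(EINVAL);
    return 0;
}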
> 
>> +    if (!w->so_handle)
>> +    {
>> +        return AVERROR(EIO);
>> +    }
>> +
>> +    w->load_model_func = (tloadModelTrt*)dlsym(w->so_handle, "load_model_trt");
>> +    w->execute_model_func = (texecuteModelTrt*)dlsym(w->so_handle, "execute_model_trt");
>> +    w->free_model_func = (tfreeModelTrt*)dlsym(w->so_handle, "free_model_trt");
>> +    if (!w->load_model_func || !w->execute_model_func || !w->free_model_func)
>> +    {
>> +        return AVERROR(EIO);
>> +    }
>> +
>> +    return 0;
>> +}
> 