[FFmpeg-devel] [PATCH 1/2] dnn: add openvino as one of dnn backend

Guo, Yejun yejun.guo at intel.com
Mon Jun 1 08:28:43 EEST 2020



> -----Original Message-----
> From: Pedro Arthur <bygrandao at gmail.com>
> Sent: June 1, 2020 0:45
> To: FFmpeg development discussions and patches <ffmpeg-devel at ffmpeg.org>
> Cc: Guo, Yejun <yejun.guo at intel.com>
> Subject: Re: [FFmpeg-devel] [PATCH 1/2] dnn: add openvino as one of dnn
> backend
> 
> Hi,
> 
> 
> On Mon, May 25, 2020 at 22:56, Guo, Yejun <yejun.guo at intel.com>
> wrote:
> >
> > OpenVINO is a Deep Learning Deployment Toolkit at
> > https://github.com/openvinotoolkit/openvino; it supports CPU, GPU and
> > heterogeneous plugins to accelerate deep learning inference.
> >
> > Please refer to
> > https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md
> > to build openvino (the C library is built at the same time). Please
> > add the cmake option -DENABLE_MKL_DNN=ON to enable the CPU path. With
> > default options on my system, the header files and libraries are
> > installed to /usr/local/deployment_tools/inference_engine/.
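
For reference, a minimal build sketch of what I did (the steps in
build-instruction.md are authoritative and may change between releases):
$ cd openvino && mkdir build && cd build
$ cmake -DENABLE_MKL_DNN=ON ..
$ make -j$(nproc) && sudo make install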
> >
> > To build FFmpeg with openvino, taking my system as an example, run:
> > $ ../ffmpeg/configure --enable-libopenvino \
> >   --extra-cflags=-I/usr/local/deployment_tools/inference_engine/include/ \
> >   --extra-ldflags=-L/usr/local/deployment_tools/inference_engine/lib/intel64
> > $ make
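
Once a filter is wired up to the new backend, usage will look roughly like
this sketch (the exact dnn_processing option values depend on the follow-up
patch; this patch only adds the backend itself):
$ ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=y output.mp4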
> >
> > As the dnn module maintainer, I do want to see it utilized by
> > customers, so that the dnn module can be improved in the right
> > direction through developer/customer
> 
> I agree with you, yet it is not clear to me what the right direction is.
> Currently we have the native and TensorFlow backends; does OpenVINO bring
> something our current backends lack?

Yes, we are already headed in the right direction; what I mean is the
optimization work. We can first find out which workloads customers care
about most, and then profile the bottlenecks.

I'll remove this unclear wording (from this paragraph to the end) and add the points below in V2.

> 
> Reading the docs I see a few points (and questions) that may be pro
> OpenVINO. If you can confirm them, I think it would be worth adding
> another backend.
> * It has a dedicated inference engine and can optimize a model for
> inference, thus speeding it up

Yes. The Model Optimizer optimizes the model offline during conversion
(e.g. by fusing layers), and the Inference Engine applies further
device-specific optimizations at load time.

> * It can convert from various common model formats

Yes, it can convert models from TensorFlow, Caffe, ONNX, MXNet and Kaldi.
A PyTorch model is usually first converted to ONNX and then to the
OpenVINO format.
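
For example, converting a TensorFlow frozen graph with the Model Optimizer
looks roughly like this (the script lives in the openvino source tree;
location and flags may differ by release):
$ cd openvino/model-optimizer
$ python3 mo_tf.py --input_model frozen_model.pb --output_dir ir/
The output is a pair of files, frozen_model.xml (the topology) and
frozen_model.bin (the weights), which the inference engine loads directly.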

> * It supports CPU and GPU out of the box. TF also supports GPU, but only
> CUDA-capable ones, and it needs different installations of the library
> (one for CPU and another for GPU). Does the OpenVINO CPU backend run well
> on non-Intel CPUs? I mean it does not need to be equally good, but at
> least decent.
> Does the GPU support run on non-Intel GPUs? I think this is a really
> important point; it seems it is using OpenCL, so if it can run on any
> OpenCL-capable GPU it would be a great upgrade over TF.

Yes, they are supported. I'll do some experiments to double-check anyway.
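
One quick way to verify is OpenVINO's bundled benchmark_app sample
(assuming it has been built from the samples); the same IR runs on either
device just by switching the -d flag:
$ ./benchmark_app -m frozen_model.xml -d CPU
$ ./benchmark_app -m frozen_model.xml -d GPU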

> 
> 
> >
> > collaboration, but I seldom receive feedback.
> >
> > On the other hand, I know that there are video analytics projects
> > accepted by customers based on FFmpeg + openvino, see more detail
> Being used is a good point, but I think there must be some improvement
> over our current backends to justify it; otherwise one may ask why not
> add any other dnn library from the huge list of 'yet another dnn library'.

Agreed, thanks.

> In short, I think it is a good addition if you can confirm the above points.
> 
> > at https://github.com/VCDP/FFmpeg-patch, but the code bypasses the dnn
> > interface layer and could not be upstreamed directly.
> >
> > So, I introduce openvino as one of the dnn backends in preparation
> > for later usage.
> >

