[FFmpeg-user] How does FFmpeg + NVENC scale with multiple instances?
Sreenath BH
bhsreenath at gmail.com
Fri Mar 25 07:03:18 CET 2016
On 3/24/16, PSPunch <shima at pspunch.com> wrote:
> Hi,
>
> Trying to build a real-time transcoding farm based on FFmpeg + NVENC.
> I see many benchmark reports on encoding frame rate, but I am not sure
> how they scale as the number of encoding processes increases.
>
> I would appreciate advice from those with insight on the following.
>
>
> 1)
> As of today, what is the most affordable NVIDIA card that allows 3 or
> more instances of FFmpeg + NVENC to run?
>
>
> 2)
> In the case of the following application, roughly how many instances can
> I expect to run simultaneously?
>
> CPU: A single modern Core i7 (Rather than multi CPU)
> OS: Windows or Linux
> Input: MPEG2 or H.264, Mainly SD but occasionally HD
> Output: H.264, SD
> Other: De-interlace. Quality has priority over latency.
>
>
> 3)
> Any other bottlenecks/pitfalls I should be aware of?
>
>
> Thank you.
> --
> David Shimamoto
>
Here are my observations from using the NVIDIA encoder for the past
two weeks, so take this as very limited experience.
1. You have to assign a GPU by passing the "-gpu <number>" option to ffmpeg,
so you need to figure out which GPU is free and use that (see the example
commands after this list).
2. I could be wrong, but once a GPU is assigned, the process will use only
that GPU and will not switch to another one. If you have only one GPU,
this is not an issue.
3. When you run two ffmpeg instances on the same GPU, the time taken
to transcode almost doubles.
We have four GPUs and run four instances simultaneously, one per GPU.
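
In case it helps, here is a rough sketch of how such a setup can be
launched. This assumes a machine with four GPUs; the encoder may be named
"nvenc" or "h264_nvenc" depending on your FFmpeg build, and the file names
are placeholders:

  # See which GPUs are busy before picking one (assumes nvidia-smi is available)
  nvidia-smi --query-gpu=index,utilization.gpu --format=csv

  # Pin one ffmpeg instance to each GPU with the nvenc "-gpu" option
  ffmpeg -i input0.ts -c:v h264_nvenc -gpu 0 out0.mp4 &
  ffmpeg -i input1.ts -c:v h264_nvenc -gpu 1 out1.mp4 &
  ffmpeg -i input2.ts -c:v h264_nvenc -gpu 2 out2.mp4 &
  ffmpeg -i input3.ts -c:v h264_nvenc -gpu 3 out3.mp4 &
  wait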
The NVIDIA encoder, when used for transcoding, produces files with higher
bitrates (and larger file sizes) than the libx264 software encoder. I
have not found a way to lower the bitrate without losing picture
quality.
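
That said, the nvenc encoder does expose rate-control options that may be
worth experimenting with. This is only a sketch with illustrative values;
check what your build actually supports with "ffmpeg -h encoder=h264_nvenc":

  # Constrain the bitrate with VBR rate control (values are illustrative)
  ffmpeg -i input.ts -c:v h264_nvenc -rc vbr -b:v 2M -maxrate 3M -bufsize 6M out.mp4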
rgds,
Sreenath