[FFmpeg-user] CPU and GPU
Ted Park
kumowoon1025 at gmail.com
Mon Mar 2 06:01:24 EET 2020
Hi,
> So codec engineering companies like NGCodec, MainConcept, Beamr and MulticoreWare turn open-source-based FFmpeg workflows into FPGA implementations that, once the firmware matures, chip companies like Intel & NVIDIA turn into real hardware: masked GPUs. Do I have that right?
I think that is a bit of a stretch… It almost makes it sound like they copy-paste a bunch of code from open source projects into a box that turns it into custom silicon designs. Besides, I think the way Intel or NVIDIA might use FPGAs (other than in their FPGA products) is probably very different from the way a software codec is accelerated using an FPGA.
> What struck me watching the ProRes transcodes was how the process used all 8 cores and (presumably) hyper-threading (representing as 16 cores being used)
Scalability is a bigger focus in some codecs than in others, and the tradeoff shows up elsewhere, e.g. in compression efficiency.
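You can see that tradeoff directly in x264, for instance: slice-based threading splits each frame into independently coded slices, which parallelizes nicely but costs some compression because prediction can't cross slice boundaries. A rough sketch (the options are real, the file names are just placeholders):

    # Encode with slice-based threading instead of x264's default frame threading;
    # scales a bit more predictably per frame, at a small compression cost
    ffmpeg -i input.mov -c:v libx264 -x264-params sliced-threads=1:threads=8 output.mp4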
> Is FFmpeg making the determination how to distribute across the cores, or is the machine (in this case macOS) making those decisions? Does FFmpeg have any control over these things, and if so, is there any reason an FFmpeg user might want manual control -- say, to either minimize processing time ("use 'em all") or manage computing resources ("use only cores 1, 2, and 3, but leave 4, 5, and 6 free for something else")?
Hardware resources at the level of physical CPU cores are managed by the lowest layers of the OS, which exposes them to applications through more usable abstractions, so FFmpeg itself doesn't decide which physical core its threads land on; it just asks for a number of threads and the scheduler places them.
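That said, you can steer the scheduler from outside FFmpeg if you really want the "leave some cores free" behavior. A sketch, assuming a Linux box, since macOS doesn't expose user-level core pinning (file names are placeholders):

    # Pin the whole ffmpeg process (and all its threads) to cores 0-2,
    # leaving the remaining cores free for other work
    taskset -c 0-2 ffmpeg -i input.mov -c:v libx264 output.mp4

On macOS the closest you get is limiting the thread count with ffmpeg's own -threads option and letting the scheduler do the rest.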
Media Encoder capping itself below a certain amount of CPU time might be an example of why you'd want that: on many machines it runs alongside, and shares resources with, e.g. Premiere, which is treated as the higher-priority application.
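If you want similar "stay out of the way" behavior from FFmpeg, lowering its scheduling priority is the usual approach rather than pinning cores. A minimal sketch that works on both macOS and Linux (file names are placeholders):

    # Run the transcode at reduced priority so interactive apps stay responsive
    nice -n 10 ffmpeg -i input.mov -c:v prores_ks output.mov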
> Was also curious: Does anyone know if Media Encoder is using FFmpeg under the hood?
I believe AME uses MainConcept.
Regards,
Ted Park