[FFmpeg-user] Unsync between Audio and Video and other Issues
JackDesBwa
jackdesbwa at desbwa.org
Sun Sep 22 23:03:32 EEST 2019
I am pretty sure that being (actively or passively) aggressive will solve
nothing for anybody in this discussion, on either side.
I am not a long-term user and am still learning how to use ffmpeg;
however, I can add some information to help solve the problem.
Your system cannot handle the throughput of your data, as already stated.
Bear in mind that you have {3440 (width) × 1440 (height) × 24 (bits per
pixel) × 30 (fps) ≈ 3.5 Gbps and also 2 (inputs) × 2 (channels) ×
48000 (sample rate) × 16 (bits per sample) ≈ 3 Mbps} of data to analyze
in order to understand its structure so that it can be compressed. You
have powerful hardware, but video compression is tremendously hard work
that requires a lot of computation, especially at those resolutions.
As far as I can see, you are using the libx264 encoder, which does not
use hardware acceleration.
I am surprised that your CPU is not fully loaded, as this encoder does
have multi-thread support.
However, multi-threading by itself is not a magic word. Some problems do
not parallelize well, which means that adding cores does not speed things
up proportionally (and then the additional cores may end up with no work
to do). It is possible that video compression (especially live encoding)
does not scale well because, from the little I know, it needs to know
about the neighboring frames, which is often a barrier in parallel
programming (another tremendously hard discipline).
As was already suggested, you can try a hardware-accelerated encoder in
case it helps (https://trac.ffmpeg.org/wiki/HWAccelIntro). It is not easy
to get working at first, but it can bring an impressive speedup (although
with your powerful CPU it might not be extremely impressive). Your CPU
compresses slightly below realtime, so maybe even a small acceleration
would suffice.
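For instance, you can first check whether your ffmpeg build has NVENC
support at all (a quick sketch; the exact output depends on how your
ffmpeg was built):

    ffmpeg -hide_banner -encoders | grep nvenc

If h264_nvenc shows up in the list, the substitution below should be
worth trying.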
To use your nvidia board, perhaps you can start by replacing
"-vcodec libx264" with "-vcodec h264_nvenc". There might be some other
options to tweak after that.
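As a minimal sketch (everything except the codec swap is a placeholder,
since I do not know your full command; keep your own input and audio
options):

    # before
    ffmpeg <your input/audio options> -vcodec libx264 output.mkv
    # after
    ffmpeg <your input/audio options> -vcodec h264_nvenc output.mkv

You can also give the encoder a target bitrate with "-b:v", for example
"-b:v 10M", if the default quality does not suit you.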
You can also downscale as soon as possible (also proposed, and you said
it would be possible) with a "scale=2580:1080" filter (
https://ffmpeg.org/ffmpeg-filters.html#scale-1) in your filter_complex,
for example (see the documentation & examples for how to add it to the
command in addition to your amix filter; a sketch follows below). Note
that the scale filter separates width and height with a colon. I would
say that adding a second -filter_complex argument would also work, but I
am not totally sure. Since 2580×1080 has 43.75% fewer pixels than
3440×1440, this would remove more than 40% of the volume of data to
analyze and hopefully stay within your target.
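A rough sketch of what the combined filter graph could look like (the
input order and stream labels are assumptions, since I do not know your
exact command; adjust them to your setup):

    ffmpeg -i <video input> -i <audio input 1> -i <audio input 2> \
        -filter_complex "[0:v]scale=2580:1080[v];[1:a][2:a]amix=inputs=2[a]" \
        -map "[v]" -map "[a]" \
        -vcodec h264_nvenc output.mkv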
There might be other things to test, but these two are already good
avenues to explore.
Also, you might try software that is specialized in screen recording,
which is likely optimized for this specific task.
JackDesBwa