[Ffmpeg-devel-irc] ffmpeg.log.20190826

burek burek at teamnet.rs
Sun Sep 15 18:01:49 EEST 2019


[02:53:24 CEST] <Diag> Ok im frigdickin confused
[02:54:17 CEST] <Diag> if i do -vsync 0 and it gives me ~10.5 fps of encoding, then do no vsync and i get 30 fps of encoding, and its coming from a live source
[02:54:24 CEST] <Diag> what the happ is hellening
[03:22:42 CEST] <Diag> Is it true that according to google, only vertical resolution/40 threads can be used for encoding
[03:24:09 CEST] <DHE> depends on the codec. some of them split the frame into segments for encoding (sliced threading) in which case there is a limit to the slicing. some don't
[03:24:16 CEST] <Diag> h264, sorry
[03:24:24 CEST] <Diag> should have specified
[03:24:51 CEST] <DHE> I believe vertical resolution/16 is the limit, but x264 has frame-based threading which is generally better (and the default)
[03:24:51 CEST] <pink_mist> there's more than one option for encoding h264 in ffmpeg, it may depend on the specific encoder you're using
[03:25:30 CEST] <Diag> theres more than one h264 cpu encoder?
[03:25:44 CEST] <DHE> CPU based, I don't think so
[03:25:46 CEST] <Diag> im using libx264
[03:26:19 CEST] <Diag> So is it invalid of me to specify -threads and just let it do what it wants to do
[03:26:39 CEST] <Diag> I mean doing -threads 44 let me encode with veryslow instead of slow/slower
[03:27:05 CEST] <DHE> wow are you using an e5-2699 for a single livestream?
[03:27:38 CEST] <DHE> oh wait you had the funky 2696
[03:27:39 CEST] <Diag> 2696, but yeah
[03:27:42 CEST] <Diag> im just recording
[03:27:43 CEST] <Diag> :V
[03:27:57 CEST] <Diag> Im the retard that uses sharex to snag clips of things
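The threading behaviour discussed above can be sketched roughly as follows (filenames and flag values are illustrative, not from the chat; x264's frame-based threading is the default and usually beats slice threading):

```shell
# Frame-based threading (libx264 default): -threads 0 lets the encoder decide.
# Generally faster and higher quality than sliced threading.
ffmpeg -i input.mkv -c:v libx264 -preset veryslow -threads 0 frame_threads.mkv

# Sliced threading splits each frame horizontally; the number of useful
# slices is bounded by the frame height (roughly height/16 macroblock rows),
# so large thread counts stop helping at some point.
ffmpeg -i input.mkv -c:v libx264 -preset veryslow \
       -x264-params sliced-threads=1 sliced.mkv
```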
[03:28:09 CEST] <furq> Diag: there's openh264 as well
[03:28:12 CEST] <Diag> oh?
[03:28:19 CEST] <DHE> don't use that. it's baseline only iirc
[03:28:20 CEST] <furq> but that's baseline only afaik
[03:28:20 CEST] <Diag> Is that worth taking a peek at?
[03:28:22 CEST] <Diag> oh lol
[03:28:22 CEST] <furq> it was until very recently
[03:28:30 CEST] <DHE> something about licensing
[03:28:31 CEST] <furq> it's cisco's thing for webrtc
[03:28:34 CEST] <Diag> kek
[03:28:40 CEST] <furq> they have some kind of patent indemnification clause in there
[03:29:01 CEST] <Diag> Im lost as to if its the screen capture doohickey thing that sharex uses or ffmpeg being funky
[03:29:17 CEST] <Diag> If i set vsync modes i get totally bizarre results
[03:29:24 CEST] <furq> honestly for screen capture on windows just use obs
[03:29:34 CEST] <Diag> Yeah but i like being able to draw a rectangle and record that
[03:29:45 CEST] <Diag> because usually i only record things for 5-10 seconds
[03:29:57 CEST] <Diag> Else i just use relive :shrug:
[03:30:15 CEST] <Diag> Im going to pull up the capturer as a source and see if its framerate is going funky or what
[03:30:46 CEST] <Diag> Actually, im gonna ask, but i assume that the gdi grab in ffmpeg is so slow its not worth looking at?
[03:31:19 CEST] <furq> depends
[03:31:33 CEST] <furq> last time i tried it for a game it was not fast enough
[03:31:46 CEST] <Diag> I 'never' do anything larger than probably 1280x1024 and mostly everything is just a window on the desktop
[03:31:58 CEST] <furq> this was conveniently a 1280x960 window
[03:32:05 CEST] <Diag> oh well imagine that
[03:32:36 CEST] <furq> i assume it's not a game though because then you'd just use something with ogl/dx hooks for capturing
[03:32:40 CEST] <Diag> Correct
[03:32:54 CEST] <Diag> I just happen to be capturing an emulator in my test sample
[03:33:07 CEST] <Diag> because it has multiple types of motion, and a static background
[03:33:13 CEST] <Diag> basically the perfect test vid XD
[03:33:45 CEST] <Diag> Framerate of that is 60, and its locked to the desktops refresh rate
[03:34:29 CEST] <Diag> Ima give gdigrab a shot here and just see what that yields. The only issue is that i cant get around the beep beep -tune zerolatency in sharex really then
[03:39:08 CEST] <Diag> furq: complains about a thread message queue, but it seems to report the proper framerate, ill test it real quick http://tyronesbeefarm.com/images/2019d1eb459d-c503-441f-90ed-692ae5de3a64.png
[03:40:21 CEST] <Diag> :shrug: here goes
[03:41:47 CEST] <furq> just tried it on a better computer and it did ok
[03:42:15 CEST] <Diag> is the gdigrab whatchamacallit threaded?
[03:42:26 CEST] <Diag> obviously i slump a little in single thread performance
[03:43:05 CEST] <furq> it'll probably be fine
[03:43:17 CEST] <furq> just capture to ffvhuff and then you'll be able to turbo
[03:43:18 CEST] <Diag> cool, i was just about to actually test it here
[03:43:23 CEST] <Diag> kek
[03:43:26 CEST] <furq> https://0x0.st/z4iC.mp4
[03:43:29 CEST] <furq> this is pretty passable
[03:43:39 CEST] <Diag> oh wow yeah
[03:43:41 CEST] <Diag> thats slammin
[03:43:55 CEST] <Diag> I also love tetris
[03:45:07 CEST] <furq> i don't think gdigrab got any better so idk why it actually works now
[03:45:22 CEST] <Diag> Improvements in gdi?      /s
[03:45:24 CEST] <furq> probably because windows 10 is a much better operating system
[03:45:40 CEST] <furq> sorry i have to go and rinse the sick out of my mouth now
[03:45:46 CEST] <Diag> lol
[03:50:36 CEST] <Diag> cpu usage appears lower....
[03:52:12 CEST] <Diag> ok, i see why it was slower
[03:52:17 CEST] <Diag> its slow as balls
[03:52:26 CEST] <Diag> was lower, not was slower*
[03:55:03 CEST] <Diag> oh man i may have just realized what the forklift is goin on
[04:04:13 CEST] <Diag> furq: what the hell happens if you specify the framerate *before* the input stream
[04:04:27 CEST] <Diag> does it do some nonsense with the timestamps to try and pull out some frame timing whatever?
[04:04:46 CEST] <furq> it just sets the input framerate
[04:04:51 CEST] <furq> you'll presumably want -framerate 60
[04:04:58 CEST] <Diag> Yeah but like, how does it determine what to do
[04:04:59 CEST] <furq> it defaults to 30 which i think just means it drops alternate frames
[04:05:01 CEST] <Diag> Ah
[04:05:02 CEST] <Diag> ok
[04:05:22 CEST] <Diag> for some reason i have a feeling that that is whats giving me nonsense, so im trying 60 without specifying it
[04:05:29 CEST] <Diag> because the stream is '60' itself
[04:05:55 CEST] <Diag> so im gonna do that and vsync 0 and see what happens
[04:06:41 CEST] <Diag> getting rid of the input framerate made it stop encoding at 10fps when i set vsync
[04:07:03 CEST] <Diag> much like that kid behind the 7-11, my hopes are high
[04:08:39 CEST] <furq> $ ffmpeg -framerate 60 -f gdigrab -i "title=HEBORIS C7-EX DirectX9" -c:v ffvhuff out.nut
[04:08:42 CEST] <furq> that's all i did
[04:08:56 CEST] <Diag> I switched off of gdigrab because gdigrab was horribly slow
[04:09:00 CEST] <furq> oh right
[04:09:12 CEST] <Diag> http://tyronesbeefarm.com/images/201985b75c16-8476-42e9-a7fc-987503bf7206.mp4
[04:09:24 CEST] <Diag> this is what i got, and ive got no idea if its zerolatency giving me that or what
[04:10:18 CEST] <Diag> it only used like 10% cpu, and before i was using like 25-30
[04:11:20 CEST] <Diag> actually disregard, it DOES seem like its this stupid source thats the issue. gdi
[04:11:37 CEST] <nicolas17> doesn't gdi stand for "god damn it"?
[04:11:38 CEST] <furq> remember zerolatency turns off frame threading
[04:11:45 CEST] <Diag> oh shit yeah
[04:11:51 CEST] <furq> so that might explain the bad cpu usage
[04:12:00 CEST] <furq> and slice threading is restricted by resolution
[04:12:02 CEST] <Diag> Lemme build a command again. I was just trying to be faster
[04:12:29 CEST] <furq> zerolatency is slower, perversely
[04:12:33 CEST] <Diag> Sure
[04:12:38 CEST] <furq> like i said you should never use it
[04:12:41 CEST] <Diag> I mean literally me being faster
[04:12:58 CEST] <Diag> Its default in sharex and i couldnt be assed to generate a new command for ffmpeg with it so i could get around it
[04:13:10 CEST] <nicolas17> zerolatency for creating a video file makes no sense, it's designed for latency-sensitive live streaming
[04:13:14 CEST] <furq> it's there specifically for doing realtime livestreaming for cctv or something where you absolutely must have <1sec latency
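The zerolatency point in concrete terms: -tune zerolatency disables frame threading and lookahead, so for plain recording it is both slower and lower quality. A hedged comparison sketch (the capture source and UDP address are assumptions):

```shell
# Recording: leave zerolatency off so frame threading and lookahead stay on
ffmpeg -f gdigrab -framerate 60 -i desktop -c:v libx264 -preset veryfast -crf 23 rec.mp4

# Realtime streaming, where sub-second latency actually matters
ffmpeg -f gdigrab -framerate 60 -i desktop -c:v libx264 -preset veryfast \
       -tune zerolatency -f mpegts udp://192.168.0.10:1234
```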
[04:13:22 CEST] <Diag> Oh totally
[04:13:27 CEST] <Diag> tell Jaex that
[04:13:29 CEST] <nicolas17> video conferencing, cloud gaming...
[04:13:31 CEST] <furq> i have
[04:13:32 CEST] <Diag> XD
[04:13:34 CEST] <furq> he wasn't interested
[04:13:46 CEST] <Diag> *shakes fist angrily even though i like the software a lot*
[04:14:12 CEST] <Diag> I wonder if theyre gonna be able to put 2 and 2 together and realize i was talking about this in the discord as well
[04:16:29 CEST] <Diag> furq: im gonna assume "frame= 1125 fps= 29 q=-1.0 Lsize=    2304kB time=00:00:37.40 bitrate= 504.7kbits/s dup=869" means that the timestamps aren't changing on the input frames, or am i just stupid
[04:22:37 CEST] <Diag> nah, gdigrab is just clunky for me :/
[04:47:04 CEST] <Classsic> Hi, does somebody know why I get stuttering playback when using "-use_wallclock_as_timestamps 1"?
[04:49:09 CEST] <Classsic> try this command: "ffmpeg -use_wallclock_as_timestamps 1 -probesize 10M -thread_queue_size 2048 -fflags +genpts -i rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov -an -copyts  -fflags +genpts -vcodec libx264 -preset ultrafast   -crf 27  -f flv -|ffplay  -f flv -"
[04:49:56 CEST] <Diag> Wouldnt that be because of discrepancies between timestamps on your machine, and timestamps on the source video?
[04:50:24 CEST] <Classsic> exist any way to fix that?
[04:50:40 CEST] <Diag> Im just talking out of my ass, im really not sure if thats it
[04:51:07 CEST] <Classsic> you can try the command, so verify if get the same results.
[04:51:17 CEST] <Diag> how big is the video file
[04:51:30 CEST] <Classsic> is live rtsp input
[04:51:41 CEST] <Diag> ah ok, gimme a sec ill have a go in a minute
[04:51:42 CEST] <Classsic> very small resolution
[04:51:48 CEST] <Classsic> great
[04:57:29 CEST] <Diag> that command does like, sorta nothing
[04:57:32 CEST] <Diag> for me*
[04:59:14 CEST] <Classsic> play smoothly?
[04:59:24 CEST] <Diag> oh nevermind i see what the deal was with mine
[05:00:21 CEST] <Diag> Yeah, having that extra flag in there makes mine do nothing at all except bitch and drop frames i guess
[05:01:50 CEST] <Diag> Classsic: when i remove the wallclock as timestamps the video plays normally for me
[05:02:26 CEST] <Classsic> I need this parameter to sync multiple inputs
[05:02:31 CEST] <Diag> :shrug:
[05:03:26 CEST] <Diag> In that case the stuttering youre getting is very likely exactly what i said it was
[05:03:29 CEST] <Diag> discrepancies
[05:03:37 CEST] <Diag> how else is it going to sync it?
[05:04:07 CEST] <Classsic> right
[05:04:24 CEST] <Classsic> I will try transcoding in the same machine
[05:06:09 CEST] <Classsic> and try, but I don't think this will work
[05:06:09 CEST] <Diag> Well, as far as my nonsense goes, i think i got it resolved...?
[05:09:26 CEST] <Diag> awww yee
[05:09:29 CEST] <Diag> we cookin with GAS now
[05:09:48 CEST] <Diag> http://tyronesbeefarm.com/images/20199590cdb8-16dc-4fdb-a779-3cc51b434022.png
[05:09:49 CEST] <Diag> im kek
[09:22:10 CEST] <WereCatf> How is one supposed to use qsv for decoding under Windows, if the primary GPU is a discrete NVIDIA-one? ffmpeg -hwaccel qsv -c:v h264_qsv crashes with segfault, and using -hwaccel_device either results in the same or complaints about qsv hwaccel not being initialized
[09:24:37 CEST] <JEEB> well, first of all you need to enable the iGPU in your UEFI/bios. then I remember stories of having to make the driver get enabled by telling the windows displays thing that there totally, totally is a screen connected to the iGPU that it just can't find automatically
[09:25:32 CEST] <JEEB> these could have changed since but I remember poking at this like 4 (?) years ago
[09:37:41 CEST] <WereCatf> The funny thing is, encoding with qsv works fine. It's just the decoding-part which doesn't.
[09:42:33 CEST] <WereCatf> To clarify: yes, the iGPU is enabled in BIOS and I have an external HDMI-dongle that pretends to be an actual display. Since it's an actual hardware-dongle, nothing on the PC-side can tell that there isn't a display connected.
[09:48:00 CEST] <JEEB> ok, then I don't know :P
[09:48:23 CEST] <JEEB> I would probably for just decoding utilize d3d11va or dxva2 since after the early 2010s that side should be pretty good on intel as well
[09:48:48 CEST] <JEEB> I've personally moved to utilizing those everywhere instead of switching to QSV on intel
[09:49:18 CEST] <JEEB> of course you'll probably have to pick the correct device with those APIs, which at least through API should be possible (and if it's AVOptions most likely through ffmpeg.c as well)
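A sketch of picking the decode device with the DXVA2/D3D11VA hwaccels JEEB suggests (the device index 1 and input filename are assumptions; index handling can vary by ffmpeg version, so check your adapter ordering):

```shell
# Decode via D3D11VA on the second adapter (index is system-specific)
ffmpeg -hwaccel d3d11va -hwaccel_device 1 -i input.mp4 -f null -

# Same idea with DXVA2
ffmpeg -hwaccel dxva2 -hwaccel_device 1 -i input.mp4 -f null -
```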
[10:16:03 CEST] <WereCatf> I just realized what the issue was: Intel's drivers are stupid and won't let me use H/W-decoding if I have the displays in cloned mode instead of expanded in Windows-settings.... >_>
[10:16:46 CEST] <JEEB> ayup
[10:16:47 CEST] <Diag> why would you have them cloned anyways lol
[10:16:50 CEST] <WereCatf> No dxva2, D3D or anything
[10:17:02 CEST] <WereCatf> Because why would I want to expand my desktop to a non-existent display?
[10:17:11 CEST] <Diag> oh, kek
[10:17:22 CEST] <Diag> i havent backread
[10:17:34 CEST] <Diag> im assuming youre trying to use intels acceleration with some real gpu present?
[10:17:41 CEST] <WereCatf> Aye
[10:17:44 CEST] <Diag> kek
[10:17:48 CEST] <JEEB> but yea, it had to be a "real display" as far as I remember. I just stuck it somewhere on the side and hoped no windows ended up there
[10:17:49 CEST] <Diag> they used to have lucidvirtue
[10:18:00 CEST] <Diag> thats apparently dying in september though, so glhf :D
[10:18:08 CEST] <Diag> my old mboard supported it
[10:18:12 CEST] <WereCatf> The problem isn't having a second display
[10:18:22 CEST] <WereCatf> I have an HDMI-dongle that pretends to be one
[10:18:30 CEST] <WereCatf> The problem is that the drivers are stupid
[10:18:55 CEST] <Diag> https://downloadcenter.intel.com/download/19993/Lucid-Virtu-
[10:19:00 CEST] <Diag> if you havent heard of it
[10:19:16 CEST] <Diag> its retarded but it might come in handy some day
[10:24:54 CEST] <WereCatf> Hm, still can't get qsv-decoding to work, but dxva2 works now. Oh well, that's good enough.
[10:26:42 CEST] <JEEB> yea for decoding I don't really see a proper reason to use QSV any more tbh
[10:26:56 CEST] <JEEB> after intel "fixed their shit" with dxva2 and d3d11
[10:26:59 CEST] <JEEB> *d3d11va
[10:27:24 CEST] <WereCatf> Well, there's the pixel-format conversion that could be skipped
[10:27:52 CEST] <JEEB> uhh, both output NV12 and you can get it as a D3D surface if the QSV encoders take that in
[10:28:02 CEST] <JEEB> unless ffmpeg.c can't optimize that
[10:28:13 CEST] <JEEB> I've mostly dealt with no-copy hwaccels through mpv for playback
[10:28:25 CEST] <JEEB> no-copy just meaning that the texture stays in VRAM
[10:28:38 CEST] <WereCatf> I know, I use that with nvenc/nvdec
[10:29:56 CEST] <WereCatf> I'm just mostly playing with this stuff, I got curious how nvenc's HEVC-output compares to Coffee Lake iGPU's HEVC-output
[10:30:19 CEST] <WereCatf> Now that it's working, Imma check with VMAF
[13:21:05 CEST] <rocktop> is there a way to make text sliding from left to right for 5s in this cmd ? https://bpaste.net/show/QtAS
[13:23:49 CEST] <pink_mist> I have not looked at your cmd, but it's always possible to include some .ass subtitles that displays text which you have used .ass commands to animate
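Besides .ass subtitles, the drawtext filter (requires libfreetype) can do this directly; a sketch that slides text fully across the frame during the first 5 seconds (text, font size and filenames are placeholders):

```shell
# x goes from -tw (off-screen left) at t=0 to w (off-screen right) at t=5;
# enable='lt(t,5)' hides the text afterwards
ffmpeg -i input.mp4 -vf "drawtext=text='Hello':fontsize=48:fontcolor=white:y=(h-th)/2:x='-tw+(w+tw)*t/5':enable='lt(t,5)'" -c:a copy out.mp4
```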
[14:47:05 CEST] <sopparus> hello
[14:47:28 CEST] <sopparus> when restreaming ts to hls I eventually get this 'frame=42571 fps= 50 q=-1.0 Lsize=N/A time=00:14:11.39 bitrate=N/A speed=0.995x'
[14:47:33 CEST] <sopparus> and ffmpeg suddenly stops
[14:47:40 CEST] <sopparus> both video and audio are copy
[14:48:30 CEST] <sopparus> ffmpeg -re -user_agent "$header" -i "$stream" -c:v copy -c:a copy -sn -hls_flags delete_segments -hls_time 20 -hls_list_size 360 -hls_wrap 0 -use_localtime 0 "/storage/disk2/re/index.m3u8"
[14:48:34 CEST] <sopparus> is the command im using
[14:49:04 CEST] <sopparus> version is 4.2
[14:52:22 CEST] <sopparus> eh, bad paste
[14:52:26 CEST] <sopparus> video:477kB audio:194kB subtitle:0kB other streams:0kB global headers:0kB
[14:52:26 CEST] <sopparus> muxing overhead: unknown
[14:52:38 CEST] <sopparus> is the last output I see before ffmpeg exits
[14:52:59 CEST] <extrowerk> Hi, i am with HaikuPorts here, and i have some questions:
[14:53:15 CEST] <extrowerk> We have a working ffmpeg 4.2 port, everything is fine
[14:53:48 CEST] <extrowerk> We have enabled gnutls so smplayer is able to play youtube videos and things like that
[14:54:20 CEST] <extrowerk> but haiku uses openssl, and we would like to switch to openssl, but the configure script says --enable-nonfree is required for openssl.
[14:54:51 CEST] <extrowerk> The question is: if i pass --enable-openssl --enable-nonfree would it enable other things too, or only openssl?
[14:55:36 CEST] <JEEB> enable-nonfree generally leads to binary redistribution problems
[14:55:49 CEST] <JEEB> it actually means "not compatible with (L)GPL"
[14:55:57 CEST] <JEEB> >license="nonfree and unredistributable"
[14:56:14 CEST] <JEEB> the naming could be better, but that's what it means
[14:56:30 CEST] <JEEB> generally it means that GPL licensed stuff was enabled
[14:56:46 CEST] <JEEB> and now you are trying to utilize openssl which is IIRC not compatible with GPL
[14:57:04 CEST] <JEEB> I think most recent openssl might have done something about this, but I think unfortunately this might still be the case
[14:57:32 CEST] <JEEB> of course if you always build locally instead of grabbing binaries, then this might be less of a problem
[14:57:59 CEST] <JEEB> extrowerk: in other things, if you are afraid of any autodetection try using --disable-autodetect
[14:58:15 CEST] <JEEB> it doesn't disable everything (some things deemed "OS" libraries are still autodetected)
[14:58:33 CEST] <JEEB> but it will make you actually request additional external features
[14:58:47 CEST] <JEEB> and --enable-nonfree will set the license for the whole binary that you are building
[15:00:06 CEST] <pink_mist> extrowerk: it would only enable openssl, the only thing enable-nonfree does is change the licensing terms - you now can't redistribute the binary
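The configure trade-off described above, as a sketch (flag combinations only; the last variant is hypothetical and assumes configure is eventually taught that OpenSSL 3.0.0+ is Apache-2.0 licensed):

```shell
# GPL-compatible build with gnutls for https: redistributable
./configure --disable-autodetect --enable-gpl --enable-gnutls

# openssl (pre-3.0 license) taints the binary: local use only, not redistributable
./configure --disable-autodetect --enable-gpl --enable-openssl --enable-nonfree

# hypothetical: with an Apache-2.0 openssl (3.0.0+) recognised by configure,
# --enable-version3 (GPLv3/LGPLv3) should suffice instead of --enable-nonfree
./configure --disable-autodetect --enable-gpl --enable-version3 --enable-openssl
```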
[15:00:48 CEST] <extrowerk> we definitely want to distribute the generated binaries
[15:01:02 CEST] <pink_mist> then you cannot enable nonfree
[15:01:11 CEST] <extrowerk> no fear from autodetection as we build things in chroot
[15:01:53 CEST] <extrowerk> ok, thanks you for the information
[15:03:39 CEST] <JEEB> extrowerk: do note that if you think that the openssl license has been updated you can check that and request a change in our configure behavior
[15:04:01 CEST] <JEEB> but currently the configure script follows what was valid for the longest time - it's not compatible with GPL :)
[15:04:47 CEST] <extrowerk> JEEB: thank you, I'll make a mental note of it.
[15:06:17 CEST] <JEEB> although wait
[15:06:36 CEST] <JEEB> extrowerk: on haiku, is openssl "part of the OS"? I wonder how macOS handles that
[15:06:44 CEST] <JEEB> (that first one was a question)
[15:09:17 CEST] <extrowerk> it is a shared lib
[15:10:10 CEST] <extrowerk> not a deeply integrated part, but haiku needs an ssl implementation to support ssl
[15:10:30 CEST] <extrowerk> but you can build haiku without any.
[15:12:54 CEST] <JEEB> yea, I think you will want to first check the current license of openssl, and if that still is the one incompatible with GPL you can see if the GPL thing about "system libraries" might be applicable in your case
[15:13:03 CEST] <JEEB> but then again, how do you define "part of OS"
[15:13:08 CEST] <JEEB> if you have a package manager
[15:13:26 CEST] <JEEB> anyways, quickly checking if there's macos specific stuff for openssl
[15:13:33 CEST] <JEEB> although I think macOS just has its own schannel
[15:13:42 CEST] <extrowerk> we have package manager,
[15:14:04 CEST] <JEEB> ah no it was securetransport
[15:14:08 CEST] <JEEB> that was the macOS thing
[15:14:26 CEST] <JEEB> but yea, check the license of openssl if it's still incompatible with GPL
[15:14:49 CEST] <JEEB> if it is no more, you could start poking us for "please make newer versions of openssl no longer require enable-nonfree"
[15:17:07 CEST] <pink_mist> it looks to me like openssl is currently licensed under the apache license 2.0 ... which I _believe_ should not require --enable-nonfree
[15:17:53 CEST] <JEEB> yea, that should only require version3
[15:18:11 CEST] <pink_mist> yeah
[15:21:32 CEST] <extrowerk> JEEB: english is not my mother tongue, and i wouldn't like to be involved in licensing stuff. From my personal viewpoint, what is not MIT/BSD or public domain is not open source, but that's a private opinion. Will share the gathered info with the haiku devs and they will decide
[15:22:16 CEST] <pink_mist> https://github.com/openssl/openssl/commits/master/LICENSE <-- seems this happened in december 2018, so if your openssl is from before then, you might still need to --enable-nonfree
[15:25:39 CEST] <JEEB> extrowerk: yea I didn't want you to start reading into it that way. just checking if the license was still the old OpenSSL license
[15:25:52 CEST] <JEEB> as pink_mist checked, it seems like they have an apache v2 alternative now
[15:26:05 CEST] <JEEB> which is not compatible with GPLv2, but it is compatible with v3
[15:26:17 CEST] <pink_mist> it's not an alternative - the license has been completely replaced by apache v2
[15:26:26 CEST] <JEEB> oh
[15:26:41 CEST] <JEEB> that makes it much simpler as long as the version is known when it switched
[15:26:44 CEST] <JEEB> that way it could be checked for
[15:26:50 CEST] <pink_mist> yup
[15:27:47 CEST] <extrowerk> JEEB: our current openssl is 1.0.2s, which released in may 2019, so it should be fine
[15:28:16 CEST] <JEEB> https://www.openssl.org/source/license.html
[15:28:28 CEST] <JEEB> according to openssl, only 3.0.0 release is apache v2
[15:28:39 CEST] <JEEB> 1.x is still openssl
[15:28:51 CEST] <JEEB> so 1.x is still not compatible with GPL
[15:29:04 CEST] <extrowerk> Meh
[15:30:28 CEST] <pink_mist> yeah, just checked the commits that went into the latest 1.0.2 and 1.1.0 and 1.1.1 branches, and none of them had the new license
[15:30:29 CEST] <extrowerk> well, we probably could switch to openssl3, but i have to talk about it with the fellow devs
[15:30:32 CEST] <extrowerk> thanks for the info
[15:31:37 CEST] <pink_mist> 3.0.0 hasn't been released yet afaik - they're referring to current git master
[15:32:30 CEST] <extrowerk> guys, i can't seem to find any openssl3, are you guys sure about the version number?
[15:32:56 CEST] <pink_mist> read what I just said
[15:33:22 CEST] <extrowerk> ah, ok, thanks
[15:34:16 CEST] <extrowerk> gathered the required info and shared with the team, lets see what they think
[15:34:22 CEST] <extrowerk> thank you guys
[15:35:22 CEST] <pink_mist> I'm an ffmpeg packager myself, so I like to be on top of this stuff too =)
[15:37:24 CEST] <pink_mist> unfortunately, libressl seems to still be using the openssl and ssleay dual license for the code they inherited from openssl
[16:48:05 CEST] <marso> I'm using ffmpeg and ffplay for a point-to-point audio stream on a local network and I'm looking to get as close to 0 latency as possible.  The audio source is direct capture of a headset microphone.  Which ffmpeg/ffplay options have the greatest effect on latency reduction for this use case?
[16:52:36 CEST] <BtbN> It's almost impossible to get zero latency without a custom application that eliminates all buffer
[16:52:47 CEST] <BtbN> the generic ffmpeg.c code favors stability over latency
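While true zero latency needs a custom application, the usual CLI knobs for a low-latency point-to-point sketch look roughly like this (the capture device, address, and Opus settings are assumptions to adapt; Opus-in-MPEG-TS needs a reasonably recent ffmpeg):

```shell
# Sender: capture the mic (ALSA shown; use -f dshow on Windows), encode with
# Opus in low-delay mode, ship over UDP
ffmpeg -f alsa -i default -c:a libopus -application lowdelay -frame_duration 2.5 \
       -f mpegts udp://192.168.1.20:5004

# Receiver: strip ffplay's input buffering as far as it will go
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 \
       udp://0.0.0.0:5004
```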
[20:20:10 CEST] <thnee> When using ffprobe normally, it shows this line which includes the fps: Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, 435 kb/s, 24 fps, 30 tbr, 600 tbn, 1200 tbc (default)
[20:21:40 CEST] <thnee> But when running with "-print_format json -show_format -show_streams" (and only using the stdout ofcourse), there is no 24fps to be seen anywhere. The closest I can find is r_frame_rate, but it says "30/1"?
[20:22:24 CEST] <thnee> Why is this information not included in the json output? I was kinda expecting it to be the same data, just in a different format?
[20:26:13 CEST] <JEEB> it should be the same data
[20:26:37 CEST] <JEEB> not sure if you get that value without initializing the decoder context, but I am not sure how much I would trust it
[20:26:45 CEST] <JEEB> since you have a time base of 30/1 supposedly
[20:27:02 CEST] <JEEB> you can get the timestamps for frames with -show_packets
[20:27:06 CEST] <JEEB> before decoding that is
[20:30:16 CEST] <thnee> JEEB: -show_packets basically means count all the frames myself? That seems excessive.
[20:30:32 CEST] <JEEB> then you can yourself decide how much you want to calculate
[20:30:48 CEST] <JEEB> if you're OK with coming up with some sort of average from the first N frames
[20:30:49 CEST] <JEEB> etc
[20:30:55 CEST] <JEEB> you get the time bases and timestamps
[20:30:59 CEST] <thnee> It just seems odd to me that it doesnt show the 24 fps when using -print_format json -show_format -show_streams (and no other arguments)
[20:31:10 CEST] <JEEB> probably because that might come from decoder context
[20:31:17 CEST] <JEEB> if I had my development VM open I could check :P
[20:31:25 CEST] <JEEB> if you don't start decoding you have no decoder context
[20:31:57 CEST] <thnee> Could you please elaborate on that? What does ffprobe do differently to initialize a decoder context when its not being told to output as json?
[20:32:30 CEST] <kepstin> i'd expect that field in the log to correspond to avg_frame_rate in the json, but i can't actually remember what that log message prints.
[20:33:13 CEST] Action: JEEB boots up his VM to be more useful
[20:35:09 CEST] <thnee> kepstin: Ah yes, avg_frame_rate is 43470/1811 which comes out to 24. Hmm ok interesting :)
[20:35:45 CEST] <kepstin> line 515 in https://www.ffmpeg.org/doxygen/trunk/dump_8c_source.html#l00457 is where that's printed
[20:36:13 CEST] <kepstin> that log output shows avg_frame_rate if present, otherwise r_frame_rate, otherwise the container time base inverted, otherwise the codec time base inverted.
[20:36:30 CEST] <kepstin> all those info fields should be in the json output.
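Those fields can be queried directly with ffprobe's JSON output; avg_frame_rate comes back as a ratio string, which is easy to turn into fps (input.mp4 is a placeholder; the ffprobe call is guarded so the sketch degrades gracefully where ffprobe is absent):

```shell
# Ask only for the frame-rate fields of the first video stream
command -v ffprobe >/dev/null && \
  ffprobe -v error -select_streams v:0 \
          -show_entries stream=avg_frame_rate,r_frame_rate \
          -of json input.mp4 || true

# avg_frame_rate is a ratio, e.g. "43470/1811"; divide to get fps
echo '43470/1811' | awk -F/ '{printf "%.2f\n", $1/$2}'   # prints 24.00
```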
[20:36:33 CEST] <JEEB> yes
[20:37:09 CEST] <thnee> Thanks for that! The reason I want the frame rate is to calculate the GOP size for a conversion to HLS. I am thinking of setting -hls_time to 2 and then setting -g and -keyint_min to 2 * frame_rate. Does that make sense?
[20:39:38 CEST] <kepstin> thnee: for constant framerate content, yeah. you might want to add something (fps filter or -r output option) to enforce that your output is cfr, since ffprobe can only guess in some cases.
[20:40:59 CEST] <thnee> How do I know if it is CFR to begin with? -_-
[20:42:29 CEST] <kepstin> only way to know for sure is by looking at each frame pts and seeing if they go up by the same amount (+- a bit to account for rounding errors) each frame :/
[20:43:16 CEST] <kepstin> in practice, most professionally produced video will be cfr, most webcam and cellphone video will not be.
[20:44:13 CEST] <thnee> I see. All my content will be from smart phones, ios and android. So if it's not CFR, the hls_time logic that I said is no good?
[20:44:38 CEST] <JEEB> yea mobile video generally tends to be VFR due to the cameras not pushing out pictures at a stable rate
[20:44:40 CEST] <kepstin> yeah, since depending on light conditions cell phones will change the framerate over time
[20:45:12 CEST] <JEEB> the hls_time logic should be OK since that's time based I guess
[20:45:30 CEST] <JEEB> the GOPs will of course become shorter/longer in time
[20:45:38 CEST] <kepstin> the problem is that if framerate changes, then the keyint based on framerate will no longer line up
[20:45:45 CEST] <thnee> Apparently this is important when doing multi bitrate HLS so that the segments sync up, I read somewhere.
[20:46:30 CEST] <JEEB> thnee: not really true. there are some really crappy clients but while looking at android (exoplayer etc) and iOS you definitely don't need to even match GOPs between profiles :P
[20:46:32 CEST] <thnee> Ah ok, so the gops will be different, but still sync across the different qualities?
[20:46:47 CEST] <JEEB> but of course it makes some things simpler in some implementations if you match GOPs
[20:47:02 CEST] <JEEB> but for making sure GOPs match there's two ways of doing that
[20:47:21 CEST] <JEEB> 1. you do a pre-pass to figure out where the keyframe points are (or use one of the encoders as your "master")
[20:47:44 CEST] <JEEB> 2. set some GOP length with -g XXX and then -x264-params scenecut=0
[20:47:44 CEST] <JEEB> :P
[20:47:53 CEST] <JEEB> which disables dynamic scenecuts
[20:48:03 CEST] <kepstin> a fixed keyframe interval should work fine, it just means that your hls segments won't be fixed length with vfr.
[20:48:05 CEST] <JEEB> of course not optimal which is why 1. is recommended
[20:48:56 CEST] <thnee> Yeah I am doing -hls_time, -keyint_min and -sc_threshold 0. Hm ok thanks a lot, prepass sounds nicer, less hard coded logic on my part.
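Putting the segment/GOP arithmetic together for CFR input (a sketch; FPS comes from whatever ffprobe reported, filenames are placeholders, and the ffmpeg line is guarded so the arithmetic runs anywhere):

```shell
FPS=24              # from ffprobe's avg_frame_rate
SEG=2               # target HLS segment length in seconds
GOP=$((FPS * SEG))  # 48: exactly one keyframe per segment

command -v ffmpeg >/dev/null && \
  ffmpeg -i input.mp4 \
         -c:v libx264 -g "$GOP" -keyint_min "$GOP" -sc_threshold 0 \
         -c:a aac -f hls -hls_time "$SEG" -hls_playlist_type vod out.m3u8 || true
```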
[20:50:58 CEST] <thnee> So is that the -pass flag you are referring to?
[20:55:26 CEST] <JEEB> not really, but I guess you could generate a frame type file with that
[20:56:04 CEST] <thnee> Ok so what did you mean by pre-pass?
[20:56:26 CEST] <JEEB> generally people run something like xvid's (yes, lol) keyframe decision algorithm through a clip, and then generate a keyframe list
[20:56:31 CEST] <thnee> This is what my current code is https://dpaste.de/LEWO  Insert dog-in-lab-coat.jpg here
[20:56:49 CEST] <JEEB> scxvid it used to be called I think
[20:57:01 CEST] <JEEB> but anyways, not sure how easily that can be done with just the command line app :P
[20:57:10 CEST] <JEEB> most people start using the API at some point
[20:58:19 CEST] <thnee> Using libav? Yeah I have seriously considered it, but it seems like a big step. But that would make this problem easier you say?
[20:59:13 CEST] <JEEB> depending on your needs.
[20:59:48 CEST] <JEEB> in theory I guess you could see if you could just run ffmpeg.c for decoding and pipe the decoded video to scxvid and get a keyframe file :P
[20:59:55 CEST] <JEEB> then feed that to all encoders
[21:01:52 CEST] <durandal_1707> what?
[21:02:12 CEST] <durandal_1707> extracting scene changes or?
[21:02:16 CEST] <JEEB> yes
[21:02:33 CEST] <durandal_1707> just single frame?
[21:02:49 CEST] <JEEB> a full keyframe list based on something like xvid's scenecut
[21:03:00 CEST] <JEEB> I don't know if the algorithm is actually good, but it nicely matches actual scenecuts
[21:03:03 CEST] <JEEB> unlike x264's
[21:03:50 CEST] <durandal_1707> isn't that the same as the select filter and its scenechange feature?
[21:04:07 CEST] <JEEB> could be, can you extract a list nicely out of it into a file you can throw into x264?
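One way that could work with just the CLI, sketched (the 0.3 threshold and the example timestamps are illustrative and would come from the first pass in practice):

```shell
# Pass 1: dump frames the scene-change detector flags, with their pts_time,
# into scenes.txt
ffmpeg -i input.mp4 -vf "select='gt(scene,0.3)',metadata=print:file=scenes.txt" \
       -an -f null -

# Pass 2: feed the collected timestamps to every rendition so keyframes line up
ffmpeg -i input.mp4 -c:v libx264 -force_key_frames "0,12.5,31.7" -c:a copy rendition.mp4
```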
[21:04:28 CEST] <thnee> But then again, youre saying this doesnt matter for HLS. The segments can actually be out of sync and it will work fine in modern players?
[21:04:34 CEST] <JEEB> yes
[21:04:35 CEST] <thnee> Would the same be true for dash as well?
[21:05:06 CEST] <JEEB> yes, pretty sure each representation can have its own Media timeline
[21:05:20 CEST] <JEEB> I've so far found one DASH implementation that sucked at that
[21:05:29 CEST] <JEEB> and that was the Microsoft PlayReady SDK on iOS
[21:05:34 CEST] <JEEB> and I have no idea why you'd use that
[21:07:05 CEST] <JEEB> none of the specs ever required the keyframes to match as far as I can tell, and for a reason. I've also heard this recommendation a *lot* but in the end I've only found a few devices that really require it (And they're never the major ones, thankfully)
[21:07:49 CEST] <thnee> Hm allright sounds very promising indeed. Thank you kindly!
[21:08:29 CEST] <JEEB> of course if you hit one of those devices (usually TVs [samsung 2015 tizen comes to mind], PlayReady DASH on iOS), then you know what to do. Or you just give those a blank stare :P
[21:08:50 CEST] <JEEB> or work around in another way like only giving some clients a single rendition to play with
[00:00:00 CEST] --- Tue Aug 27 2019


More information about the Ffmpeg-devel-irc mailing list