[Ffmpeg-devel-irc] ffmpeg.log.20190831

burek burek at teamnet.rs
Sun Sep 1 03:05:04 EEST 2019


[01:17:13 CEST] <kevinnn> does anyone have any suggestions as to how to make x264 compress quicker?
[01:17:29 CEST] <kevinnn> Here's the tunings I currently have: https://pastebin.com/KecTc1Js
[01:18:42 CEST] <BtbN> use a faster preset
[01:19:06 CEST] <kevinnn> I have tuned ultrafast
[01:19:09 CEST] <kevinnn> as you can see
[01:19:21 CEST] <kevinnn> I am trying to encode at 60fps
[01:19:32 CEST] <kevinnn> but the encoder is barely able to encode at 60fps
[01:19:52 CEST] <kevinnn> often taking anywhere from 10ms to 20ms
[01:24:40 CEST] <kevinnn> x264 is also using a fairly high amount of CPU as well, pretty much putting all 4 of my cores at 50%. any way to reduce this?
[01:39:31 CEST] <furq> kevinnn: do you actually need zerolatency
[01:42:25 CEST] <furq> i guess from intra refresh and the fact that you're measuring frame timings that you do
[01:45:23 CEST] <kevinnn> furq: yes it is a live stream
[01:45:30 CEST] <kevinnn> sorry for taking a few minutes to reply
[01:45:41 CEST] <kevinnn> so I really need this to be live
[01:46:04 CEST] <kevinnn> I am willing to sacrifice on quality big time if it'll save me CPU
[01:46:14 CEST] <kevinnn> can't sacrifice on FPS...
[01:46:30 CEST] <kevinnn> also I don't control the resolution of the images coming in
[01:50:03 CEST] <furq> there's not really any more you can do to sacrifice quality
[01:50:15 CEST] <furq> you could probably bump the thread count though
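A sketch of the kind of invocation being discussed (ultrafast + zerolatency with intra refresh, plus furq's thread suggestion); the input source, resolution, frame rate, and bitrate below are placeholders, not values from kevinnn's paste:

```sh
# Hypothetical low-latency live encode; raw frames on stdin are a placeholder input.
ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 1280x720 -framerate 60 -i - \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -x264-params "intra-refresh=1:sliced-threads=1:threads=4" \
       -b:v 2500k -f mpegts out.ts
```

sliced-threads favours per-frame latency over compression efficiency; raising or lowering threads trades encode speed against CPU headroom.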
[03:30:56 CEST] <fling> How to buffer for a second before the output?
[05:16:43 CEST] <kepstin> kevinnn: your use case is a good candidate for a hardware encoder if you have one on the system (intel quicksync? nvidia gpu?)
[05:27:27 CEST] <fling> How do I read a pipe from gstreamer?
[05:27:48 CEST] <ossifrage> mp4 is a really annoying format, not being able to play a stream that is currently being encoded
[05:28:00 CEST] <fling> ossifrage: try nut
[05:28:07 CEST] <fling> ossifrage: you can also have multiple outputs
[05:28:33 CEST] <ossifrage> nut?
[05:28:58 CEST] <fling> nut
[05:29:07 CEST] <ossifrage> what is nut?
[05:29:22 CEST] <ossifrage> (google was not helpful)
[05:29:39 CEST] <fling> ffmpeg -f whatever … -c:v something /where/is/your/output.nut
[05:29:47 CEST] <fling> ossifrage: just another container
[05:29:50 CEST] <ossifrage> ah
[05:29:56 CEST] <fling> ossifrage: also survives out of space conditions :P
[05:32:44 CEST] <furq> ossifrage: https://ffmpeg.org/nut.html
[05:33:40 CEST] <ossifrage> I guess nut could be useful for playing, but sadly for what I'm trying to do I'm stuck with mp4
[05:34:02 CEST] <fling> ossifrage: what are you trying to do?
[05:34:05 CEST] <furq> you can just remux to mp4 when you're done
[05:34:29 CEST] <fling> ossifrage: also you can have the second nut pipe output to ffplay
[05:34:35 CEST] <fling> ossifrage: to be able to watch what you are recording
[05:36:32 CEST] <ossifrage> okay nut is cool: " File is apparently being appended to, will keep retrying with timeouts."
[05:38:55 CEST] <ossifrage> fling, the end result needs to be playable in a web browser without help, so h.264+mp4 is pretty much the only option
[05:39:36 CEST] <fling> >> < ossifrage> mp4 is a really annoying format, not being able to play a stream that is currently being encoded
[05:39:48 CEST] <fling> just add the second nut output and pipe it to ffplay
[05:39:59 CEST] <fling> to be able to play it
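fling's two-output suggestion might look something like this (input and encoder settings are placeholders); because the preview goes through a pipe, closing ffplay also ends the ffmpeg process:

```sh
# One ffmpeg, two outputs: a file plus a NUT stream piped to ffplay for preview.
ffmpeg -i input.ts \
       -c:v libx264 recording.mkv \
       -c copy -f nut - | ffplay -f nut -
```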
[05:40:53 CEST] <furq> that's not really ideal because that'll kill the whole process if you close the player
[05:41:02 CEST] <ossifrage> (this is kinda funny to watch, the input is 4fps, the output is 30fps, but it is being heavily buffered, so it has this neat start/stop playback)
[05:41:54 CEST] <furq> with that said if this is a live input then using mp4 will also give you an unplayable file if anything goes wrong
[05:42:01 CEST] <furq> so you should really use something else and remux
[05:42:24 CEST] <furq> (this is true for all inputs but presumably if an encode of a file source drops out you'd just start again)
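A minimal sketch of the record-then-remux approach furq describes, assuming a hypothetical live input URL; both steps stream-copy, so the final mp4 is a lossless remux:

```sh
# Record to Matroska, which stays readable even if the process dies mid-write.
ffmpeg -i rtsp://camera.local/stream -c copy recording.mkv

# When the recording is done, remux (no re-encode) into mp4 for browsers.
ffmpeg -i recording.mkv -c copy -movflags +faststart final.mp4
```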
[05:43:37 CEST] <ossifrage> furq yeah it is coming off a live camera (I was playing with making a timelapse) but if do it for real I'll be doing it in C not in the shell
[05:44:04 CEST] <furq> you'd still have the same problem if you're using lavf
[05:46:46 CEST] <ossifrage> Yeah, it is the silly stuff in this project that has been hard. Dealing with the hardware is easy, multiplexing the video and getting it to play back in a browser has been a pain in the ass
[05:47:58 CEST] <fling> three live inputs/outputs did not work fine with ffmpeg so I tried to use gstreamer instead
[05:48:08 CEST] <fling> but with gstreamer I'm getting fps spikes on the output
[05:48:21 CEST] <fling> it is playing at 1000fps then 0fps for a couple of seconds
[05:49:49 CEST] <fling> The idea is to pipe it via ffmpeg to fix fps :P
[05:53:03 CEST] <fling> So how do I buffer the input for a second before starting the output?
[05:54:00 CEST] <ossifrage> is there a way to force the file input to retry with timeouts? When I was encoding at 4fps mpv/ffmpeg would detect the file being appended, but at 1fps it doesn't
[05:54:33 CEST] <fling> ossifrage: what are you trying to do? :P
[05:54:44 CEST] <fling> maybe use pipe instead of the file?
[05:55:48 CEST] <ossifrage> fling, I was amazed it magically detected that the file was being appended to (input 4fps, output 30fps) but at 1fps the time between writes was too long for it to detect the append
[05:56:27 CEST] <ossifrage> (there is a follow option)
[06:02:03 CEST] <ossifrage> Ah, it gets upset near the live edge: "Last frame must have been damaged 36438323 > 36385256 + 32767"
[06:03:43 CEST] <fling> named pipe
[06:08:33 CEST] <ossifrage> fling, got it to work with 'ffplay -follow 1 blah.nut'
[06:09:13 CEST] <ossifrage> the data is safe on disk and I get to watch it consume frames
[06:09:34 CEST] <fling> ok
[06:14:01 CEST] <ossifrage> at 0.25fps, the internal ffmpeg buffer is really large
[06:32:30 CEST] <fling> Is not there a workaround for three or more live inputs/output?
[10:40:33 CEST] <fling> How to pipe from gstreamer properly?
[10:40:47 CEST] <fling> Can I tell it to buffer the input for a second before doing output?
[11:16:58 CEST] <durandal_1707> this is not gstreamer support channel
[13:54:46 CEST] <richar_d> does anyone know what options to pass to ffmpeg to output iOS-compatible video?
[13:55:10 CEST] <BtbN> define "iOS Compatible video".
[13:56:04 CEST] <richar_d> video that plays on recent Apple devices
[13:56:15 CEST] <BtbN> And what constraints does that impose?
[13:56:23 CEST] <BtbN> Does it not play any standard formats?
[14:02:25 CEST] <richar_d> they support, among other formats, MPEG-4, YUV420p, AVC, Main L3.1, and I have a command that outputs videos that meet this specification, but I remember having some problems playing them when I last attempted this a few months ago. I was hoping someone else had done the hard work…
[14:05:27 CEST] <BtbN> I'd expect them to play any standard mp4
[14:05:32 CEST] <BtbN> if not, they're not worth their money really
[14:06:43 CEST] <pink_mist> well they aren't worth their money
[14:06:54 CEST] <pink_mist> half the price and they'd be much closer
[14:09:24 CEST] <JEEB> richar_d: the container depends on what you're trying to do (streaming, files stuck on the device), but generally any modern iOS device since 3GS or whatever it was should  more or less take as video H.264 high or main profile, level 4.1 with any sane resolution. and then audio is AAC.
[14:09:31 CEST] <JEEB> it's not really harder than that :P
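One hedged reading of JEEB's recipe; the input name, audio bitrate, and faststart flag are assumptions added here, not part of his description:

```sh
# H.264 high profile @ level 4.1, yuv420p, AAC audio, in an mp4 container.
ffmpeg -i input.mkv \
       -c:v libx264 -profile:v high -level:v 4.1 -pix_fmt yuv420p \
       -c:a aac -b:a 160k \
       -movflags +faststart output.mp4
```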
[15:32:04 CEST] <mknod> Hello.
[15:32:08 CEST] <mknod> cat <(ffmpeg -i music.m4a -f singlejpeg -) > /dev/null
[15:32:25 CEST] <mknod> Does anyone know why ffmpeg hangs?
[15:44:14 CEST] <durandal_1707> mknod: from where you got that line?
[15:44:57 CEST] <mknod> durandal_1707, I wrote it myself. Why I need or want to use this syntax is not really relevant though... Since this is just supposed to work :)
[15:45:50 CEST] <mknod> this specific line is just for illustration
[15:46:10 CEST] <durandal_1707> this is the first time I've seen that
[15:46:51 CEST] <durandal_1707> which shell is that?
[15:47:04 CEST] <mknod> Bash
[15:47:17 CEST] <durandal_1707> and what are you trying to accomplish?
[15:48:07 CEST] <durandal_1707> -f singlejpeg muxer does not exist
[15:48:14 CEST] <mknod> avoiding to write a temporary file by using a process substitution instead
[15:48:15 CEST] <mknod> https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Process-Substitution
[15:48:48 CEST] <mknod> durandal_1707, it's not my problem. `ffmpeg -i music.m4a -f singlejpeg - > /dev/null` does work.
[15:55:58 CEST] <durandal_1707> doesnt hang here
[15:56:33 CEST] <mknod> durandal_1707, I got it. It's somehow waiting for user input on stdin
[15:56:51 CEST] <mknod> so feeding stdin with /dev/null fixed it
[15:57:16 CEST] <JEEB> durandal_1707: it does funny enough. see -formats |grep "jpeg"
[15:58:11 CEST] <JEEB> mknod: stderr should contain anything ffmpeg.c outputs and I can't see in that limited command line anything that would make it request stdin input o_O
[15:58:39 CEST] <durandal_1707> JEEB: committer is guess who, it's a silly change for a different mime type
[15:59:02 CEST] <JEEB> I know that if you were to have a normal file as output it would ask if you want to rewrite it, but I don't see any other reason for it to require input o_O
[15:59:08 CEST] <JEEB> "-" is stdout after all
[16:00:21 CEST] <mknod> JEEB, I'm guessing it's because you can interact with it using keyboard inputs?
[16:00:55 CEST] <JEEB> mknod: you technically can with stuff like left/right arrow to change the log level, but it shouldn't *require* any of that
[16:01:33 CEST] <JEEB> I've got plenty of shell scripts where I set output to stdout just to make sure FFmpeg can't use any tricks it could utilize when having something seekable written
[16:01:44 CEST] <JEEB> and it never required a forced input from the shell
[16:02:14 CEST] <mknod> just running ffmpeg in the background results in the same weird behavior
[16:02:24 CEST] <mknod> ffmpeg -i music.m4a -f singlejpeg - > /dev/null &
[16:02:29 CEST] <mknod> ... hangs forever
[16:02:42 CEST] <JEEB> if true sounds like a boog in ffmpeg.c
[16:03:05 CEST] <JEEB> I wonder if there's an option to disable the key inputs if that really is what's blocking it
[16:03:10 CEST] <mknod> I swear it's true
[16:03:37 CEST] <JEEB> ah
[16:03:39 CEST] <JEEB> -stdin 0
[16:03:42 CEST] <JEEB> try that
[16:03:55 CEST] <JEEB> that seems to control the boolean stdin_interaction
[16:04:17 CEST] <JEEB> which then controls calls to "check_keyboard_interaction"
[16:05:55 CEST] <mknod> seems promising especially after reading the manpage but, same issue
[16:06:33 CEST] <mknod> and they do mention < /dev/null as an alternative
[16:06:40 CEST] <TheDrode> Morning!
[16:06:43 CEST] <JEEB> lol
[16:07:12 CEST] <JEEB> anyways, you can check if there's an issue already on trac regarding this. not sure how much one can do about it and which part is then causing it :P
[16:08:15 CEST] <mknod> JEEB, let's consider this solved for now, I seem to have another problem
[16:09:25 CEST] <mknod> ffmpeg -loop 1 -framerate 2 -i <(ffmpeg -i "$file_input" -f singlejpeg - < /dev/null) -i "$file_input" -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a copy -shortest -pix_fmt yuv420p "$file_output"
[16:09:36 CEST] <mknod> this is the full command
[16:09:58 CEST] <mknod> I can't figure out how to force the input format to JPEG
[16:10:54 CEST] <JEEB> could it be that you want to just get the embedded image out?
[16:11:02 CEST] <mknod> to a temporary file?
[16:11:09 CEST] <JEEB> no, I mean in general
[16:11:09 CEST] <mknod> please, no!
[16:11:32 CEST] <JEEB> like if you are taking in a file re-encoding the video track into it extracting it as a JPEG
[16:11:39 CEST] <JEEB> *-extracting it
[16:11:41 CEST] <TheDrode> I am still trying to get the colable5011 device to work, the video is mostly clean but I keep getting these artifacts sometimes, this is the command:
[16:11:42 CEST] <TheDrode> https://anonymousfiles.io/yKP15lF4/
[16:11:45 CEST] <JEEB> because that's not extraction but re-encoding
[16:11:53 CEST] <TheDrode> ffmpeg -loglevel debug -f hls -re -i  http://192.168.201.3:5080/LiveApp/streams/005903764339232681635560.m3u8 -vcodec copy -s 1920x1080 -deinterlace  -tune zerolatency -acodec mp2 -ab 64k -ac 1 -ar 44100 -f rtp_mpegts -fec prompeg=l=10:d=10 -r 23 rtp://10.10.13.3:1001
[16:11:57 CEST] <mknod> JEEB, the input file is an audio file
[16:11:59 CEST] <TheDrode> nothing weird on logfile
[16:12:05 CEST] <JEEB> mknod: I could gather that
[16:12:46 CEST] <mknod> I may have missed your point
[16:13:07 CEST] <JEEB> but then you are making a video out of it, and you want that to be JPEG? at that point I'm starting to question if you actually needed to re-encode it
[16:13:23 CEST] <JEEB> thus I'm trying to gather what the overall thing you're trying to do is
[16:13:24 CEST] <mknod> JEEB, no, the video is mkv
[16:14:14 CEST] <mknod> basically I'm trying to convert a properly tagged audio file into a youtube friendly video, using the audio AND the cover image from the input file
[16:14:27 CEST] <JEEB> ok, so in that case you do not want JPEG for the output of the video
[16:14:44 CEST] <mknod> the output of the video is mkv
[16:14:45 CEST] <JEEB> also if you are making a *video* with more than one frame, stillvideo will not be to your liking I think
[16:14:56 CEST] <mknod> file_output="${meta_artist} - ${meta_title}.mkv"
[16:14:59 CEST] <mknod> ^ this
[16:15:00 CEST] <JEEB> although I'm not sure if stillvideo sets keyint to 1
[16:15:40 CEST] <JEEB> mknod: ok so what you want is for the image to be repeated at a certain frame rate until the audio finishes
[16:16:17 CEST] <mknod> JEEB, I used an example from the ffmpeg website as a starting point, and this does work if I provide a regular JPEG file
[16:16:37 CEST] <mknod> me being too lazy to use an intermediate temporary file is the problem here
[16:16:59 CEST] <JEEB> I'm not fully understanding why you just don't map the "video" stream from the audio file to begin with
[16:17:14 CEST] <JEEB> why teh whole subprocess and all
[16:17:29 CEST] <JEEB> even temporary files sound weird in this case
[16:17:32 CEST] <kepstin> mknod: hmm? no intermediate file needed, adding a static picture to an audio track can be done easily with a single ffmpeg command
[16:17:36 CEST] <JEEB> yea
[16:18:04 CEST] <JEEB> kepstin: if you have the filter to duplicate frames at a frame rate and all in your memory or link somewhere, please feel free
[16:18:28 CEST] <mknod> kepstin / JEEB, that's where I started: http://trac.ffmpeg.org/wiki/Encode/YouTube
[16:18:30 CEST] <JEEB> I don't remember the video filter that duplicates a frame so you can use -shortest and end when the music finishes
[16:18:51 CEST] <mknod> I'm tweaking the command from the third example
[16:19:20 CEST] <kepstin> ffmpeg -i audio.mp4 -i picture.jpg -vf loop=loop=-1:size=1:start=0 -map 0:a -map 1:v -c:a copy -shortest output.mp4
[16:19:33 CEST] <JEEB> kepstin: he has a music file with tagged picture
[16:19:45 CEST] <JEEB> so he probably wants to map the video and audio track from the audio file
[16:20:12 CEST] <kepstin> oh, does it show up as a video track in the ffprobe output on the file?
[16:20:13 CEST] <mknod> yes. I want to use the embedded image (could equally use exiftool for the job)
[16:20:25 CEST] <kepstin> if so, drop the -i picture.jpg and change the -map 1:v to -map 0:v
[16:21:47 CEST] <JEEB> also looking at the example on that trac page just a remux into mkv apparently works with just one frame in the video?
[16:22:03 CEST] <JEEB> I have no idea if that actually works with youtube, but if it does that's actually quite a bit more simple
[16:22:18 CEST] <JEEB> this example http://trac.ffmpeg.org/wiki/Encode/YouTube#Utilizingtheincludedalbumcoverart
[16:22:19 CEST] <mknod> looks like because the video file is exactly the same size as the audio file
[16:24:32 CEST] <mknod> JEEB, nice find
[16:25:12 CEST] <kepstin> note that even if that works in youtube, it's problematic in general since many players will have trouble seeking in such a file.
[16:25:24 CEST] <JEEB> kepstin: youtube does frame rate conversion anyways
[16:25:29 CEST] <JEEB> so as long as it takes that in, it should be ok
[16:27:45 CEST] <mknod> I'm still curious as to how to force the input format when providing an anonymous pipe instead of a regular file
[16:28:07 CEST] <JEEB> container format is -f before -i
[16:28:08 CEST] <mknod> such as -i <(file content is output to stdout here)
[16:28:26 CEST] <JEEB> for example -f mpegts -i -
[16:28:32 CEST] <mknod> yes, but couldn't find any working format for JPEG
[16:28:50 CEST] <JEEB> jpeg_pipe or mjpeg
[16:29:01 CEST] <mknod> "The video has failed to process. Please make sure you are uploading a supported file type."
[16:29:08 CEST] <JEEB> aww
[16:29:15 CEST] <JEEB> then go with the loop filter kepstin noted
[16:29:24 CEST] <JEEB> with just mapping the video and audio streams from the input file
[16:30:27 CEST] <JEEB> mknod: you can see formats by doing `ffmpeg -formats |grep jpeg` for example
[16:30:36 CEST] <mknod> JEEB, yes, I did already
[16:30:49 CEST] <mknod> and neither jpeg_pipe nor mjpeg helped
[16:30:52 CEST] <JEEB> mjpeg I think is the classic thing for raw JPEG. D means it's implemented for inputs, E for outputs
[16:31:07 CEST] <JEEB> mknod: that's really weird...
[16:34:05 CEST] <mknod> that's the current (non working) command:
[16:34:17 CEST] <mknod> ffmpeg -loop 1 -framerate 2 -f jpeg_pipe -i <(ffmpeg -i "$file_input" -f singlejpeg - < /dev/null) -i "$file_input" -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a copy -shortest -pix_fmt yuv420p "$file_output"
[16:34:35 CEST] <mknod> singlejpeg would have been more than adequate
[16:34:40 CEST] <JEEB> btw, I think you should give kepstin's last single-input alternative a go first :P
[16:34:53 CEST] <JEEB> since that definitely seems like the sane solution instead of doing a random JPEG re-encode in the middle
[16:34:55 CEST] <mknod> I failed to understand their solution
[16:35:09 CEST] Action: mknod never played with ffmpeg before
[16:35:30 CEST] <JEEB> you have a single input, with two streams. you map the video and audio. you pass the video stream into a filter that you tell to duplicate that first frame for all eternity
[16:35:44 CEST] <kepstin> here, i'll retype it all together: ffmpeg -i "$file_input" -vf loop=loop=-1:size=1:start=0 -map 0:a -map 0:v -c:a copy -shortest output.mp4
[16:36:02 CEST] <JEEB> then you map the audio stream and have that copied or re-encoded. then add -shortest to finish when the shortest stream finishes (which will be audio)
[16:36:18 CEST] <JEEB> (because the video is an eternal loop)
[16:36:23 CEST] <kepstin> and note that you should also add the encoding options (-pix_fmt yuv420p in particular)
[16:36:52 CEST] <JEEB> because while looking into the peculiarities of FFmpeg is fun, just using a single command seems like the much saner alternative to get to where you seemingly want to go
[16:37:17 CEST] <JEEB> without temporary files or using a sub-process to generate another JPEG out of the original JPEG
[16:37:47 CEST] <JEEB> (we can then look into the peculiarities if you still care after you've got your workflow rolling)
[16:37:47 CEST] <mknod> <JEEB>	you have a single input, with two streams.
[16:38:13 CEST] <mknod> are you referring to the embedded cover art as the second stream?
[16:38:23 CEST] <JEEB> yes, because FFmpeg most often exposes it so
[16:38:29 CEST] <JEEB> (with audio files)
[16:39:20 CEST] <mknod> mmhm, what an unfortunate way to handle the metadata
[16:39:33 CEST] <mknod> (which an embedded image is)
[16:40:03 CEST] <JEEB> at least we then got a field that gets transferred if the demuxer notes it's cover art
[16:40:16 CEST] <JEEB> so that applications not really interested in cover art but only actual video streams can handle it
[16:41:43 CEST] <mknod> ffmpeg -i "06 Allies.m4a" -vf loop=loop=-1:size=1:start=0 -map 0:a -map 0:v -c:a copy -shortest -pix_fmt yuv420p output.mp4
[16:41:55 CEST] <mknod> is that correct kepstin?
[16:42:54 CEST] <kepstin> something like that, yeah.
[16:43:27 CEST] <kepstin> (note that in most ffmpeg builds, mp4 defaults to using libx264 video encoder with -crf 23 -preset medium, consider adding encoder options if you want something else)
[16:43:28 CEST] <mknod> for some reason ffmpeg always complains when I require it to write a .mp4, but .mkv seems to work
[16:43:55 CEST] <JEEB> are you by chance outputting to a pipe?
[16:44:13 CEST] <mknod> nope
[16:44:26 CEST] <JEEB> then you might want to add -v verbose there and pastebin the log
[16:44:28 CEST] <JEEB> thank you
[16:44:38 CEST] <JEEB> and link the pastebin or whatever here
[16:45:41 CEST] <mknod> what verbose level would you want me to use?
[16:46:03 CEST] <JEEB> -v verbose
[16:46:10 CEST] <JEEB> debug is way too verbose and verbose adds useful info
[16:47:47 CEST] <mknod> https://termbin.com/pnq9
[16:47:53 CEST] <mknod> redirected both stderr and stdout
[16:48:00 CEST] <JEEB> ah, alac
[16:48:27 CEST] <JEEB> replace -c:a copy with -b:a 192k -c:a aac or so :)
[16:48:39 CEST] <mknod> I'm literally lost in translation
[16:49:07 CEST] <JEEB> basically the mp4 writer is crying foul when being fed alac audio
[16:49:14 CEST] <JEEB> which -c:a copy is copying from input
[16:49:27 CEST] <JEEB> not sure if that crying foul at this point is valid, but that's what you've got
[16:49:49 CEST] <JEEB> thus, switching from copying the stream to re-encoding to 192 kbps AAC
[16:49:58 CEST] <JEEB> which will go into the gaping mouth of the mp4 writer fine
[16:50:33 CEST] <JEEB> hopefully this explains what the difference is and what went wrong in that log of yours
[16:50:38 CEST] <JEEB> specifically > [mp4 @ 0x7fcf9980a800] Could not find tag for codec alac in stream #0, codec not currently supported in container
[16:50:43 CEST] <JEEB> is the log line that showed the error
[16:51:48 CEST] <JEEB> it would probably let you write the same thing from the same module by just replacing mp4 with m4a which changes the mode the mp4 writer is operating under ;)
[16:51:54 CEST] <mknod> JEEB, do you want me to provide the ALAC?
[16:51:56 CEST] <JEEB> or mov
[16:52:00 CEST] <JEEB> mknod: no need for that
[16:52:19 CEST] <mknod> ffmpeg -v verbose -i "06 Allies.m4a" -vf loop=loop=-1:size=1:start=0 -map 0:a -map 0:v -b:a 192k -c:a aac -shortest -pix_fmt yuv420p output.mp4
[16:52:28 CEST] <JEEB> yup
[16:52:36 CEST] <JEEB> that should go OK
[16:53:06 CEST] <mknod> https://termbin.com/o75p
[16:53:57 CEST] <JEEB> your FFmpeg seems drunk, or the error is higher there in the list
[16:53:59 CEST] <JEEB> > Could not find tag for codec h264 in stream #1, codec not currently supported in container
[16:54:03 CEST] <JEEB> "pardon my french?"
[16:54:40 CEST] <mknod> I'm french too. Maybe that's why?
[16:54:42 CEST] <JEEB> or the actual error is > Frame rate very high for a muxer not efficiently supporting it.
[16:54:59 CEST] <JEEB> which you could attempt to fix with adding fps filter at the end of the filter chain
[16:55:19 CEST] <JEEB> -vf 'loop=loop=-1:size=1:start=0,fps=24'
[16:55:25 CEST] <JEEB> something like that, I think?
[16:55:49 CEST] <JEEB> if that is valid and doesn't fix it, your FFmpeg is very, very drunk
[16:56:02 CEST] <mknod> Ok, I'm too clueless about ffmpeg and video encoding in general. I'll dig the topic more prior to trying to get a working command.
[16:56:29 CEST] <JEEB> the final error makes no sense and thus I'm trying to troubleshoot
[16:57:34 CEST] <mknod> why I considered providing you the ALAC
[16:57:50 CEST] <JEEB> I have some ALAC files and that really isn't the problem :)
[16:58:08 CEST] <JEEB> the video track is the problem - the mp4 writer for some reason is saying it can't into h264. which is ?!?!?!
[16:58:46 CEST] <JEEB> which made me think the problem was the error before that about the high frame rate that either the input or the filter module end up with (or at least high time base, which is what the error is technically about)
[16:59:02 CEST] <mknod> I've got to go but will give it another try or ten later for sure
[16:59:10 CEST] <JEEB> sure
[16:59:54 CEST] <mknod> thanks JEEB & kepstin
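Pulling this thread together, the assembled command would look roughly like the following (untested sketch; the fps=24 value and 192k bitrate come from the suggestions above, and the filenames are placeholders):

```sh
# Cover art stream looped forever, capped to 24fps; ALAC re-encoded to AAC
# since the mp4 muxer rejected it (per the error above); -shortest ends the
# video when the audio finishes.
ffmpeg -i "input.m4a" -map 0:v -map 0:a \
       -vf "loop=loop=-1:size=1:start=0,fps=24" \
       -c:v libx264 -pix_fmt yuv420p \
       -c:a aac -b:a 192k \
       -shortest output.mp4
```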
[17:03:50 CEST] <fling> Can I tell it to buffer the input for a second before doing output?
[17:04:46 CEST] <JEEB> can I ask why you are asking that? what are you attempting to do?
[17:12:48 CEST] <fling> JEEB: I'm trying to stabilize fps in a gstreamer output
[17:12:57 CEST] <fling> gstreamer -> ffmpeg -> rtmp
[17:13:53 CEST] <JEEB> stdin shouldn't find an EOF so the issue is with something else?
[17:14:50 CEST] <fling> I've not tried it yet ^
[17:15:07 CEST] <fling> Not piping from gstreamer yet I mean. Still haven't figured out how to do so properly
[17:16:09 CEST] <fling> the gstreamer issue is it playing back too fast to a live rtmp output, resulting in 1000fps spikes with a couple of 0fps seconds afterwards
[17:16:42 CEST] <JEEB> so what are you using gstreamer for?
[17:16:44 CEST] <fling> the ffmpeg issue is my lack of knowledge of how to buffer the input for a few seconds before starting to write the output
[17:17:29 CEST] <fling> I'm using gstreamer to workaround ffmpeg's single threaded muxer not working nicely with three or more live inputs/outputs
[17:17:37 CEST] <JEEB> ok
[17:17:52 CEST] <JEEB> so where is the 1000fps coming from then if it's live input?
[17:18:31 CEST] <fling> gstreamer is muxing h264 with aac encoded from pulse
[17:18:46 CEST] <fling> Not sure where the fps spikes are coming from
[17:18:52 CEST] <JEEB> what is your output container from gstreamer?
[17:18:54 CEST] <fling> file output works just fine
[17:19:03 CEST] <fling> flv for rtmp I'm having issues with
[17:25:09 CEST] <another> mknod: JEEB: the ipod muxer supports alac
[17:25:34 CEST] <JEEB> yes, but that isn't the problem at that point when mp4 is rejecting H.264 :P
[17:28:34 CEST] <another> yes. i got that. fps filter
[17:28:47 CEST] <another> however ipod doesn't support h264 :(
[17:30:34 CEST] <another> *sigh* just use matroska i guess
[17:31:52 CEST] <fling> or nut
[17:32:20 CEST] <another> does youtube support it as input?
[17:34:16 CEST] <fling> JEEB: fps is not jumping when I'm muxing h264 alone without pulse
[17:34:21 CEST] <fling> JEEB: this also works just fine with ffmpeg btw
[17:34:42 CEST] <fling> pulseaudio sucks! ;D
[17:35:19 CEST] <another> water is wet
[17:40:58 CEST] <durandal_1707> no
[17:41:09 CEST] <durandal_1707> pulseaudio rocks
[17:41:21 CEST] <durandal_1707> you know nothing
[17:41:51 CEST] <dastan> hello people
[17:47:19 CEST] Action: fling does not know a thing
[17:49:19 CEST] <qeed> the pulse of amerika
[18:05:44 CEST] <JEEB> pulseaudio is OK, but like with all APIs you need to know how to utilize it
[18:05:54 CEST] <JEEB> all audio APIs tend to be anal about some things
[18:50:48 CEST] <cpplearner> Guys, I want to save a complete list of I-frame timestamps to seek in demand. In this case, which timestamp do I need to save, pts or dts?
[18:53:55 CEST] <Mavrik> dts.
[18:54:07 CEST] <Mavrik> For decoding :)
[18:54:29 CEST] <Mavrik> I-frames will have PTS == DTS anyway
[18:54:36 CEST] <Mavrik> Otherwise they're not I-frames you need :)
[18:57:56 CEST] <DHE> I've seen i-frames with pts>dts
[19:00:16 CEST] <JEEB> yes, with b-frames  you need delay :)
[19:00:19 CEST] <JEEB> for the re-order later
[19:01:17 CEST] <cpplearner> Hmm, can I ask you what ultimately dts != pts mean? I'm a newbie in this field. =/
[19:01:47 CEST] <DHE> When you have B frames (B stands for bi-directional), frames in the order I, P, B, P will actually be written to disk in the order 1,2,4,3
[19:02:10 CEST] <DHE> so there's the decode timestamp/order and the presentation/display order
[19:02:49 CEST] <cpplearner> Oh, that's interesting. But, I-frame always holds pts=dts?
[19:05:04 CEST] <DHE> it is required that dts <= pts since you can't show a frame that isn't decoded yet. so sometimes we'll give the first I frame dts=0 even though it's shown at time 1 for ordering and delayed processing reasons
[19:09:29 CEST] <DHE> can't really say more, except I've seen x264 do it, and the x264 people know more about video encoding than I do so I trust them. :)
[19:13:07 CEST] <cpplearner> Thanks for giving me some context. =)
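A hedged ffprobe sketch for building such a keyframe index (the filename is a placeholder); keyframe packets carry a 'K' in their flags field:

```sh
# Print pts_time, dts_time and flags for each video packet, keeping only
# the keyframes (flag string starts with 'K').
ffprobe -v error -select_streams v:0 \
        -show_entries packet=pts_time,dts_time,flags \
        -of csv input.mp4 | grep ',K'
```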
[20:07:33 CEST] <TheDrode> can i add... Don't know 3 seconds buffer
[20:07:40 CEST] <TheDrode> ?
[20:49:14 CEST] <johnjay> does ffmpeg have a way to dump all keyframes from an xvid codec file?
[20:49:29 CEST] <JEEB> what sort of dumping. just the data into a JSON file or?
[20:49:36 CEST] <johnjay> just raw images
[20:49:52 CEST] <johnjay> i want to find an image possibly embedded in a video and only clue i have is it's a scene change
[20:49:57 CEST] <johnjay> which generally is a key frame
[20:50:16 CEST] <JEEB> not sure if ffmpeg.c has the selection based on frame type (only process X), but you definitely can do that with a video filter
[20:50:17 CEST] <furq> -skip_frame nokey -i foo.avi out%03d.png
[20:50:25 CEST] <JEEB> ok, there we go
[20:50:27 CEST] <johnjay> ah thanks furq
[20:50:31 CEST] <furq> or yeah the select filter has frame type selection
[20:50:32 CEST] <johnjay> really i want any scene change but
[20:50:39 CEST] <johnjay> i assume this is good enough
[20:50:40 CEST] <furq> i think it might also have scenecut detection
[20:50:46 CEST] <JEEB> it also has that, yes
[20:50:50 CEST] <furq> https://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect
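Both approaches sketched out; the 0.4 scene threshold is an arbitrary starting point to tune, not a value from the channel:

```sh
# Keyframes only: the decoder skips everything that isn't a key frame.
ffmpeg -skip_frame nokey -i foo.avi -vsync vfr keyframe-%03d.png

# Scene-change detection via the select filter's scene score (0..1).
ffmpeg -i foo.avi -vf "select='gt(scene,0.4)'" -vsync vfr scene-%03d.png
```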
[20:51:09 CEST] <JEEB> if someone has the time they should compare if scxvid (yes lol) or scenecut has better detection
[20:51:33 CEST] <johnjay> scxvid...?
[20:51:37 CEST] <johnjay> lol
[20:51:41 CEST] <JEEB> scxvid is the classic tool which outputs frame times so that subtitle timers can get scene cut times
[20:52:46 CEST] <JEEB> johnjay: it's still used bv people doing subtitles because its algorithm is better than x264's for scenecut purposes :)
[20:52:59 CEST] <JEEB> basically it's xvid's scenecut algorithm ripped into a separate app
[20:53:13 CEST] <JEEB> you run some raw YCbCr video through it and it outputs a text file
[20:53:35 CEST] <johnjay> i see
[21:36:24 CEST] <der_richter> from my tests the 'scene change detection' was better with scxvid
[21:37:00 CEST] <der_richter> or rather it detected more scene changes, might have had a bit more false positives though
[21:37:07 CEST] <der_richter> it has been a while since I tested it
[23:16:21 CEST] <rooth> A bit of a dumb question. I've got an mp4-file with 1:audio stream and 1:video stream. The audio starts off fine but gets out of sync the longer the video runs, e.g. good in the start but slowly the audio lags behind. Is there any smart way I can get the video and audio synced? ...
[23:16:26 CEST] <rooth> ... Should I use the -shortest option or something else?
[00:00:00 CEST] --- Sun Sep  1 2019

