[Ffmpeg-devel-irc] ffmpeg.log.20180826

burek burek021 at gmail.com
Mon Aug 27 03:05:02 EEST 2018


[06:13:53 CEST] <craftxbox> how do i get android_camera functionality?
[10:35:49 CEST] <paul_uk> hi all.  I'm hoping someone can help.  I'm trying to simulate a distributed encoding setup, where a video file gets uploaded and split into N segments.  Then each worker gets a segment, transcodes it into various formats and then sends it elsewhere to be put back together.
[10:37:46 CEST] <paul_uk> The problem I am having is that I can split, transcode and stitch the video back together.  But whenever I play back the file and reach a join, I notice a slight millisecond delay, like the video jumps from one segment to the next.  So if I have a 60 minute video with 10 segments, there is a jump at every new segment.
[10:37:57 CEST] <paul_uk> Here is my ffmpeg cli:  https://pastebin.com/raw/vnZ9xTk2
[10:38:15 CEST] <paul_uk> If anyone can let me know what I'm doing wrong, I'd appreciate it.  I'm also very new to video encoding as well.
[10:38:46 CEST] <_cfr> paul_uk: try -copyts
[10:39:44 CEST] <paul_uk> _cfr: thanks for taking a look.  where do I apply -copyts ?
[10:39:58 CEST] <paul_uk> in the initial split?
[11:19:06 CEST] <paul_uk> So I did ffmpeg -i test.mp4 -vcodec mpeg4 -f segment -segment_time 600 -c copy -map 0 -copyts test_%03d.mp4 when doing the split.  I still hear that slight pause when viewing the video.  I'm using ffmpeg 4.0.2.  When I split the file I see lots of "Non-monotonous DTS in output stream 0:0; previous: 7303168, current: 7303168; changing to 7303169. This may result in incorrect timestamps in the output file."  I have done a lot of searching on Google and all the answers say that it's fixed.  Is this the cause of the issue?
[11:51:25 CEST] <th3_v0ice> paul_uk: While joining try to set -reset_timestamps 1 after -i and join the video. Maybe it will resolve the issue.
[11:52:04 CEST] <paul_uk> th3_v0ice: thanks I will try that
[11:52:20 CEST] <furq> are you sure you want to use mpeg4
[11:55:15 CEST] <paul_uk> I am brand new to all this.  I'm using a mp4 source file.  So I don't know what's best to use :)
[11:55:43 CEST] <furq> just omit -vcodec entirely and it'll use the default
[11:55:50 CEST] <furq> which is normally x264, which is much better
[11:56:00 CEST] <paul_uk> ok thanks
[11:56:11 CEST] <furq> idk if that'll fix the issue but if nothing else the video will look much better
[13:31:37 CEST] <paul_uk> So I have gotten to the bottom of the slight delay in the sound.  If I use -codec:a copy then I have no problems.  I don't even need to use -fflags +genpts -reset_timestamps 1 when rejoining the segments.  However, once I use libfdk_aac or aac, I get that noticeable delay.  Any ideas on how to resolve this?  Thanks
[13:32:40 CEST] <paul_uk> the command with no issues is:  ffmpeg -i test_000.mp4 -codec:v libx264 -codec:a copy -threads 6 -f mp4 changed_0.mp4
[16:31:51 CEST] <sircmpwn> if I'm using -f concat and -vf drawtext[...] can I draw the filename concat is currently processing with drawtext?
[16:32:45 CEST] <JEEB> the filter most certainly has no idea of the input lavf stuff
[16:32:49 CEST] <DHE> sounds like you might want to do something with the concat filter instead
[16:33:18 CEST] <DHE> a series of "movie=...,drawtext=...[video1]; ... ; [video1][video2]... concat=... [output]" filters
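
A rough sketch of what DHE describes, assuming two clips and ignoring audio: the movie source filter reads each file, drawtext stamps its name on it, and the concat filter joins the results.  Filenames, positions and the fontcolor are illustrative, and drawtext may need an explicit fontfile= on builds without fontconfig:

    ffmpeg -f lavfi -i "movie=clip1.mp4,drawtext=text='clip1.mp4':x=10:y=10:fontcolor=white[v0]; \
                        movie=clip2.mp4,drawtext=text='clip2.mp4':x=10:y=10:fontcolor=white[v1]; \
                        [v0][v1]concat=n=2:v=1:a=0" -c:v libx264 joined.mp4
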
[16:33:26 CEST] <sircmpwn> hmm
[16:33:35 CEST] <sircmpwn> that's an interesting thought
[16:33:41 CEST] <sircmpwn> thanks
[16:33:43 CEST] <DHE> it's gonna be a bit more complicated than that but that'll get the ball rolling
[16:34:03 CEST] <DHE> unless you want to build a timeline and have drawtext change texts at exactly the cut points
[16:34:32 CEST] <sircmpwn> I think what I want to do is live with not having the filename shown on the video
[16:36:40 CEST] <DHE> that is super-effective as well... :/
[16:42:16 CEST] <paul_uk> This seriously bites.  No matter what I try, when I split a file into segments, change the codec and put it back together again, I get a slight pause in the sound where the join is.  I read on SO that -reset_timestamps 1 only works with .ts, so I changed to that.  Made no difference.  This is a known problem, because on GitHub, when other devs attempt distributed encoding, they all complain about this issue too.  Any other ideas anyone?
[16:43:29 CEST] <paul_uk> Oh and the join process doesn't matter.  I tried MP4Box as well and it had the same issue.  So it's down to the transcoding of the individual segments.
[16:43:30 CEST] <DHE> have you tried options like -copyts ?
[16:43:56 CEST] <paul_uk> DHE: This is my segment command:  ffmpeg -i test.mp4 -f segment -segment_time 150 -c copy -map 0 -copyts -muxdelay 0 -reset_timestamps 1 test_%03d.ts
[16:44:18 CEST] <paul_uk> my transcode command is: ffmpeg -i "$i" -codec:v libx264 -codec:a aac -threads 6 "t_${name}.mp4"
[16:44:56 CEST] <DHE> and copyts on the transcode command as well?
[16:45:13 CEST] <paul_uk> no.  let me give that a try now
[16:45:15 CEST] <DHE> also keep using .ts files?
[16:45:36 CEST] <DHE> you can probably just "cat *.ts | ffmpeg -i - -c copy -movflags faststart output.mp4" to merge it
[16:45:54 CEST] <paul_uk> My intention is to use the files for a VOD service.  What's best practice?  Combine to mp4 or keep as ts?
[16:46:56 CEST] <DHE> depends on the player. mp4 has more universal support but you'll need the -movflags faststart parameter to make it streaming-friendly
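
For reference, that remux step might look like this sketch (input and output names are illustrative; -movflags +faststart moves the index, the moov atom, to the front of the file so playback can start before the download finishes):

    ffmpeg -i merged.ts -c copy -movflags +faststart output.mp4
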
[16:47:40 CEST] <paul_uk> understood thanks
[16:47:57 CEST] <DHE> I'm making this up as I go based on what I know, so... take that for what it is...
[16:48:56 CEST] <paul_uk> it's ok.  it's really my first foray into ffmpeg.  I really just want to get a working prototype of the process, and then when that's done, build out the infrastructure to handle encoding at scale.  which ironically is the easy part for me.
[16:49:46 CEST] <DHE> you've really gone into the deep end for a first project...
[16:58:39 CEST] <paul_uk> DHE: unfortunately i still have gaps.  Here's the CLI that I'm using.  https://pastebin.com/raw/sXPaX4Lu    The issue is that when I use -codec:a copy then I have no issues at all.  So why the issue when I define an audio codec?  I don't understand.
[17:02:04 CEST] <DHE> two thoughts occur. the first is that reset_timestamps doesn't sound like a good idea here. the second is that AAC encodes in fixed increments of 1024 samples so there may be an issue with cutting on any non-1024 multiple depending on the properties of the source
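
To put numbers on that second thought: an AAC frame is 1024 samples, so at a 44100 Hz sample rate one frame covers 1024 / 44100 ≈ 23.2 ms, and a 150-second cut falls at 150 * 44100 / 1024 ≈ 6459.96 frames, i.e. between frame boundaries, so each segment's audio gets padded independently by the encoder.
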
[17:02:52 CEST] <JEEB> also if the segments are encoded separately with audio then each one will have initial delay, which depending on how things get processed might not be inaudible for you :P
[17:03:07 CEST] <JEEB> the "encoder delay"
[17:03:20 CEST] <DHE> it might be worth encoding the audio entirely separately and merging it in later
[17:03:47 CEST] <JEEB> you're not usually speed-limited with audio :P
[17:03:58 CEST] <JEEB> which is why you'd want to split video
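
A sketch of that division of labour, reusing the log's filenames and assumed bitrates: encode the audio once, ship only video segments to the workers, then mux the pieces back together with stream copies:

    # audio, encoded once on one machine:
    ffmpeg -i test.mp4 -vn -c:a aac -b:a 128k audio.m4a
    # video-only segments for the workers:
    ffmpeg -i test.mp4 -an -c:v copy -f segment -segment_time 150 seg_%03d.ts
    # ... workers transcode the segments, results get concatenated into video.mp4 ...
    # final mux, no re-encoding:
    ffmpeg -i video.mp4 -i audio.m4a -map 0:v -map 1:a -c copy final.mp4
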
[17:04:07 CEST] <DHE> as an aside, I'm curious about the specifics of your VOD platform and your transcoder capacity. if you're doing an initial conversion of 10,000 items but you don't have well over 10,000 machines doing the work, is this really helping?
[17:04:28 CEST] <DHE> kind of a "premature optimization" thing
[17:06:41 CEST] Action: DHE is actually preparing for something similar so this is on my mind.
[17:07:29 CEST] <paul_uk> Trust me, I'll be at the place where a million items are present and 10k machines are doing the work.  I don't have issues getting users for the platform.
[17:08:18 CEST] <paul_uk> My reach when it comes to marketing is such that I could have an email land in 5m inboxes tomorrow if I chose to, and no, that's not spamming.  All done via influencers :)
[17:10:09 CEST] <paul_uk> But back to the original issue.  I assume I'll have to split the video and audio into different files and then split each into their own segments.  Transcode each distinct group and then rejoin each type and finally combine the two files again into one?
[17:10:20 CEST] <DHE> but that's still 100 jobs per machine, not 100 machines per job...
[17:11:29 CEST] <paul_uk> Ultimately, it's really about getting the process down.  Once I'm happy with it, then I tackle any challenges that come up.  I know full well, you can be prepared as possible and then a wrench comes in that you didn't prepare for.
[17:48:18 CEST] <Ke> in SwsFunc ff_yuv2rgb_get_func_ptr(SwsContext *c) in libswscale there seem to be special cases for accelerating x86 and ppc, though the aarch64 dir seems to have some code implementation
[17:48:53 CEST] <Ke> it seems to me I could just copy the x86 line and replace the function names with aarch64 function names
[17:53:33 CEST] <Ke> I guess I'll just compile and see what happens
[18:34:56 CEST] <JEEB> Ke: if there's any aarch64 stuff there the function pointers should be set there during runtime
[18:35:11 CEST] <JEEB> that's how the SIMD optimizations work in FFmpeg in general
[18:35:28 CEST] <JEEB> also that's an internal function that should be internal to that library as far as I can tell
[18:35:41 CEST] <Ke> yes it is
[18:36:25 CEST] <JEEB> yeh, there's ff_sws_init_swscale_aarch64
[18:36:35 CEST] <JEEB> which checks if the thing has NEON
[18:36:48 CEST] <JEEB> and if it has, and depending on some parameters it sets the function pointer
[18:37:17 CEST] <JEEB> and then swscale.c in libswscale/ calls that aarch64 func
[18:37:21 CEST] <JEEB> probably under #ifdef AARCH64
[18:37:22 CEST] <JEEB> or something
[18:37:37 CEST] <JEEB> yes, ff_getSwsFunc
[18:37:49 CEST] <JEEB> if (ARCH_AARCH64)
[18:37:58 CEST] <JEEB> so I don't think you need to modify anything to get that stuff used?
[18:40:41 CEST] <JEEB> or did I misunderstand something? :D
[18:48:35 CEST] <Ke> hmm, apparently I missed the function name, though color space transform stuff is at least partially there in aarch64 as well
[18:49:44 CEST] <JEEB> I think all the function pointers should be set during init or so, so if there's aarch64 optimizations they should already be in use
[18:56:23 CEST] <Ke> I guess the color conversion is somehow integrated into scaling?
[18:56:45 CEST] <Ke> otherwise this naming scheme is very confusing
[19:03:04 CEST] <JEEB> yea, swscale does both with sws_scale
[19:04:36 CEST] <Ke> in this case ff_sws_init_swscale_aarch64 is triggered for aarch64 code
[19:04:57 CEST] <Ke> which does not have the aarch64 code
[19:14:27 CEST] <paul_uk> I think I have finally fixed my issue.  I need to do a bit more testing because I have loads of tabs open with ffmpeg commands everywhere lol.  But I now have a joined sample where there are no gaps.
[19:14:44 CEST] <paul_uk> From an SO answer:  DCT-based audio codecs like MP3, AAC rely on neighbouring audio frames for decoding. At the start of the stream, there's a priming frame which serves that purpose. It has a negative timestamp, so during concat, its TS clashes with the final packets of the preceding file and it gets dropped by concat. PCM is self-contained for decoding, so doesn't suffer from this.
[19:15:09 CEST] <paul_uk> goes a bit over my head, but makes sense.  When I use flac for the audio codec, it works just fine.
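
Following that explanation, one workaround sketch: keep the audio lossless through the per-segment encodes (FLAC needs a container that accepts it, e.g. Matroska) and encode to AAC exactly once at the join, so only one set of priming samples ever exists.  list.txt is the concat demuxer's usual one-file-per-line list; all names and bitrates here are illustrative:

    # per segment: encode video, keep audio lossless
    ffmpeg -i seg_000.ts -c:v libx264 -c:a flac t_000.mkv
    # join, encoding the audio to AAC a single time:
    ffmpeg -f concat -safe 0 -i list.txt -c:v copy -c:a aac -b:a 128k joined.mp4
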
[19:15:27 CEST] <JEEB> paul_uk: I'm pretty sure I mentioned priming samples/encoder delay :P
[19:15:40 CEST] <JEEB> FLAC also has it, but I bet the decoder just hard-codes what libflac does
[19:15:47 CEST] <JEEB> or I would bet FLAC also has it
[19:15:53 CEST] <JEEB> if it doesn't, sure
[19:16:43 CEST] <paul_uk> Yes.  But I'm day zero here and this is my first time doing anything video or audio related.  Have a lot to learn.  So I'm feeling around as I experiment.  Still more than appreciative for the advice everyone has given so far.
[19:58:42 CEST] <DHE> paul_uk: distributing the video encoding but doing the audio encoding at once on the spot is probably still worth it overall. my PC can do 20x realtime encoding to AAC per CPU core
[20:27:23 CEST] <qxt> I am streaming video and udp works just fine, i.e. ffmpeg -i state.mp4 -f mpegts "udp://127.0.0.1:1234", but when I try tcp, like ffmpeg -i state.mp4 -f mpegts "tcp://127.0.0.1:1234", I get a "Connection refused"
[20:28:01 CEST] <qxt> any clue what that is about. I am using Debian 9 GNU/Linux with ffmpeg from the repo
[20:28:17 CEST] <qxt> Debian stable that is
[20:29:24 CEST] <qxt> Everything is local and there are no iptables/firewalls in the way
[20:30:10 CEST] <JEEB> pretty sure it doesn't listen but attempts to push to that :P
[20:30:35 CEST] <JEEB> udp is kind of special because it has the concept of just pushing stuff into the ether
[20:32:07 CEST] <qxt> JEEB, yeah I am listening with ffplay on tcp://127.0.0.1:1234
[20:34:15 CEST] <qxt> JEEB, udp works fine. Have even listened on 127.0.0.1:1234 transcoded and dumped the output on port 60006
[20:35:14 CEST] <qxt> what I am wondering is if there was something that should have been compiled into ffmpeg that is missing....
[20:36:13 CEST] <JEEB> nope, you're just either doing something else than you think you're doing or there's a boog somewhere
[20:36:36 CEST] <JEEB> I would bet on the first part but you'll have to figure it out yourself. I haven't poked at the tcp protocol at all in lavf :P
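
What JEEB is describing: ffmpeg's tcp protocol connects outward by default, so if nothing is already listening on 127.0.0.1:1234 the OS refuses the connection.  The protocol has a listen option; one way to wire it up (the listening side must be started first, and -re, which paces the input at realtime, is optional):

    # receiver listens:
    ffplay "tcp://127.0.0.1:1234?listen=1"
    # then the sender connects:
    ffmpeg -re -i state.mp4 -f mpegts "tcp://127.0.0.1:1234"
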
[20:39:46 CEST] <qxt> What would be the most universal way of streaming audio and video that any browser will just work with? x264 and aac?
[20:41:03 CEST] <qxt> When I say work with I mean as in html5 and nothing else installed.
[20:44:44 CEST] <qxt> Could somebody with a recent version of ffmpeg try ffmpeg -i someVideo.mp4 -f mpegts "tcp://127.0.0.1:1234" and see if it spontaneously crashes like the version in Debian stable?
[21:00:53 CEST] <paul_uk> qxt: I'm finished for today, but I'm running Ubuntu and I compiled 4.0.2.  I can give it a try tomorrow if you like.
[21:02:04 CEST] <qxt> paul_uk, thx
[21:04:11 CEST] <paul_uk> I want to do the same as you: stream video to any browser.  but I thought HLS was the best way.  I'm trying to emulate wistia in this respect:  https://wistia.com/support/getting-started/export-settings
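
Since HLS came up: a minimal VOD-style HLS sketch using the codecs discussed above (segment length, quality settings and filenames are arbitrary):

    ffmpeg -i input.mp4 -c:v libx264 -crf 23 -c:a aac -b:a 128k \
           -f hls -hls_time 6 -hls_playlist_type vod out.m3u8
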
[21:05:14 CEST] <qxt> JEEB, how many years have you been hanging out here? IIRC you helped me out with something back in 2010-ish with some guy called pastryeater
[21:06:43 CEST] <JEEB> probably ever since circa some time after 2008
[21:06:51 CEST] <JEEB> since I first entered around #x264 in 2008
[21:06:55 CEST] <qxt> dang...
[21:07:10 CEST] <qxt> that guy pastyeater still hang out here?
[21:07:31 CEST] <JEEB> no idea
[23:21:22 CEST] <qxt> is "-strict experimental" still needed? This is from 2012:  -acodec aac -strict experimental -ar 44100 -ac 2 -b:a 96k
[23:21:43 CEST] <Cracki> aac isn't experimental anymore
[23:21:59 CEST] <qxt> thx
[23:28:21 CEST] <qxt> hacked this together. Seems to work ok. Anybody see anything wrong with it? Going to use this to transcode and stream video.
[23:28:23 CEST] <qxt> ffmpeg -i state.mp4 -f rtsp -rtsp_transport tcp -c:a aac -ar 44100 -ac 2 -b:a 96k  -c:v libx264 -crf 23 -maxrate 1M -bufsize 2M  rtsp://localhost:8888/live.sdp
[23:47:02 CEST] <analogical> when a movie on a Blu-ray is spread out across several .m2ts files how do I create one single .mkv file from all those files?
[23:47:34 CEST] <JEEB> either use libbluray if your FFmpeg has that, or concatenate the m2ts files one after another with cat or so
[23:47:37 CEST] <Cracki> possibly "concat" input protocol
[23:47:59 CEST] <Cracki> cat is a waste of space if ffmpeg can ingest the files concatenated
[23:47:59 CEST] <JEEB> libbluray can basically read the playlists that blu-rays have
[23:48:16 CEST] <JEEB> Cracki: cat as in `cat 1 2 3 | ffmpeg -i input`
[23:48:21 CEST] <JEEB> not outputting into a file first
[23:48:37 CEST] <JEEB> argh, not -i input , -i -
[23:48:43 CEST] <JEEB> since "-" is stdin or stdout
[23:49:13 CEST] <furq> analogical: mkvmerge can read mpls playlists
[23:49:25 CEST] <Cracki> I had bad experiences with piping stuff around. ffmpeg sometimes needs to seek and stdin doesn't do that
[23:49:36 CEST] <Cracki> look at the "concat" input protocol
[23:49:36 CEST] <JEEB> with mpeg-ts you shouldn't have that
[23:49:50 CEST] <Cracki> https://trac.ffmpeg.org/wiki/Concatenate
[23:49:55 CEST] <JEEB> but to be honest the libbluray input protocol would be my recommendation
[23:49:57 CEST] <JEEB> or what furq noted
[23:50:02 CEST] <Cracki> ffmpeg -i "concat:input1.ts|input2.ts|input3.ts" -c copy output.ts
[23:50:05 CEST] <JEEB> because both can read the playlists and provide the input
[23:50:13 CEST] <JEEB> also the playlist files contain the language info etc
[23:50:15 CEST] <furq> if you're just remuxing then mkvmerge is your best bet because it'll keep chapters etc
[23:50:19 CEST] <Cracki> ^
[23:50:58 CEST] <JEEB> yea, I think libbluray input protocol in FFmpeg does the same, so it's up to what one prefers
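
For reference, the libbluray route might look like this sketch (it needs an FFmpeg built with --enable-libbluray; the playlist number has to come from the disc, and 0 here is just a placeholder, as is the mount point):

    ffmpeg -playlist 0 -i bluray:/mnt/bdrom -map 0 -c copy movie.mkv
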
[23:51:37 CEST] <furq> neat
[00:00:00 CEST] --- Mon Aug 27 2018

