[Ffmpeg-devel-irc] ffmpeg.log.20140726

burek burek021 at gmail.com
Sun Jul 27 02:05:01 CEST 2014


[00:02] <zybi1> Unknown encoder 'libfdk_aac' ??
[00:02] <c_14> You don't have libfdk_aac compiled in.
[00:11] <zybi1> how can i re-encode from flac to wav
[00:11] <c_14> ffmpeg -i flac wav
[00:12] <c_14> ffmpeg -i blah.flac blah.wav
[00:12] <zybi1> i mean within video reencoding sorry
[00:13] <zybi1> ffmpeg -i INPUT.mkv -profile:v baseline -level 3.0 -c:v libx264 -preset slow -crf 18 -c:a  output.mp4
[00:13] <c_14> wav is a format, not a codec
[00:14] <llogan> i doubt you can put most? pcm ("wav") in mp4.
[00:19] <zybi1> so what would be the command line then?
[00:20] <zybi1> I rebuilt ffmpeg and now the man-page is all messed up
[00:24] <llogan> zybi1: what audio format do you want to use?
[00:24] <llogan> or do you want to just re-mux it?
[00:25] <zybi1> actually i wanna make a mac/apple compliant video file not too large
[00:25] <zybi1> from a mkv 70mbit lossless with flac audio
[00:38] <c_14> You'll probably want to use x264 and aac (if you're fine with lossy audio). If you know how big "not too big" is you can use 2-pass encoding.
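For reference, a minimal two-pass sketch for a size target (the 2000k video bitrate here is only a placeholder; derive it from target size divided by duration):
    ffmpeg -y -i INPUT.mkv -c:v libx264 -b:v 2000k -pass 1 -an -f null /dev/null
    ffmpeg -i INPUT.mkv -c:v libx264 -b:v 2000k -pass 2 -c:a aac -strict experimental -b:a 192k output.mp4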
[00:40] <zybi1> but i got this ac-3 library problem ffmpeg being not built with it
[00:41] <c_14> ac3 or aac, there's a difference
[00:42] <llogan> zybi1: which devices do you want to support, exactly?
[00:42] <zybi1> any kind of mac osx system
[00:42] <zybi1> or the most possible
[00:42] <llogan> also, when you encounter an issue you should always include a link to pastebin showing your command and complete console output
[00:45] <zybi1> llogan: http://www.pasteall.org/53077
[00:55] <llogan> zybi1: your build does not support the encoder libfdk_aac. use "-c:a aac -strict experimental" instead.
[00:56] <llogan> also add "-pix_fmt yuv420p" as an output option since Apple stuff needs 4:2:0
[01:15] <zybi1> thanks llogan
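Putting llogan's suggestions together with zybi1's earlier command gives roughly the following (the 192k audio bitrate is an assumption, not something stated above):
    ffmpeg -i INPUT.mkv -c:v libx264 -preset slow -crf 18 -profile:v baseline -level 3.0 -pix_fmt yuv420p -c:a aac -strict experimental -b:a 192k output.mp4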
[01:33] <cbrugh> hello, we have an issue with running the following command... on a freebsd 8 machine this command runs without problems, on freebsd 9.1 this command generates the errors below.  We are using ffmpeg 1.2.7 on both servers with libx264 compiled in exactly the same.
[01:33] <cbrugh> ffmpeg -i 'in.mp4' -c:v libx264 'out.mp4'
[01:34] <cbrugh> [h264 @ 0x803c9d120] Cannot use next picture in error concealment
[01:34] <cbrugh> [h264 @ 0x803c9d120] concealing 3454 DC, 3454 AC, 3454 MV errors in P frame
[01:34] <cbrugh> get those errors over and over on the fbsd 9.1 machine
[01:35] <relaxed> 1.2.7 is fairly old. which version is now in /usr/ports ?
[01:35] <cbrugh> relaxed: well on the fbsd 9.1 machine I did install the latest 2.3
[01:35] <cbrugh> same results
[01:36] <cbrugh> on the fbsd 8 machine this command runs just fine... I have the same compile and everything between the two
[01:36] <relaxed> you compiled it yourself?
[01:36] <cbrugh> yeah
[01:37] <relaxed> how large is in.mp4?
[01:37] <cbrugh> 31MB
[01:38] <relaxed> can you put it up somewhere?
[01:38] <cbrugh> sure
[01:39] <cbrugh> http://www.drivetrainspecialists.com/in.mp4
[01:39] <relaxed> pastebin the ffmpeg command and all console output too
[01:40] <cbrugh> pastebin console output: http://pastebin.com/5pBsY4GQ
[01:40] <cbrugh> you can download the mp4 with the link above
[01:42] <relaxed> So you're saying ffmpeg exits with a nonzero exit status?
[01:43] <relaxed> I'm running the same command and it's dropping a shit ton of frames.
[01:45] <relaxed> but its exit status is zero
[01:45] <relaxed> cbrugh: ^^
[01:49] <trn> If anyone wouldn't mind looking at my info and question, it's on pastebin: http://goo.gl/m3p9Gs ... need guidance on DTS/PTS stuff.
[01:49] <trn> It's the one part of the application prototype that isn't working right.  I've already gone through and successfully converted most of it to C, actually.
[01:52] <trn> BTW, I give ffmpeg props, the code is actually pretty decent where it matters and the libraries are great.
[01:53] <trn> I actually tried to use libav stuff and had enough random issues over three days to never want to touch it again.
[01:55] <relaxed> It would probably help if you could isolate your problem to something smaller than a 1000 line pastebin post.
[01:55] <trn> relaxed: The first 780 lines are configuration and such beyond the first block of text.
[01:56] <relaxed> whittle it down to something small and repeatable
[01:57] <trn> relaxed: It's just a small part of a very large clustered application that runs on several clusters of thousands of nodes and can have 5000 or more ffmpeg library functions active at a time, and we produce 300MB of logs per hour.
[01:58] <cbrugh> relaxed: let me run it all the way thru
[01:58] <trn> relaxed: It really is the smallest I can do, it's more a lack of my knowledge of what to do; it isn't a problem per se. :)
[01:59] <llogan> i don't have any answers, but line 785 looks bad. (using cpu capabilities: none!)
[01:59] <trn> relaxed: Lines 728 through 1020 are the absolute repeatable issue.
[01:59] <relaxed> cbrugh: before you do that, I recommend you install the latest version through /usr/ports and have it use gcc-4.8 as the compiler.
[02:00] <trn> llogan: Yeah, that is something to do with the virtualization but it isn't a problem, just a performance issue.
[02:00] <cbrugh> relaxed: ok
[02:01] <trn> llogan: I just need to know how to have the reading processes that take streams as input not drop frames up to the point where the last decoding stopped.
[02:03] <trn> [videosource] -> [processing/splitting] -> [decode and normalize to 2 stream raw yuv4 and 2-ch PCM] -> [split again] -> cluster nodes receiving here ...
[02:03] <trn> The nodes then encode to attached clients' preferred formats, and encoders are initialized and destroyed based on the consumers attached.
[02:04] <trn> The problem is when videosource is swapped to a new stream without closing the chain.
[02:04] <trn> The raw/PCM muxed as NUT has a timestamp which is always increasing, let's say the original stream stops at frame 2000.
[02:05] <trn> The encoders receiving mux'd NUT data will drop all frames until frame 2001.
[02:05] <trn> So I really need to rewrite the timestamps in the libav* receiving end.
[02:07] <trn> Like if the new source PTS starts over, then rewrite as current PTS = previous input PTS+current PTS.
[02:08] <trn> So it'll see frame 4001 and immediately begin decoding again, instead of dropping 2000 frames first since they are read at framerate.
[02:08] <trn> Am I making any sense?
[02:08] <llogan> i don't know how to do that.
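At the packet level, the rewrite trn is describing could be sketched roughly like this against the libav* API (last_pts and pts_offset are hypothetical bookkeeping variables of type int64_t, not library names):
    /* after av_read_frame(fmt_ctx, &pkt): shift restarted timestamps forward
       so the receiver never sees a PTS below what it already decoded */
    if (pkt.pts != AV_NOPTS_VALUE && pkt.pts + pts_offset < last_pts)
        pts_offset = last_pts + 1;  /* assumes the new source restarted near 0 */
    if (pkt.pts != AV_NOPTS_VALUE) pkt.pts += pts_offset;
    if (pkt.dts != AV_NOPTS_VALUE) pkt.dts += pts_offset;
    last_pts = FFMAX(last_pts, pkt.pts);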
[02:09] <trn> I also have some technical questions on the nut container and I'm hoping someone knows rather than having to fully understand the sources :)
[02:09] <trn> Because I'm using the ffmpeg-included nut but I also have mplayer/ffmpeg libnut available, and was wondering if one implementation or another is preferred or more robust or if they are equivalent.
[02:09] <relaxed> trn: It sounds like you need https://www.ffmpeg.org/consulting.html
[02:10] <trn> relaxed: You are probably right, heh. :)
[02:11] <trn> Probably more than the standard end-user question, I know.
[02:11] <relaxed> It never hurts to try :)
[02:13] <trn> relaxed: full logs from the stream processing part of the application can be 300MB or more per hour of live input across all nodes too.
[02:14] <trn> So it's not practical to post everything, but I just hope I can be believed when I say this is the subset where I don't know how to adjust the timestamps as I would need to :)
[02:15] <trn> And I managed to turn it into two concurrent standard ffmpeg commands attached via named unix pipes with exact same behavior as the application.
[02:15] <trn> So it's not a ffmpeg issue, I just don't know how to do what I need to do :)
[02:16] <trn> And I don't want to tell the boss I've got it 98% working, now pay someone real money for the last 2%.
[02:20] <trn> I did have it fully working using a homebrew messaging API using segmented MP4 as the transport standard, and using some proprietary vendor-not-to-be-named libraries.
[02:21] <trn> It just ended up having very poor performance when it came to error handling, which would cause audio/video desync issues when running for long periods.
[02:22] <trn> The ffmpeg timestamp stuff is much better than the commercial offering, fyi.  The problem here is it is *too* conservative.
[02:22] <trn> And assumes output DTS simply can't be < encoded output DTS, except in this case it can be under certain circumstances.
[02:23] <trn> sorry input DTS can't be less than encoded output DTS.
[02:24] <trn> Also, a certain other vendor uses 32-bit integers in places that have no business using them, causing overflow issues after a few days.
[02:25] <trn> And random memory leaks that I bet they never noticed because most people use the encoder for 2-4 hours of input, not 200-400 hours.
[02:26] <trn> And things like tables that just keep growing for things like indexes that we never use and no public way to enable them.
[02:26] <trn> When I get this working I'm going to donate to ffmpeg for sure, for what it's worth.
[02:26] <trn> *disable them.
[02:26] <trn> anyway blah
[02:50] <msuth> Hi guys i have 1 question
[02:52] <msuth> i have a url where it has different bitrates, how do i download only the 1 i want
[02:52] <msuth> it has 1080P, 720P, 480P
[02:53] <msuth> how to select only the 720P to download, it is always downloading the 1080P
[02:53] <sacarasc> This isn't an ffmpeg question?
[02:54] <msuth> ya i am downloading m3u8 with ffmpeg
[02:54] <sacarasc> m3u8 files are plain text, remove the streams you don't want.
[02:54] <msuth> ffmpeg -i url -c copy output
[02:56] <msuth> i tried that but i couldn't load the m3u8 after saving it to my computer
[02:57] <msuth> so i was thinking when getting the m3u8 from the url can i insert something to download only 1 bitrate
[02:58] <sacarasc> Download the playlist, take the stream URL from it, use that directly?
[03:00] <trn> OK, I'm actually working with m3u8 and rewrote the whole ffmpeg code for that yesterday :)
[03:00] <trn> sacarasc: ffmpeg already enumerates them as different streams and they act normally.
[03:00] <trn> Like if you don't specify which stream, it falls back on the standard ffmpeg heuristics.
[03:00] <msuth> yea tru TRN
[03:01] <msuth> how can i specify stream
[03:01] <msuth> or program ID
[03:01] <trn> sacarasc: Use standard -map
[03:02] <msuth> can you please specify it for this
[03:02] <msuth>  Duration: 02:06:21.00, start: 0.700000, bitrate: 0 kb/s
[03:02] <msuth>  Program 0
[03:02] <msuth>    Metadata:
[03:02] <msuth>      variant_bitrate : 3560000
[03:02] <msuth>    Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
[03:02] <msuth>    Metadata:
[03:02] <msuth>      variant_bitrate : 3560000
[03:02] <msuth>    Stream #0:1: Audio: aac ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 51 kb/s
[03:02] <msuth>    Metadata:
[03:02] <msuth>      variant_bitrate : 3560000
[03:02] <msuth>  Program 1
[03:02] <msuth>    Metadata:
[03:02] <msuth>      variant_bitrate : 1640000
[03:02] <msuth>    Stream #0:2: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1280x720
[03:02] <trn> I don't know the cli, but av_find_best_stream is the function if you want to see the heuristics.
[03:02] <trn> hmmm lemme think for a sec
[03:03] <msuth> see it has like program 0 and 1
[03:03] <msuth> program 0 is 1980 and program 1 is 1270
[03:03] <msuth> 1920**
[03:03] <msuth> 1280**
[03:03] <trn> Not sure how to do it via cli, but I can check.
[03:04] <msuth> please if you have time
[03:04] <msuth> i'll wait here
[03:04] <trn> msuth: I actually use Trinity Broadcasting's headend streams like http://acaooyalahd2-lh.akamaihd.net/i/TBN01_delivery@186239/master.m3u8 for my testing.
[03:04] <msuth> i tried google all last nite, couldnt make it work
[03:04] <trn> msuth: Check out how many they provide :)
[03:05] <trn> msuth: I recently wrote a module that lets you specify multiple segment outputs as the same m3u8 playlist, and 1 input.
[03:05] <msuth> [http @ 0000000002c8f220] HTTP error 404 Not Found
[03:05] <msuth> http://acaooyalahd2-lh.akamaihd.net/i/TBN01_delivery@186239/master.m3u8 -c copy
[03:05] <msuth> nallun2014.mp4: Input/output error
[03:06] <msuth> i am getting error with that url u provided
[03:06] <trn> So instead of stream_high and stream_low m3u8's you will get stream.m3u8 which uses the correct hints for clients that support adaptive streaming.
[03:08] <trn> msuth: hmm... what happens if you just do ffmpeg -i and that URL?  I know it will take a while to sync but it does....
[03:10] <msuth> just getting error
[03:10] <msuth> but it has program ID up to 11
[03:11] <msuth> 320kb/s audio file
[03:13] <trn> I just did a ffmpeg -i url -c copy out.mkv and it works
[03:15] <msuth> it works but how to download just 1 of the files inside
[03:15] <msuth> ?
[03:15] <trn> Relevant was Stream mapping:
[03:15] <trn>   Stream #0:24 -> #0:0 (copy)
[03:15] <trn>   Stream #0:1 -> #0:1 (copy)
[03:16] <trn> soooo, lets see ...
[03:17] <msuth> ok i got the map option to work
[03:17] <msuth> thanks buddy
[03:17] <msuth> and 1 more question when downloading with the m3u8 i am getting 103fps
[03:17] <msuth> how to increase
[03:17] <msuth> i have an i7 4770
[03:17] <msuth> 32gb ram
[03:18] <trn> Example: ffmpeg -i http://acaooyalahd2-lh.akamaihd.net/i/TBN01_delivery@186239/master.m3u8 -map 0:18 -c:v copy
[03:18] <trn>  -map 0:32 -c:a copy test.mkv
[03:18] <trn> works here
[03:18] <sacarasc> Downloading usually bottlenecks at your connection speed.
[03:18] <trn> sorry for my paste issues there :)
[03:18] <trn> But yeah, the programs aren't exposed via separate enumeration in the ffmpeg cli.
[03:19] <trn> It's just sequential streams starting at 0 through the end.
[03:19] <msuth> ya
[03:19] <msuth> i did like this
[03:19] <trn> The C API is nicer honestly :)
[03:19] <msuth> ffmpeg -i "url" -map 0:4  -map 0:5 -c copy un2014.mkv
[03:19] <trn> But yeah, m3u8 handling in ffmpeg is fantastic compared to that other inferior project :)
[03:19] <msuth> but is slow downloading
[03:20] <msuth> i have 1gbps line dedicated
[03:20] <msuth> and is slow downloading
[03:20] <sacarasc> Could be rate limited at the other end.
[03:20] <trn> msuth: It might just be the pipe in between; it could also be ffmpeg.
[03:20] <trn> msuth: Is this a full m3u8 with all segments or a window-only live file?
[03:21] <msuth> huh i dont get it
[03:21] <trn> With the windowed file, that's normal, it'll grab as many segments as possible then refresh the m3u8 and when new segments are added it'll grab them.
[03:22] <trn> The only performance increase I've found possible is using HTTP keepalive to the server, so when you pull down the m3u8 you aren't having to open a new connection every time.
[03:22] <msuth> is there a multithreaded ffmpeg version
[03:22] <trn> I'm not sure if ffmpeg does this or not.  ffmpeg is threaded by default; if you add -threads 0 it will use as many as should be optimal.
[03:23] <msuth> oh k
[03:23] <msuth> kool
[03:24] <trn> When I first read m3u8 docs I wanted to hang myself.
[03:24] <trn> But honestly it's one of the better hacks to come out of Apple.
[03:24] <msuth> lol
[03:24] <msuth> have u tried adobe-hds
[03:26] <trn> I'm only familiar with streaming MP4 files and HLS myself, really.  And I only started even caring about this stuff last week. :)
[03:26] <trn> I do have tons of past experience with telephony stuff, so it's all vaguely related.  :)
[03:28] <msuth> thanks bro iam off now
[03:28] <trn> heh
[03:28] <trn> at least I'm good for something
[03:39] <trn> I even wrote a quick script that lets me use the VPN function of an iPhone to connect to my network, and I use mitmproxy for ssl unwrapping and transparent squid for HTTP...
[03:40] <trn> intercept HLS requests and any associated AES keys, and then I can get that into ffmpeg to save the streams to disk.
[03:41] <trn> So when I want to watch sports, I turn on the VPN, start the app (since every app that does streaming in the App Store is HLS), then just confirm it's recording to disk and watch it later.
[03:41] <trn> Or send it to Chromecast where the original apps don't support it yet. :)
[04:58] <trn> Lovely.  I thought I broke something on my server, it's in Dallas, TX.  Getting 8K/s at best.
[04:58] <trn> But no, ran a speedtest using another two different Dallas servers and I'm getting 8-14K/s.
[04:59] <trn> On my LTE or a non-Dallas server, 10+Mbps.
[05:00] <trn> Actually getting 108 kbps down, 382 kbps up.  Lovely.  That means I go to sleep now.
[05:08] <trn> yay 75% packet loss at mai-b1-link.telia.net
[05:10] <sythe> Hey
[05:10] <sythe> I have an MP4 with audio
[05:10] <sythe> And an MP3 with different audio
[05:10] <sythe> How can I combine them all into one MP4 with both audio channels?
[05:21] <rcombs> ffmpeg -i in.mp4 -i in.mp3 -map 0 -map 1 -codec copy out.mp4
[05:23] <sythe> I think I tried that
[05:24] <sythe> ffmpeg -i episode-test-4.mp4 -i episode3-final-voiceover.mp3 -map 0 -map 1 -codec copy episode2.mp4
[05:24] <sythe> I tried that
[05:24] <sythe> It didn't work
[05:24] <sythe> rcombs: ^
[05:24] <sythe> It only kept one audio stream
[05:24] <rcombs> pastebin the console output?
[05:24] <sythe> I mean, it produced an mp4
[05:24] <sythe> It just only had the original mp4's channel
[05:25] <rcombs> (keep in mind, "stream" and "channel" are different things; here, we want to add an audio *stream*)
[05:26] <sythe> IDK
[05:26] <sythe> I have two audio sources, then
[05:26] <sythe> One with a voiceover, one with background sound
[05:26] <sythe> The one with background sound is already part of the video file
[05:26] <rcombs> again, I need to see the console output
[05:27] <sythe> Ok, pastebinning
[05:27] <sythe> Once it finishes
[05:27] <sythe> Since I've rebooted since the first time
[05:28] <sythe> https://www.irccloud.com/pastebin/TCEsmUml
[05:28] <sythe> rcombs: ^
[05:29] <rcombs> sythe: that definitely shows the audio streams from both files being merged into the output. Did you mean that you wanted to mix the two streams into a single one?
[05:30] <sythe> rcombs: Whichever allows playback to play both streams at once, lol
[05:30] <sythe> When I open it in VLC, I only hear the original audio from the source video
[05:31] <rcombs> sythe: alright, then you want the "amix" filter
[05:31] <rcombs> sythe: you just want this for VLC playback, not for distribution or mobile device playback, yeah?
[05:31] <sythe> I want to upload it to Youtube
[05:31] <rcombs> ah, gotcha
[05:31] <sythe> :)
[05:32] <sythe> rcombs: So, what command?
[05:32] <rcombs> what audio codecs does your ffmpeg build support?
[05:32] <sythe> It says right there in the pastebin
[05:32] <sythe> Doesn't it?
[05:32] <sythe> :P
[05:32] <rcombs> ah, I forgot the configure header :D
[05:32] <rcombs> just a sec
[05:33] <rcombs> ffmpeg -i episode-test-4.mp4 -i episode3-final-voiceover.mp3 -map 0:0 -vcodec copy -filter_complex '[0:1][1:0]amix[out]' -map '[out]' -acodec libmp3lame -ab 320k episode2.mp4
[05:33] <rcombs> ^ the audio quality might be able to be improved a bit, but that should definitely work with YouTube, at least
[05:34] <sythe> Thanks :)
[05:35] <rcombs> what we're doing there is passing the two audio streams through the `amix` filter and encoding as MP3, but copying the input video untouched
[05:35] <sythe> :D Sweet
[05:35] <sythe> Hope it works
[05:36] <sythe> rcombs: Is there a way to have ffmpeg auto-truncate an mp3 until audio begins?
[05:36] <sythe> E.g., it would only copy once silence stopped?
[05:38] <sythe> rcombs: It worked! :D
[05:38] <rcombs> I don't think there's a way to do that builtin, but if you can figure out exactly when the silence ends, you could put `-ss <seconds to skip>` before the -i for the audio file
[05:38] <sythe> The second -i?
[05:38] <rcombs> yeah
[05:39] <sythe> Sweet
[05:39] <sythe> rcombs: Is there an opposite of that?
[05:39] <rcombs> and you could use `ffmpeg -i episode3-final-voiceover.mp3 -af silencedetect -acodec pcm_u8 -f null /dev/null` to figure out how long the silence is
[05:39] <rcombs> sythe: to delay the audio, you mean?
[05:39] <sythe> Yeah
[05:40] <rcombs> note that https://www.ffmpeg.org/ffmpeg-filters.html is an excellent resource on this
[05:40] <sythe> Basically, I start recording one audio source, then alt-tab to start recording the other
[05:40] <sythe> So it's always 2-3 seconds delayed
[05:40] <rcombs> OK, so take a look at our current `-filter_complex` arg
[05:40] <sythe> rcombs: Thanks, but...these are like the only two I need to do
[05:40] <sythe> :/
[05:40] <rcombs> '[0:1][1:0]amix[out]' <-- think you can identify what each of these 4 parts does?
[05:41] <rcombs> (don't worry, I'll get from here to audio delaying)
[05:41] <sythe> Well, the first two are media stream IDs, I think
[05:42] <rcombs> yup, good
[05:42] <rcombs> and then we have the filter name ("amix") and a name for our output ("out")
[05:42] <rcombs> we make sure our "out" stream ends up in the output file with `-map '[out]'`
[05:43] <rcombs> our inputs currently are [0:1] ("first file, second stream" [because these are zero-indexed]) and [1:0] ("second file, first stream")
[05:43] <rcombs> so, we want to apply an "adelay" filter to our voiceover stream, which is [1:0]
[05:45] <rcombs> that filter will look something like this: '[1:0]adelay=1500[delayed]'
[05:45] <sythe> 1500 is 1.5 seconds, right?
[05:45] <rcombs> here I'm using 1500ms (1.5 seconds) for the delay; you'd fill in whatever you wanted
[05:45] <rcombs> yup!
[05:45] <sythe> :)
[05:46] <rcombs> so, if we add that filter, then we have a stream called "delayed"
[05:46] <rcombs> and now we want to use that delayed stream as an input for the "amix" filter from earlier
[05:46] <rcombs> so together, it'd be '[1:0]adelay=1500[delayed];[0:1][delayed]amix[out]'
[05:46] <rcombs> make sense?
[05:47] <sythe> ffmpeg -i episode-test-4.mp4 -i episode3-final-voiceover.mp3 -map 0:0 -vcodec copy -filter_complex '[0:1][1:0]adelay=1500[delayed];[0:1][delayed]amix[out]' -map '[out]' -acodec libmp3lame -ab 320k episode2.mp4
[05:47] <sythe> ?
[05:48] <sythe> I think I put it in the wrong place
[05:48] <rcombs> looks good to me
[05:49] <sythe> Strange
[05:49] <rcombs> erm, wait
[05:49] <rcombs> not quite
[05:49] <sythe> Thought so
[05:49] <rcombs> you've just got an extra [0:1] in there
[05:49] <rcombs> adelay only takes one input, and we only want to delay one of the streams anyway
[05:49] <sythe> The first one?
[05:49] <rcombs> yeah
[05:50] <sythe> ffmpeg -i episode-test-4.mp4 -i episode3-final-voiceover.mp3 -map 0:0 -vcodec copy -filter_complex '[1:0]adelay=1500[delayed];[0:1][delayed]amix[out]' -map '[out]' -acodec libmp3lame -ab 320k episode2.mp4
[05:50] <sythe> ?
[05:50] <rcombs> we only want to use [0:1] as an input to amix, not to adelay
[05:50] <rcombs> yeah, that should be better
[05:50] <sythe> ahhh
[05:50] <sythe> Thanks
[05:50] <sythe> I shall test this
[05:51] <rcombs> congrats, you've learned a bit about ffmpeg filtering!
[05:52] <sythe> Very strange
[05:53] <sythe> I never thought I would
[05:54] <sythe> Let's see if a 3.8 second delay works...
[05:54] <trn> rcombs: Hrrm, you might have helped me as well!
[05:54] <sythe> FAQ!
[05:54] <sythe> lol
[05:56] <rcombs> trn: I try!
[05:57] <trn> rcombs: Well, once Telia is fixed, I'll try it, and then I've got to figure out how to translate a working command to libavfilter ...
[05:57] <rcombs> ah, heh
[05:58] <rcombs> check out http://www.ffmpeg.org/doxygen/trunk/group__lavfi.html#ga6c3c39e0861653c71a23f90d1397239d
[05:58] <trn> So far the libraries have been very easy to use!
[05:59] <trn> See, too easy.
[06:00] <trn> rcombs: http://goo.gl/m3p9Gs
[06:00] <trn> Last time I spam it here, promise.
[06:00] <sythe> Thanks again, rcombs :)
[06:01] <sythe> I can finally record and upload gameplay videos on Linux w/o pain
[06:01] <trn> But yeah, that's where I'm at now, replicated my application behavior with the ffmpeg tool.
[06:02] <trn> And that lavfi message re async seems much more ominous now, considering the nature of my problem!
[06:09] <rcombs> maybe try an actual concatenateable format for the intermediate, like .ts?
[06:09] <sythe> rcombs: For seconds to skip, it would be -ss 1000 for 1 second, eh?
[06:10] <rcombs> sythe: no, -ss is given in seconds, not milliseconds
[06:10] <sythe> Ok, thanks
[06:10] <rcombs> sythe: you can specify with greater precision by doing e.g. 1.024 or something (for 1024ms)
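So, as a sketch assuming silencedetect reported roughly 2.5 seconds of leading silence, the earlier mix command would become:
    ffmpeg -i episode-test-4.mp4 -ss 2.5 -i episode3-final-voiceover.mp3 -map 0:0 -vcodec copy -filter_complex '[0:1][1:0]amix[out]' -map '[out]' -acodec libmp3lame -ab 320k episode2.mp4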
[06:10] <trn> rcombs: NUT is perfect for that because of the low overhead, no tables that grow when you skip the index, and the fastest recovery from errors.
[06:12] <trn> And NUT is concatenateable :)  And it passed every raw torture test I threw at it, regarding error recovery in the least number of frames ...
[06:15] <rcombs> or, if you're using raw video, why not just use yuv4mpegpipe
[06:16] <trn> rcombs: Mainly because I'm transporting 2-ch PCM mux'd with the video right out of the appropriate ffmpeg decoder.
[06:17] <trn> Which I'm doing in an attempt to keep A/V sync issues at a minimum and avoid the possibility that a segment of a stream being moved around internally goes missing.
[06:18] <rcombs> have you checked if it works with MPEG-TS?
[06:18] <trn> I admit I was rather naive when I started, but the idea was to keep the two mux'd together for purposes of A/V sync ...
[06:19] <trn> rcombs: Not lately, no :)  I will give it a shot once telia fixes the network issues enough that I can send a stream here to my house in realtime :)
[06:19] <trn> But I actually see a solution here.
[06:20] <rcombs> the `-shortest` arg can probably help avoid issues with one stream ending before another in an input file
[06:21] <trn> I can have the decoding side of things send a message to its input provider (easy to do here actually) with the PTS value of the most recently received frame.
[06:21] <rcombs> also, ffserver may or may not be worth looking at, depending on exactly what you're trying to do
[06:22] <trn> That way when another input connects, rather than starting to dump data with a new PTS of 0, it can request what PTS to start at, and then using setpts becomes easy.
[06:23] <trn> A tiny delay for internal message passing isn't a big deal if the frame provider has to reconnect anyway, or I wouldn't imagine it would be.
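A rough CLI equivalent of that idea, with 17.48 standing in for the last pkt_pts_time reported for the previous source and the raw yuv/PCM-in-NUT pipeline described above assumed:
    ffmpeg -i pipe:0 -filter_complex "[0:v]setpts=PTS+17.48/TB[v];[0:a]asetpts=PTS+17.48/TB[a]" -map "[v]" -map "[a]" -c:v rawvideo -c:a pcm_s16le -f nut pipe:1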
[06:29] <sythe> rcombs: I'm watching my own gameplay video
[06:29] <sythe> And I'm entertained.
[06:29] <sythe> >.<
[06:29] <sythe> #FFMPEGFTW
[06:34] <trn> Hrrm. :)
[06:34] <trn> demuxer+ffmpeg -> ist_index:1 type:audio pkt_pts:776021 pkt_pts_time:17.5968 pkt_dts:776021 pkt_dts_time:17.5968 off:0 off_time:0
[06:34] <trn> encoder <- type:audio frame_pts:775168 frame_pts_time:17.5775 time_base:1/44100
[06:34] <trn> encoder -> type:audio pkt_pts:771072 pkt_pts_time:17.4846 pkt_dts:771072 pkt_dts_time:17.4846
[06:34] <trn> muxer <- type:audio pkt_pts:771072 pkt_pts_time:17.4846 pkt_dts:771072 pkt_dts_time:17.4846 size:253
[06:34] <trn> then I broke the stream.
[06:35] <trn> Then I connected a new input.
[06:36] <trn> encoder <- type:audio frame_pts:848896 frame_pts_time:19.2493 time_base:1/44100
[06:36] <trn> encoder -> type:audio pkt_pts:844800 pkt_pts_time:19.1565 pkt_dts:844800 pkt_dts_time:19.1565
[06:36] <trn> muxer <- type:audio pkt_pts:844800 pkt_pts_time:19.1565 pkt_dts:844800 pkt_dts_time:19.1565 size:295
[06:36] <trn> demuxer -> ist_index:1 type:audio next_dts:19285781 next_dts_time:19.2858 next_pts:19285781 next_pts_time:19.2858 pkt_pts:850503 pkt_pts_time:19.2858 pkt_dts:850503 pkt_dts_time:19.2858 off:0 off_time:0
[06:36] <trn> demuxer+ffmpeg -> ist_index:1 type:audio pkt_pts:850503 pkt_pts_time:19.2858 pkt_dts:850503 pkt_dts_time:19.2858 off:0 off_time:0
[06:36] <trn> etc.
[06:39] <rcombs> I haven't looked closely enough at all of that to know if it looks good
[06:41] <trn> ah ha! It is good.
[06:41] <rcombs> :D
[06:41] <trn> Because when I turn on debug and cause the error it's the automatic filters added by async :)
[06:41] <trn> [graph 1 aresample for input stream 0:1 @ 0x1088d20] [SWR @ 0x103c9e0] discarding 772679 audio samples
[06:41] <trn> *** dropping frame 411 from stream 0 at ts 100:17.41 bitrate=N/A
[06:41] <trn> *** dropping frame 411 from stream 0 at ts 2
[06:41] <trn> *** dropping frame 411 from stream 0 at ts 3
[06:46] <trn> now I'm thinking how to fix :)
[06:55] <trn> It might be fixed now, but I can't tell until they fix the interweb tubes. :)
[06:57] <trn> I'll need to write a script that does bad things to the streams for hours and then I'll know when I see how well they are sync'd up
[07:23] <trn> rcombs: BTW, can't do raw/PCM in MPEG2TS unless there is a way to force it.
[07:24] <trn> A sender:
[07:24] <trn>   Stream #0:24 -> #0:0 (h264 (native) -> rawvideo (native))
[07:24] <trn>   Stream #0:1 -> #0:1 (aac (native) -> pcm_s16le (native))
[07:24] <trn> Receiving demuxer:
[07:24] <trn> [mpegts @ 0x18cc2a0] Could not find codec parameters for stream 0 (Unknown: none ([6][0][0][0] / 0x0006)): unknown codec
[07:24] <trn>     Stream #0:0[0x100]: Unknown: none ([6][0][0][0] / 0x0006)
[07:24] <trn>     Stream #0:1[0x101]: Audio: aac ([6][0][0][0] / 0x0006), 12000 Hz, 4.0, fltp, 396 kb/s
[14:50] <tbj> Hi, I have a problem with ffserver - http://pastebin.com/byfmVyBC
[16:53] <kaotiko> hi
[17:18] <jusss> how to combine a srt file and a mp4 file?
[17:19] <jusss> let it be hardsub
[17:47] <tbj> kaotiko: Hi
[17:55] <sacarasc> jusss: I think you have to use -vf ass=blah.srt
[17:55] <ubitux> subtitles=blah.srt or ass=blah.ass
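For example, a typical hardsub command (assuming a build with libass; burning in subtitles always re-encodes the video):
    ffmpeg -i input.mp4 -vf subtitles=subs.srt -c:v libx264 -crf 18 -c:a copy output.mp4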
[18:02] <tbj> ubitux: Can you help me with enabling libx264, libx265, libfaac and libmp3lame? I tried these codecs, but I got some errors. Here is logfile: http://pastebin.com/byfmVyBC
[18:02] <ubitux> you don't have libfaac
[18:02] <ubitux> or at least the headers
[18:04] <tbj> ubitux: I have it, I installed it, but it doesn't work. And how can I add the headers?
[18:04] <ubitux> if you're on a debian like that's probably in a libfaac-dev or something
[18:05] <JEEB> tbj, rather use libfdk-aac than faac, both are not compatible with x264 and non-distributable, but fdk is much better
[18:05] <JEEB> uhh
[18:05] <JEEB> s/x264/GPL/
[18:05] <JEEB> :V
[18:05] <tbj> I have Ubuntu 14.04 (I think)
[18:06] <JEEB> https://github.com/mstorsjo/fdk-aac
[18:06] <tbj> JEEB: Hi, thank for answer. I will try it-
[18:06] <JEEB> this is fdk-aac :3
[18:10] <tbj> JEEB: I tried fdk-aac and libfdk-aac, but it doesn't work
[18:10] <JEEB> what doesn't work?
[18:11] <tbj> First I tried modify it in ffserver.conf
[18:12] <JEEB> oh, so you built it and linked it into your ffmpeg?
[18:12] <JEEB> ffmpeg -codecs |grep "fdk"
[18:12] <JEEB> that should show the avcodec internal name
[18:14] <tbj> Now I tried to clone and install it from github, but there is no Makefile there.
[18:15] <JEEB> it's an automake based project
[18:15] <JEEB> autoreconf -fiv
[18:15] <tbj> No result...
[18:15] <JEEB> creates them
[18:15] <JEEB> then you get a configure file and the main makefiles created
[18:17] <tbj> still no usable result in ffmpeg -codecs | grep "fdk"
[18:17] <JEEB> well yes, if you haven't built and installed and compiled ffmpeg with it :P
[18:17] <JEEB> just like you have to compile ffmpeg with faac if you want faac
[18:17] <JEEB> see the configure --help output of ffmpeg for the option to enable
[18:17] <JEEB> ./configure --help |grep "fdk" for ffmpeg's configure
[18:19] <tbj> of course... Please wait
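The relevant configure switches look roughly like this (note that --enable-nonfree is required for libfdk_aac and makes the resulting build non-redistributable):
    ./configure --enable-gpl --enable-nonfree --enable-libx264 --enable-libx265 --enable-libmp3lame --enable-libfdk-aac
    make && make install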
[18:25] <slowguy> http://ur1.ca/hu2u9  i am using this command  ..but ffmpeg says No such filter 'trim'
[18:25] <slowguy> is trim filter now available now?
[18:25] <slowguy> not*
[18:29] <slowguy> [AVFilterGraph @ 0x1496660] No such filter: 'trim'
[18:29] <slowguy> strange :(
[18:36] <tbj> Error: libfaac not found
[18:37] <slowguy> ubitux: i already did that
[18:37] <ubitux> > and the COMPLETE console output.
[18:37] <slowguy> okay
[18:38] <slowguy> just a sec
[18:39] <slowguy> http://ur1.ca/hu2xc
[18:39] <slowguy> ubitux: ^
[18:40] <ubitux> > ffmpeg version 1.2.6
[18:40] <ubitux> too old probably
[18:40] <ubitux> we are in 2.3 currently
[18:40] <slowguy> ubuntu does not provide ffmpeg ?  i had to add some launchpad ppa
[18:40] <slowguy> and then i downloaded it
[18:40] <ubitux> it doesn't
[18:41] <ubitux> you can grab a static build here or use the fork
[18:41] <ubitux> or build it yourself
[18:41] <slowguy> static build = direct download and run the binary?
[18:41] <ubitux> yes
[18:41] <slowguy> right let me try
[18:46] <slowguy> ubitux: got it thank you
[18:47] <slowguy> ubitux: another thing.. i wanted a very fast method to cut commercials out of the serial without re-encoding.. am i doing it right in your opinion? or is there a faster way?
[18:47] <ubitux> it's re-encoding with your method
[18:48] <slowguy> yeah i noticed that :(
[18:48] <ubitux> use -ss and -t to extract, and concat demuxer to concat
[18:48] <slowguy> how can i prevent re-encoding?
[18:48] <slowguy> u mean separate ffmpeg commands?
[18:50] <slowguy> you mean first create multiple parts and then concat them at the end?
[18:51] <slowguy> or is it possible using a single command?
[18:52] <slowguy> i was searching on the net but could not find how i can use multiple -ss -t in a single command
[19:01] <slowguy> ubitux: please give me a hint in the form of command..then i will find the rest myself..i will not bother you more for this simple thing
[19:02] <ubitux> ffmpeg -ss 123 -i input -t 45 -c copy -map 0 output0
[19:02] <ubitux> ffmpeg -ss 150 -i input -t 10 -c copy -map 0 output1
[19:02] <ubitux> etc
[19:02] <ubitux> then create a concat demuxer file with all the outputs
[19:02] <ubitux> and run the command
[19:02] <ubitux> look at the faq for concat
[19:02] <ubitux> and look for concat demuxer
[19:03] <slowguy> okay thank you so much
[19:03] <ubitux> http://ffmpeg.org/faq.html#Concatenating-using-the-concat-demuxer
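A sketch of the concat-demuxer step ubitux describes, with the list file named list.txt only as an example:
    $ cat list.txt
    file 'output0'
    file 'output1'
    $ ffmpeg -f concat -i list.txt -c copy joined.mp4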
[19:03] <tbj> Next question: How to protect stream using AES? I want to protect live stream.
[22:35] <MachinaeWolf> Are there any aac or faac/faad flags to enable aac support that I should be aware of?
[22:36] <c_14> What are you trying to do?
[22:37] <MachinaeWolf> Get moc to play my .m4a music files, it was compiled with ffmpeg support so I thought maybe ffmpeg might need those kinds of flags enabled too?
[22:37] <MachinaeWolf> because moc with the aac and related flags aren't doing it :(
[22:37] <c_14> You mean when compiling ffmpeg?
[22:37] <MachinaeWolf> yes
[22:38] <c_14> The decoder should be in by default so long as you don't disable anything iirc.
[22:39] <MachinaeWolf> ah
[22:39] <MachinaeWolf> Are you good with moc then lol?
[22:40] <c_14> I have no idea what that is.
[22:40] <MachinaeWolf> ok, just thought I'd ask
[23:24] <FrEaKmAn_> hi all..
[23:25] <FrEaKmAn_> I convert a file with -vcodec libx264 -acodec aac -strict -2 to mp4
[23:25] <FrEaKmAn_> now when I want to cut it, audio and video are not synced
[23:25] <FrEaKmAn_> if I do -c copy
[23:29] <c_14> try adding -vsync 0
[23:31] <FrEaKmAn_> strange
[23:31] <FrEaKmAn_> actually audio is ok, but video is incorrect
[23:31] <FrEaKmAn_> is libx264 good codec?
[23:31] <c_14> Define "good codec".
[23:32] <FrEaKmAn_> something that gives the best quality and can be easily cut and concatenated
[23:32] <FrEaKmAn_> easily == fast
[23:35] <c_14> h264 should do that, except for the "best quality" part which is subjective.
[23:36] <c_14> Can you pastebin your commands and their outputs?
[23:36] <FrEaKmAn_> ok
[23:37] <iive> FrEaKmAn_: have in mind that video is always cut at keyframe, so having small keyframe interval helps for precise cutting.
[23:38] <FrEaKmAn_> c_14: http://pastie.org/9423416
[23:38] <FrEaKmAn_> iive: ok
[23:39] <FrEaKmAn_> so based on the code, if I specify vcodec with cutting it works
[23:39] <FrEaKmAn_> but afaik, the original video should already have libx264 codec
[23:39] <c_14> What about the output?
[23:39] <c_14> Also, you can do the cutting in the first step.
[23:39] <iive> sorry, i mean when you -c copy, it cuts at keyframe.
[23:39] <FrEaKmAn_> so vcodec copy should be same as vcodec libx264
[23:39] <c_14> Also, try moving -ss and -t to the input side from the output side.
[23:40] <iive> when you reencode, you can start at any frame.
[23:40] <c_14> FrEaKmAn_: It isn't. -vcodec copy copies the stream, -vcodec libx264 reencodes the stream.
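In other words, something like the following for a stream-copy cut, with the seek moved before -i (the file names and the 30-second offset are just placeholders):
    ffmpeg -ss 30 -i video_out.mp4 -t 20 -c copy -y video_cut.mp4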
[23:40] <FrEaKmAn_> ow
[23:41] <iive> keyframes are like jpeg pictures, the p and b frames add only the difference from the already decoded frames.
[23:41] <iive> if you don't start decoding at keyframe, you would be getting quite funny result.
[23:41] <FrEaKmAn_> so I have to define vcodec everytime?
[23:41] <c_14> Until the next keyframe anyway.
[23:41] <c_14> No.
[23:42] <c_14> Unless you want to reencode.
[23:42] <FrEaKmAn_> I dont
[23:42] <FrEaKmAn_> just want to cut the file
[23:43] <FrEaKmAn_> so what do you suggest?
[23:44] <c_14> You could just change your first command to ffmpeg -i video.mp4 -t 20 -c:v libx264 -c:a aac -strict -2 -y video_out.mp4
[23:44] <c_14> Assuming you want to reencode in that step.
[23:44] <c_14> If you don't, use: ffmpeg -i video.mp4 -t 20 -c copy -y video_out.mp4
[23:45] <iive> first cut, then reencode.
[23:45] <c_14> you can reencode and cut in the same step.
[23:47] <FrEaKmAn_> ok
[23:47] <FrEaKmAn_> the point is I want to encode only once
[23:47] <FrEaKmAn_> and cut multiple times
[23:47] <FrEaKmAn_> and cutting has to be fast
[23:48] <FrEaKmAn_> if I cut and reencode, then it's kinda slow
[23:48] <FrEaKmAn_> I just don't understand
[23:48] <FrEaKmAn_> c_14: If you don't, use: ffmpeg -i video.mp4 -t 20 -c copy -y video_out.mp4
[23:48] <FrEaKmAn_> video.mp4 is already encoded?
[23:49] <FrEaKmAn_> or I must encode video_out.mp4?
[23:49] <iive> then set -keyint 25 to have keyframe every second. compression would suffer, but you'd have 1sec precise cutting points.
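In the ffmpeg CLI that keyframe interval is usually set with -g (or via -x264opts keyint=25), so a sketch of iive's suggestion would look roughly like:
    ffmpeg -i video.mp4 -t 20 -c:v libx264 -g 25 -c:a aac -strict -2 -y video_out.mp4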
[23:52] <FrEaKmAn_> hm.. same results
[23:53] <FrEaKmAn_> http://pastie.org/private/tceebf77bsv6zsdidixlg
[23:55] <c_14> hmm, can't see why it would desync
[23:56] <FrEaKmAn_> me2
[23:56] <FrEaKmAn_> as I said, audio is ok
[23:56] <FrEaKmAn_> video is totally incorrect
[00:00] --- Sun Jul 27 2014


More information about the Ffmpeg-devel-irc mailing list