[Ffmpeg-devel-irc] ffmpeg.log.20170719
burek
burek021 at gmail.com
Thu Jul 20 03:05:01 EEST 2017
[00:08:58 CEST] <jcarpenter2> is there a good way to guess what codec to use based on the contents of a file?
[00:18:41 CEST] <jkqxz> Use what libavformat tells you when you open the file?
[00:20:33 CEST] <jcarpenter2> how does it tell you?
[00:28:17 CEST] <jkqxz> Call avformat_find_stream_info().
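A minimal sketch of what jkqxz is describing, assuming a libavformat recent enough to have AVCodecParameters (file handling and error checks are illustrative only):

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;

        if (argc < 2)
            return 1;
        av_register_all();  /* needed before FFmpeg 4.0 */
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;
        /* each stream now carries the codec id libavformat detected */
        for (unsigned i = 0; i < fmt->nb_streams; i++)
            printf("stream %u: %s\n", i,
                   avcodec_get_name(fmt->streams[i]->codecpar->codec_id));
        avformat_close_input(&fmt);
        return 0;
    }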
[02:08:37 CEST] <c3r1c3-Win> Is there some place that I can look up what the AVERROR(AV_LOG_ERROR) numbers mean? (and actually all the various libav error codes?)
[02:08:45 CEST] <Diag> probably google
[02:11:03 CEST] <jkqxz> They are negated errno, with a few extra ones which are in <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavutil/error.h>.
[03:42:49 CEST] <c3r1c3-Win> I guess that means it doesn't exist.
[03:43:37 CEST] <c3r1c3-Win> Why the hell would one have error codes but no decoder? (And yes I am aware of the code call that sorta does it)
[03:48:07 CEST] <jcarpenter2> c3r1c3-Win: use av_strerror
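A small sketch of the av_strerror route jcarpenter2 mentions; the same header also provides the av_err2str() convenience macro. The wrapper name here is made up for illustration:

    #include <stdio.h>
    #include <libavutil/error.h>

    /* hypothetical helper: print a readable message for a libav error code */
    static void print_averror(int err)
    {
        char buf[AV_ERROR_MAX_STRING_SIZE];

        if (av_strerror(err, buf, sizeof(buf)) < 0)
            snprintf(buf, sizeof(buf), "unknown error %d", err);
        fprintf(stderr, "error %d: %s\n", err, buf);
    }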
[08:48:59 CEST] <dynek> hey - I'm trying to find out how ffmpeg outputs jpg files, which library is it using? gdb doesn't tell me libjpeg is called.
[09:12:49 CEST] <thebombzen> dynek: it uses the system libjpeg
[09:13:09 CEST] <thebombzen> IIRC
[09:13:21 CEST] <thebombzen> if it doesn't then it's using its own internal jpeg encoder. it would be one of those two
[09:13:51 CEST] <thebombzen> sorry, scratch that. It uses its own internal jpeg encoder
[09:21:22 CEST] <dynek> thebombzen: thanks for your answer - what I'm trying to find out is why cpu consumption is so heavy when reading a h264 stream and capturing a jpg per second.
[09:21:56 CEST] <dynek> Reading the stream and restreaming it using hls uses close to 0% cpu but capturing a jpg every second uses like 40/50% cpu
[09:21:59 CEST] <dynek> will do
[09:22:32 CEST] <thebombzen> if you're reading the stream in realtime, that shouldn't really use all that much cpu. if you're reading it faster than realtime then FFmpeg will usually use 100% because it'll go as fast as possible
[09:32:52 CEST] <dynek> thebombzen: https://pastebin.com/3LxSkjxu
[09:34:05 CEST] <thebombzen> dynek: that's because to output a jpeg once a second requires you to decode the video stream
[09:34:16 CEST] <thebombzen> whereas to restream it you don't have to decode the H.264 to raw video first
[09:34:28 CEST] <thebombzen> also, "complete console output" means complete console output
[09:34:41 CEST] <dynek> ah sorry for that
[09:36:10 CEST] <dynek> I gave va-api a try but wasn't very successful with my AMD CPU with integrated GPU - the best way to go might be to select a less cpu-consuming codec on the streamer end
[09:39:45 CEST] <Fig1024> what resolution is the video
[09:40:19 CEST] <dynek> Stream #0:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 29.83 tbr, 90k tbn, 2108081.23 tbc
[09:53:10 CEST] <Fig1024> so you are decoding and encoding 1920x1080 jpeg picture once a second?
[10:10:17 CEST] <dynek> I guess that's how it should be phrased yes, reading an h264 stream restreamed as-is (copy) using hls and decoding the stream to take a screenshot per second
[10:12:28 CEST] <dynek> minimum 60% cpu
[10:15:40 CEST] <Fig1024> 60% across all cores?
[10:17:00 CEST] <dynek> nope a single core
[10:17:18 CEST] <thebombzen> once a second?
[10:17:20 CEST] <thebombzen> not a surprise
[10:17:42 CEST] <dynek> I'm just "worried" because I am currently testing this to monitor network cameras and if a single camera consumes between 60 to 100% cpu I wonder how the machine will behave with 8/10 cameras :-)
[10:17:52 CEST] <Fig1024> I'm guessing it has to decode a lot of frames depending on the keyframe interval. If you are doing C++ stuff you can check packet flags to see if a video packet has a full frame and decode only that one
[10:18:24 CEST] <dynek> thebombzen: what do you mean "once a second" - if I wanted 5 frames per second or 1 every minute it would still have to decode the whole stream anyway, right? It won't change cpu consumption
[10:18:58 CEST] <dynek> Fig1024: not doing C++ - I'm using shinobi (nodejs) which calls ffmpeg behind the scenes
[10:19:37 CEST] <thebombzen> still not a surprise
[10:19:42 CEST] <thebombzen> 60% of one core is not very much
[10:19:49 CEST] <thebombzen> for decoding an HD video stream
[10:22:13 CEST] <dynek> Are you familiar with va-api? Any idea if that would help in such case?
[10:23:36 CEST] <Fig1024> if you only need to decode for preview purposes, the main CPU saver would be to only examine packets with keyframe
[10:24:19 CEST] <dynek> and is that doable from command line somehow?
[10:48:15 CEST] <kerio> -discard nokey
[10:48:45 CEST] <kerio> or something
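Roughly what kerio is getting at, as an untested sketch: -discard nokey drops non-key packets at the demuxer, while -skip_frame nokey tells the decoder to skip everything but keyframes. The camera URL and output pattern are made up, and the capture rate then follows the keyframe interval rather than a fixed one per second:

    ffmpeg -skip_frame nokey -i rtsp://camera.example/stream -vsync 0 -q:v 2 shot_%04d.jpg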
[10:58:43 CEST] <dynek> this doesn't seem to lower cpu consumption - I'll try to lower the quality of the camera or use va-api
[10:58:54 CEST] <dynek> Thank you guys for your answers
[11:46:40 CEST] <dynek> I'm giving it a try with va-api (might speed up decoding hopefully) however it fails with https://pastebin.com/1TBfc1EN
[11:46:43 CEST] <dynek> full output this time :-)
[13:20:22 CEST] <wondiws> hi, do you guys support HE-AAC encoding?
[13:26:01 CEST] <JEEB> wondiws: the included AAC encoder IIRC doesn't but if you build with fdk-aac you can get HE and HEv2
[13:26:57 CEST] <JEEB> the problem with fdk-aac is that you cannot distribute FFmpeg built with it :)
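For reference, a sketch of an HE-AAC encode with such a build (it requires FFmpeg configured with --enable-libfdk-aac and --enable-nonfree; file names and bitrates are just examples):

    ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he -b:a 64k output.m4a
    ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he_v2 -b:a 32k output.m4a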
[13:27:29 CEST] <wondiws> JEEB, yeah, I suppose so. Debian seems to distribute fdk, and they are puritan :P
[13:28:07 CEST] <furq> no they don't
[13:28:27 CEST] <furq> they distribute a separate fdk-aac, which is fine
[13:28:42 CEST] <furq> you just can't link it to ffmpeg and distribute it without breaching the gpl
[13:29:01 CEST] <wondiws> furq, oh, I was under the assumption it was about some HE-AAC patent
[13:29:19 CEST] <furq> no it's some dumb gpl-incompatible clause in the fdk license
[13:29:22 CEST] <JEEB> yea
[13:29:29 CEST] <furq> it's still an oss license, so debian can distribute it
[13:29:35 CEST] <furq> you just can't link it to anything lgpl/gpl
[13:29:35 CEST] <JEEB> separate, that is
[13:30:22 CEST] <JEEB> also I'm not sure about LGPL, but it is generally disabled without enable-nonfree in FFmpeg's configure IIRC
[13:31:24 CEST] <JEEB> wondiws: FFmpeg only distributes source code which is why it doesn't give a fuck about patents, and debian generally seems to not care too much either (other than it defaults to an install without those enabled by default)
[13:31:47 CEST] <furq> the current debian ffmpeg doesn't care about patents
[13:31:50 CEST] <JEEB> although to be honest if you start looking into patents you'll find that everything is patented (at least in the US)
[13:31:53 CEST] <furq> it's shipped with x264 for ages
[13:31:59 CEST] <JEEB> furq: well you have libavcodec and libavcodec-extra
[13:32:06 CEST] <JEEB> by default libavcodec gets installed
[13:32:11 CEST] <JEEB> -extra contains x264 etc
[13:32:14 CEST] <JEEB> that's what I meant
[13:32:21 CEST] <JEEB> that the default install comes without it, but it's there :)
[13:32:48 CEST] <furq> https://packages.debian.org/buster/libavcodec57
[13:32:52 CEST] <furq> links to libx264 and libx265
[13:32:57 CEST] <furq> i'm not sure what -extra does any more
[13:33:27 CEST] <furq> apparently it's just amr stuff
[13:33:39 CEST] <furq> Because this package links against libraries that are licensed under Apache License 2.0, the resulting binaries are distributed under the GPL version 3 or later.
[13:33:42 CEST] <furq> aha
[13:34:14 CEST] <furq> but yeah `apt-get install ffmpeg` will get you an ffmpeg with patented codecs, and has done for years
[13:34:24 CEST] <furq> at least if you were running testing
[13:34:59 CEST] <furq> the only distro i've seen lately which is militant about removing patented stuff is opensuse
[13:35:28 CEST] <wondiws> I'm lazy: but how do you extract an AC3 (the third one specifically) from a VOB?
[13:35:45 CEST] <furq> -i foo.vob -c:a copy -map 0:a:2 out.ac3
[13:35:46 CEST] <wondiws> I'm googling it, but if you know it by heart, you can tell :)
[13:35:52 CEST] <wondiws> thanks
[13:36:42 CEST] <JEEB> furq: ok - it might have changed then. I still remember the non-extra package being theora and vorbis and stuff only
[13:36:50 CEST] <JEEB> could have changed since, I don't use debian every day :)
[13:37:32 CEST] <furq> i remember that as well but i can't find any packages bearing it out
[13:37:35 CEST] <furq> https://packages.debian.org/wheezy/libavcodec53
[13:37:45 CEST] <furq> i guess they could have updated that since changing policy though
[13:38:38 CEST] <furq> i do remember being pleasantly surprised when i updated to stretch and the distro ffmpeg was actually good enough to use
[13:52:23 CEST] <wondiws> furq, and for video, I have to do: "ffmpeg -i foo.m4v -c:v copy out.m4v"?
[13:53:15 CEST] <wondiws> I tried to make a very low bitrate video with Handbrake, 300kbps x265, but the AAC audio is almost that size, so I want to correct that ;)
[13:57:58 CEST] <JEEB> you've got nothing on floppy_tiger.mkv boy
[13:58:07 CEST] <JEEB> which is 22 minutes of video in 1.44MiB
[13:58:09 CEST] <JEEB> with audio
[13:59:50 CEST] <furq> https://www.youtube.com/watch?v=jB0vBmiTr6o
[13:59:58 CEST] <furq> you could get 360 of these in that space
[14:03:43 CEST] <wondiws> JEEB, that's amazing
[14:06:00 CEST] <wondiws> furq, if I extract that from youtube I get 70MB
[14:10:31 CEST] <JEEB> wondiws: granted at such low rates the headers of things start eating space so not only the resolution but the frame rate had to be dropped
[14:10:36 CEST] <JEEB> so it wasn't the original 24Hz
[14:26:02 CEST] <wondiws> JEEB, but can ffmpeg extract x264 from mp4? Because I get a message "could not find tag" or something
[14:27:16 CEST] <JEEB> it can demux H.264 and HEVC from ISOBMFF, yes
[14:27:19 CEST] <JEEB> unless the file is broken
[14:27:38 CEST] <JEEB> `ffmpeg -i hurr.mp4 -c copy out.264`
[14:28:00 CEST] <JEEB> should extract the H.264 track as raw Annex B bit stream
[14:28:20 CEST] <JEEB> (named so because it's specified in the Annex B of the H.264/AVC specification)
[15:23:45 CEST] <flenoir> hi all
[15:27:18 CEST] <flenoir> I'm looking for a way to encode a file in XDCAM HD 422 50Mbps. I tested ffmbc and it's working but it seems slow even when changing the thread count. how is it possible to encode in XDCAM using ffmpeg?
[15:31:16 CEST] <Fig1024> using NVidia encoder is the best way to save CPU power. Intel Graphics has QuickSync encoder which is about 50% CPU saving
[15:36:15 CEST] <dystopia_> are you trying to encode a file that is "XDCAM HD 422 50Mbps" to some other format
[15:36:40 CEST] <dystopia_> or are you trying to encode some video to "XDCAM HD 422 50Mbps"??
[15:41:28 CEST] <furq> flenoir: the mpeg2video encoder doesn't support frame multithreading
[15:42:31 CEST] <flenoir> @dystopia i want to encode a file to XDCAM HD as output
[15:42:59 CEST] <flenoir> @furq is there any other solution using ffmpeg, ?
[15:46:21 CEST] <furq> https://superuser.com/questions/666862/ffmpeg-xdcam-hd-vtag-profiles
[15:46:23 CEST] <wondiws> does Handbrake use ffmpeg?
[15:46:33 CEST] <furq> it uses the libs
[15:48:15 CEST] <wondiws> furq, how do I mux video.256 and audio.m4a to output.mp4?
[15:48:32 CEST] <furq> -i video.264 -i audio.m4a -c copy out.mp4
[15:48:44 CEST] <furq> also you didn't need to demux the video stream to do that
[15:55:29 CEST] <Arsen> How do I know what align value to tell swscale to use when converting an RGB888 array to an AVFrame?
[15:58:19 CEST] <Arsen> Actually, here is the actual warning: [swscaler @ 0x7fda64009da0] Warning: data is not aligned! This can lead to a speedloss
[15:58:33 CEST] <Arsen> It seems to cause a problem with e.g. imgur, making me unable to upload my mp4s
[15:59:27 CEST] <JEEB> well that warning by itself just means that the input data to swscale is not aligned, and thus it's slower than it could be with aligned data
[15:59:44 CEST] <JEEB> so if there's a failure it's later in the chain
[16:00:06 CEST] <Arsen> hmm.. when I just use the FFmpeg CLI it works which is why I assumed that's the issue
[16:00:44 CEST] <JEEB> also you should be allocating those buffers with the av_ allocators which IIRC try to make sure the data is aligned
[16:00:59 CEST] <JEEB> I don't remember aligned on how many bytes now
[16:01:05 CEST] <JEEB> 16 or so?
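A sketch of the av_-allocator approach JEEB describes, letting av_frame_get_buffer() pick aligned planes; the 32-byte alignment is a guess, and the pixel data would still have to be copied from the QImage row by row since its stride rarely matches the frame's linesize:

    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    /* hypothetical helper: RGB24 frame with planes allocated by libavutil */
    static AVFrame *alloc_rgb_frame(int width, int height)
    {
        AVFrame *f = av_frame_alloc();
        if (!f)
            return NULL;
        f->format = AV_PIX_FMT_RGB24;
        f->width  = width;
        f->height = height;
        if (av_frame_get_buffer(f, 32) < 0) {  /* 32-byte alignment */
            av_frame_free(&f);
            return NULL;
        }
        return f;
    }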
[16:01:11 CEST] <Arsen> The inputs come from QImages though
[16:01:40 CEST] <JEEB> well, in any case the non-alignment warning is not fatal
[16:01:50 CEST] <JEEB> so if there's a failure it's somewhere else
[16:01:54 CEST] <Arsen> Okay. I'll try to debug further
[16:02:16 CEST] <JEEB> basically it just means "warning! your data is not X byte aligned and thus not all optimized code paths can be utilized"
[16:02:17 CEST] <Arsen> Oh, I need to add, the video files are just fine when viewed in, eg, VLC
[16:02:30 CEST] <Arsen> oh I see, so that's what speedloss means
[16:02:55 CEST] <JEEB> yea, a lot of assembly requires a certain alignment with data
[16:03:24 CEST] <JEEB> (you do have some filters which actually fail/crash/etc with "bad" alignment)
[16:03:31 CEST] <JEEB> (but 'scale' is not one of them)
[16:03:44 CEST] <Arsen> I don't use any filters (other than scale) so I should be good
[17:15:08 CEST] <SolarAquarion> libavdevice/libavdevice.so: undefined reference to `ff_reverse'
[17:15:17 CEST] <SolarAquarion> i'm building git and i keep on having this issue
[17:16:52 CEST] <kepstin> SolarAquarion: built from a clean git checkout?
[17:16:53 CEST] <JEEB> sounds like you haven't cleaned properly
[17:17:12 CEST] <SolarAquarion> kepstin, JEEB i'm even doing out of directory builds
[17:17:26 CEST] <JEEB> yea, that's all n' good :)
[17:17:43 CEST] <kepstin> SolarAquarion: into a clean directory, then? :)
[17:17:45 CEST] <JEEB> but yea, ff_xxxx should be an internal symbol so it should be there unless something's badly wrong
[17:17:59 CEST] <SolarAquarion> JEEB, i'm getting that even when i'm building into a clean directory
[17:18:08 CEST] <JEEB> http://fate.ffmpeg.org/
[17:18:12 CEST] <kepstin> that particular symbol is in libavutil, and should be pulled in during linking
[17:18:38 CEST] <JEEB> seems like shared builds on FATE pass
[17:20:10 CEST] <SolarAquarion> rm -rf $srcdir/build
[17:20:10 CEST] <SolarAquarion> mkdir $srcdir/build
[17:20:10 CEST] <SolarAquarion> cd $srcdir/build
[17:20:10 CEST] <SolarAquarion> rm -rf $srcdir/$pkgname/config.h
[17:20:10 CEST] <SolarAquarion> ../$pkgname/configure \
[17:20:40 CEST] <SolarAquarion> config.h was blocking my clean build directory
[17:21:02 CEST] <JEEB> that just means that at some point you had built in the src dir
[17:21:17 CEST] <SolarAquarion> JEEB, yes, i suppose
[17:21:17 CEST] <JEEB> git clean -dfx is actually better for "please just clean this source directory completely"
[17:21:39 CEST] <JEEB> you will lose any additional files or directories in the git repo of course :P
[17:23:45 CEST] <SolarAquarion> JEEB, since i am doing it via PKGBUILD, i just deleted the git one, and the srcdir
[17:24:07 CEST] <SolarAquarion> the bare repository and the actual src one
[17:43:27 CEST] <SolarAquarion> JEEB, i still get it
[17:50:36 CEST] <kepstin> SolarAquarion: maybe pastebin your configure line and the last couple hundred or so lines of 'make V=1' ?
[17:52:59 CEST] <SolarAquarion> here's the PKGBUILD i use, i just haven't done the make V=1 yet
[18:10:10 CEST] <SolarAquarion> kepstin, JEEB https://pastebin.com/0qW7tfF6
[18:11:53 CEST] <kepstin> SolarAquarion: I was hoping to see the *configure command line* that you are running, and the make output with "V=1" specified please
[18:12:25 CEST] <kepstin> since otherwise it doesn't show any of the compiler or linker options, so it's pretty much impossible to see what's breaking
[18:12:28 CEST] <SolarAquarion> kepstin, it is and the configure command line is in the PKGBUILD
[18:13:16 CEST] <SolarAquarion> fuck, i get it
[18:14:00 CEST] <kepstin> I don't know what this PKGBBUILD is or where to find it, if it contains the configure line can you include that in the paste too?
[18:15:35 CEST] <SolarAquarion> kepstin, http://ix.io/yzQ
[18:27:10 CEST] <SolarAquarion> kepstin, https://pastebin.com/i5gykGZ3
[18:27:22 CEST] <SolarAquarion> here's the one with the proper make v=1
[18:27:43 CEST] <kepstin> thanks. I've been playing around with it here and haven't been able to reproduce the issue
[18:28:09 CEST] <SolarAquarion> i don't know what's going on in my side
[18:28:10 CEST] <kepstin> btw, that PKGBUILD is setting CC and CXX to clang, which is being ignored, might as well drop those lines.
[18:28:44 CEST] <SolarAquarion> kepstin, and the change in CC probably doesn't matter right
[18:28:57 CEST] <kepstin> I don't expect it to.
[18:33:58 CEST] <SolarAquarion> kepstin, shouldn't that .so be rebuilt like any .so that's created by a build, and therefore only have the newest stuff i guess
[18:34:12 CEST] <SolarAquarion> especially if you do a out of directory build
[18:34:28 CEST] <kepstin> it's actually failing on the ffmpeg cli tool link, not the libraries
[18:35:02 CEST] <SolarAquarion> kepstin, so, what would the fix be?
[18:35:20 CEST] <kepstin> I still haven't figured out why it's breaking, so I have no idea how to fix it
[18:37:06 CEST] <kepstin> just curious - does it fail only when upgrading (i.e. you already have an ffmpeg installed) or also on new builds?
[18:37:40 CEST] <SolarAquarion> kepstin, new builds and upgrades
[18:37:52 CEST] <SolarAquarion> like i changed to the official ffmpeg
[18:38:12 CEST] <SolarAquarion> and then started to build ffmpeg "full" after
[18:43:36 CEST] <kepstin> this is made doubly weird by the fact that libavdevice - at least in the git master checkout that I have here - doesn't even use the ff_reverse symbol at all.
[19:33:36 CEST] <momomo> anyone with experience with satellite setups? tvheadend?
[19:34:22 CEST] <momomo> is the outer cable stuff on a coaxial satellite cable ( lnb to receiver ) important? or only the inner? should one keep the stuff on the outside when adding the head to the cable?
[19:39:20 CEST] <kepstin> momomo: the "outer stuff" is the cable shielding, and when the connector is properly installed, it should be electrically connected to the body of the connector.
[19:40:50 CEST] <momomo> kepstin: ok, so one cannot discard it or cut that part off then?
[19:41:07 CEST] <momomo> i am not able to get any satellite signal at all .. maybe that's the reason?
[19:41:32 CEST] <momomo> when adding the connector that is
[19:42:05 CEST] <kepstin> hmm, this isn't really the place for troubleshooting physical cable connections.
[19:42:22 CEST] <momomo> kepstin: i know .. but there isn't any better place .. and there's no activity anyway
[19:42:26 CEST] <momomo> and it's kind of related :p
[19:42:31 CEST] <kepstin> like, you can just look up youtube videos on how to attach connectors to cables, and some of them might even be correct.
[19:42:49 CEST] <Mavrik> Failing to connect the shielding on coaxial cable will give you bad time :P
[19:43:10 CEST] <kepstin> is this an F connector on RG-6-style cable?
[19:43:26 CEST] Action: kepstin hates those things, but they're industry standard around here... :/
[19:43:36 CEST] <momomo> Mavrik: no shit ... it is so fragile that most of it feels like it's not important .. when adding the connector, which is tight, most of it kind of falls off .. maybe my cable is shit though
[19:45:33 CEST] <momomo> google is so crap nowadays that youtube is much much better to find info on most things
[19:45:41 CEST] <momomo> https://youtu.be/XXQ0a1XPnrc?t=114
[19:46:20 CEST] <kepstin> that statement is a bit nonsensical, considering that youtube is a part of google ;)
[19:46:21 CEST] <momomo> i am not sure what he means at 114 .. since he kind of keeps after
[19:46:42 CEST] <momomo> kepstin: :p you know i meant the search engine :p
[19:51:59 CEST] <kepstin> but yeah, attaching an f connector should be, iirc, remove about ¼" to ½" of the outer plastic casing, fold back the metal shielding over the cable, remove the remaining plastic layer to the core, put connector over top.
[19:52:11 CEST] <kepstin> been a while since I've done one, tho.
[20:09:09 CEST] <Gidian> I'm trying to make a video from a still image and audio at 1080p. It's an hour long and for youtube... would lowering the framerate be wise?
[20:09:43 CEST] <JEEB> if you are uploading for youtube you want it as pristine as possible
[20:09:48 CEST] <JEEB> since they will re-encode it anyways
[20:10:07 CEST] <JEEB> and while youtube will kill your content anyways, in general the better the source the better it is for you
[20:10:16 CEST] <Gidian> Haven't done it yet because I'm not sure how to write out the command, just installed it yesterday.
[20:10:33 CEST] <Gidian> Just worried about filesize
[20:10:40 CEST] <JEEB> the only reason to lower bit rate would be "uploading X megabytes would take too long with my network connection" :P
[20:10:49 CEST] <Gidian> Gotcha
[20:10:57 CEST] <JEEB> since youtube re-encodes
[20:11:08 CEST] <JEEB> and will keep your source and re-encode it again if they do major changes to their workflow
[20:11:58 CEST] <Gidian> I made a 10hour ambient audio/video with movie maker, uploaded it to youtube and it came out 240p
[20:12:16 CEST] <JEEB> SolarAquarion: seems like your issue got just found on mingw-w64 toolchain
[20:12:20 CEST] <Gidian> How do people get 10 hour content uploaded to youtube and still have high quality
[20:12:29 CEST] <furq> Gidian: the 1080p version will show up later
[20:12:34 CEST] <JEEB> SolarAquarion: https://ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213790.html
[20:12:53 CEST] <furq> also if it's a still image then you might as well lower the framerate
[20:13:02 CEST] <JEEB> possible ways to test a fix https://ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213815.html
[20:13:03 CEST] <Gidian> That's what I was thinking
[20:13:28 CEST] <JEEB> well the encoder should in theory use references and use a lot less bandwidth to begin with :P
[20:13:36 CEST] <JEEB> that's why one uses x264's CRF mode etc
[20:15:10 CEST] <Gidian> care to help a newb out with a command to use for such a video?
[20:15:19 CEST] <DHE> Gidian: it starts at 240p while youtube does background processing. if it's 10 hours, give it a while. by tomorrow you may have 1080p options available
[20:15:37 CEST] <Gidian> Good to know
[20:15:45 CEST] <DHE> this is one reason why the big youtube users will upload stuff ahead of time as private and make it live at a later time
[20:16:30 CEST] <JEEB> with a static picture you could just use -c:v libx264 -crf 0 -x264-params "keyint=inf"
[20:17:02 CEST] <JEEB> lossless video, infinite (maximum) keyint
[20:17:19 CEST] <JEEB> (since I don't remember if any value for -g maps to the infinite one
[20:18:02 CEST] <Gidian> Thanks I
[20:18:08 CEST] <Gidian> I'll try that one out
[20:18:47 CEST] <JEEB> youtube recommends shorter GOPs but that's just seemingly bullshit/more convenient for them
[20:19:02 CEST] <JEEB> since IIRC they do a mezzanine encode on their side anyways, and then transcode that in chunks
[20:20:22 CEST] <DHE> that is true. but I suppose the worst that could happen is it would fail to transcode quickly and you'd be stuck waiting longer for the high bitrate versions
[20:20:36 CEST] <Gidian> And where I point to the audio file and img file in that command would matter I assume
[20:21:05 CEST] <Gidian> very new to this
[20:21:21 CEST] <Gidian> But I'm pretty sure ffmpeg is the solution I've been looking for
[20:21:46 CEST] <Gidian> Exporting hours of audio with an image using software is just ridiculous
[20:32:31 CEST] <JuanPotato> Hey, if I know of an open source project that is violating ffmpeg's license, where do i go to do something about it? Specifically it is https://github.com/DrKLO/Telegram . They statically link ffmpeg and will often publish a new version to the appstore before github. And they compile using `--enable-gpl`
[20:40:55 CEST] <furq> uh
[20:41:03 CEST] <furq> is your objection that they publish to the app store before pushing to github
[20:41:08 CEST] <furq> i don't think that constitutes a gpl violation
[20:42:06 CEST] <c_14> It could be a violation if they don't link to the ffmpeg source code on the app store/within the app.
[20:42:12 CEST] <c_14> But I'm not sure whether or not they do
[20:42:32 CEST] <BtbN> Isn't it also a violation as you can't use your own ffmpeg, due to app signing?
[20:42:49 CEST] <BtbN> so technically all Appstore-Apps using ffmpeg, even a shared one, are in violation
[20:43:59 CEST] <furq> that may be true but i can't imagine michael will want to spend time on that sort of thing
[20:44:08 CEST] <furq> or whoever's in charge of license enforcement, if there is anyone
[20:44:17 CEST] <c_14> I'm not sure how much that constitutes a GPL violation since you could get the app/relink and then run it on a system which doesn't enforce app signing
[20:46:50 CEST] <furq> yeah this is the android source, so you could absolutely use your own ffmpeg if you wanted to
[20:46:59 CEST] <furq> idk about iOS or whether the iOS version uses ffmpeg
[20:51:00 CEST] <Gidian> I'm reading through documentation trying to figure out where I need to plug in my source files to get an output with the command JEEB gave me, and I'm struggling to make sense of it.
[20:51:04 CEST] <Gidian> Anyone care to help?
[20:51:51 CEST] <Gidian> Still image & audio to a video
[21:04:26 CEST] <c_14> Gidian: that's only part of the command. Put it before the output "file" and it sets the video settings
[21:11:18 CEST] <Gidian> Just used "ffmpeg -i desktop/image.png -i desktop/Audio.flac -c:v libx264 -crf 0 -x264-params "keyint=inf" Output.mp4", my video completed and there is no image, just black. But it used the image's dimensions.
[21:11:52 CEST] <Gidian> Video is also not compatible with quicktime?
[21:16:32 CEST] <c_14> add -pix_fmt yuv420p
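For reference, a sketch of a complete still-image-plus-audio invocation combining the suggestions above with the usual image-looping options (-loop 1, -framerate and -shortest); the exact values and file names are examples only and this is untested:

    ffmpeg -loop 1 -framerate 2 -i desktop/image.png -i desktop/Audio.flac \
           -c:v libx264 -tune stillimage -pix_fmt yuv420p \
           -c:a aac -shortest Output.mp4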
[21:21:22 CEST] <Gidian> image is now white
[21:21:30 CEST] <Gidian> and blank still
[21:23:00 CEST] <c_14> eeeh, get rid of the x264params and try again?
[21:23:04 CEST] <c_14> don't think that should matter though
[21:23:09 CEST] <c_14> if that doesn't work
[21:24:31 CEST] <Diag> Can ffmpeg merge multiple source files into a chain?
[21:24:37 CEST] <Diag> video wise
[21:25:08 CEST] <c_14> https://trac.ffmpeg.org/wiki/Concatenate
[21:25:16 CEST] <Diag> thanks!
[21:25:21 CEST] <Diag> thats the word i couldnt remember
[21:27:55 CEST] <Blubberbub> i tried to figure out what concat does and got very confused because it looked totally different every time i looked. turns out there are at least 3 different ways to concat stuff - and you can probably combine them too :D
[21:28:25 CEST] <BtbN> primarily depends on how similar your stuff is you want to concat
[21:36:41 CEST] <Blubberbub> i think it works like this, but i might be totally wrong about it: concat:// just concats the binary data from disk, the concat demuxer demuxes the data and concats the encoded packets, and the concat filter concats already-decoded frames?
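A quick sketch of the three mechanisms Blubberbub is describing, with hypothetical file names: the concat protocol glues byte streams (only sensible for formats like MPEG-TS), the concat demuxer glues packets from a list file (same codecs/parameters required), and the concat filter glues decoded frames, so it re-encodes:

    ffmpeg -i "concat:a.ts|b.ts" -c copy out.ts
    ffmpeg -f concat -safe 0 -i list.txt -c copy out.mkv
    ffmpeg -i a.mp4 -i b.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" out.mp4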
[21:46:58 CEST] <dystopia_> if i have an encrypted video, and i have the key
[21:47:11 CEST] <dystopia_> who would i go about decrypting it with ffmpeg?
[21:47:28 CEST] <dystopia_> how*
[21:47:43 CEST] <marianina8> anyone know the best way to get frame time code like HH:MM:SS:FF from ffmpeg/ffprobe?
[21:56:25 CEST] <fauxton> hey guys, I have a fairly specific application question about archiving ~3TB of security camera video
[21:57:16 CEST] <fauxton> There are two types of files that I need to archive but they are separate. I will start explaining the first one
[21:59:07 CEST] <fauxton> So the first type is separated by camera number, there are roughly 11000 files per camera. I want to take like every 30th frame or so and concatenate the whole thing into one file per camera that plays "sped up" like a time lapse
[22:00:56 CEST] <thebombzen> marianina8: it'll be measured in seconds
[22:01:20 CEST] <thebombzen> if you want it in HH:MM:SS you'll need to convert it yourself
[22:01:23 CEST] <thebombzen> although you usually don't
[22:01:50 CEST] <kepstin> fauxton: that's easy enough, transcode each individual file doing the time lapse thing via filters, save to e.g. mpeg-ts, concatenate the resulting files. Consider re-encoding the final file if needed.
[22:01:56 CEST] <thebombzen> usually don't want it in HH:MM:SS I mean, especially if you're using a script
[22:02:15 CEST] <kepstin> fauxton: what's the length of each source file (time)?
[22:02:40 CEST] <kepstin> (and format?)
[22:03:44 CEST] <fauxton> kepstin: ok, is there a way to do like a batch for all 11k files? They are in 32MB AVI files, 640x480, which ends up being 24 seconds long
[22:04:44 CEST] <kepstin> fauxton: you'd write a script to do it, in shell or batch or whatever. This isn't the sort of thing that ffmpeg has tools to handle internally..
[22:05:16 CEST] <kepstin> wow, 11000 avi files, each 24 seconds long? that's gonna be annoying to deal with :)
[22:05:55 CEST] <kepstin> that said, generating a playlist file and using the concat demuxer (format) would probably work well for this
[22:06:22 CEST] <kepstin> could be done in all one command then, assuming the source files use a video codec that's ok with the concat format.
[22:07:30 CEST] <Coco> Is there a way to probe for luma/chroma levels for each frame? Looking to do my own plots as still graphs over time, but not sure how to pull the data.
[22:07:41 CEST] <fauxton> kepstin: I see, concat-ing them first would definitely make them easier to deal with, windows gets pretty mad when I pull up the folder
[22:08:47 CEST] <kepstin> fauxton: i'd propose doing this by making a playlist file in concat format (see https://www.ffmpeg.org/ffmpeg-formats.html#concat-1 ) listing all the files; then you can use that as an input to ffmpeg, and the rest is just a filter chain to do the "time lapse" effect, and encode to a single output file.
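The playlist kepstin mentions is just a plain text file in concat format, roughly like this (the camera file names are invented); it is fed to ffmpeg with -f concat, and the time-lapse filter chain from the next few lines goes before the output file:

    # list.txt
    file 'cam1_00001.avi'
    file 'cam1_00002.avi'
    ...

    ffmpeg -f concat -safe 0 -i list.txt ...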
[22:13:19 CEST] <kepstin> to do the time lapse effect - if you e.g. want to take every 30th frame, and play the result at 30fps, something like -vf 'select=not(mod(n\,30)),settb=1/30,setpts=N' will do it.
[22:14:33 CEST] <fauxton> kepstin: thanks, let me see what I can get it to do on a smaller amount of files to test
[22:18:19 CEST] <furq> `-vf setpts=PTS/30,fps=30`
[22:18:23 CEST] <furq> is probably quicker
[22:21:42 CEST] <fauxton> for some reason that yielded a very slightly smaller file, but since I am doing a small amount to test I couldn't discern any difference in processing time
[22:22:46 CEST] <kepstin> hmm, that setpts,fps chain will do slightly different results depending on the input framerate
[22:24:24 CEST] <fauxton> kepstin: I believe it's all 29.97 NTSC. I don't know if there's anything to gain in processing speed at this point. I am on a pretty fast machine so it's not a big concern
[22:25:28 CEST] <Diag> fauxton: Celeron dual core, 1.2ghz?
[22:26:38 CEST] <fauxton> Diag: no, I sprung for the 1.6GHz
[22:27:35 CEST] <fauxton> alright guys, I'm a command line dingus, how do I create the list with 11k file names? all the files are in one folder with no other files in it
[22:27:42 CEST] <furq> what os
[22:27:42 CEST] <Diag> fauxton: install Ubuntu in a VM and run the given script through bash
[22:27:56 CEST] <furq> yeah don't do that
[22:28:10 CEST] <furq> if this is windows then just install msys2 or cygwin or something
[22:28:15 CEST] <furq> that gives you a perfectly usable bash prompt
[22:28:16 CEST] <fauxton> furq: win7 on a 32 thread xeon machine
[22:28:34 CEST] <furq> tbh this is easy enough to do with batch
[22:28:38 CEST] <furq> i just don't remember how
[22:29:18 CEST] <furq> for /r %i in (*) do ffmpeg -i "%i" ...
[22:29:23 CEST] <furq> according to stackoverflow
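If the goal is the concat playlist kepstin described earlier rather than one ffmpeg run per file, the list can also be generated from the cmd prompt in one line, roughly like this (untested; inside a batch file each %i becomes %%i):

    (for %i in (*.avi) do @echo file '%i') > list.txt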
[22:36:55 CEST] <marianina8> thebombzen: Thanks! although here's the current command i'm using to extract frames... is there a way to modify this command to show the frame time? ffprobe -show_frames -of compact=p=0 -f lavfi "movie=testdata/avengers5.mp4,select=isnan(prev_selected_t)+gte(t-prev_selected_t\,2)"
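One possible tweak of that command, untested: ffprobe's -sexagesimal option prints time values as HH:MM:SS.microseconds, and -show_entries can limit the output to the frame time. A frame counter (the :FF part) would still have to be derived from the frame rate yourself:

    ffprobe -sexagesimal -show_frames -show_entries frame=pkt_pts_time -of compact=p=0 -f lavfi "movie=testdata/avengers5.mp4,select=isnan(prev_selected_t)+gte(t-prev_selected_t\,2)"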
[22:39:28 CEST] <fauxton> hmm I'll give that a try
[22:51:36 CEST] <marianina8> for some reason some of the videos im testing with just show a duration of N/A
[22:54:12 CEST] <Azrael_-> hi
[22:55:20 CEST] <Azrael_-> i want to export every frame of a video as jpeg. in all the guides i've seen so far i have to explicitly set the framerate, with no option to just export every frame. would this be possible?
[23:00:40 CEST] <kepstin> Azrael_-: if you use the option '-vsync 0' and don't set a framerate, it should output every frame.
[23:00:56 CEST] <kepstin> if the video is vfr, you won't be able to put it back together again, of course.
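Putting kepstin's suggestion into a full command, as a sketch (input and output names are examples; -q:v controls JPEG quality and the frames/ directory must already exist):

    ffmpeg -i input.mkv -vsync 0 -q:v 2 frames/%06d.jpg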
[23:06:23 CEST] <lungaro> trying to pass -headers with the http output protocol, it does not seem to be working -- is there some special syntax?
[23:06:33 CEST] <lungaro> I can't seem to find documentation or examples on -method PUT and -headers
[23:08:01 CEST] <kepstin> lungaro: are you trying to set more than one header?
[23:08:05 CEST] <fauxton> furq: alright, I got it to run the for loop, I am getting some "invalid headers" about every 10th file or so but the files that it glitches on play fine in windows media player
[23:08:30 CEST] <kepstin> note that the 'headers' parameter takes a literal string of http headers, they have to be pre-encoded and have \r\n line separators
[23:08:30 CEST] <lungaro> kepstin, nope, just a single value like "X-Whatever: True"
[23:08:37 CEST] <lungaro> oh interesting
[23:08:54 CEST] <lungaro> so I put \r\n in (like escaped) or literal bytes \r\n ?
[23:08:55 CEST] <kepstin> hmm, that should work with just one header (although it'll print a warning and add the missing newline)
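For a plain http output (not the HLS muxer's per-segment requests, which as discussed further down do not inherit the option), passing a header looks roughly like this, with the trailing \r\n kepstin mentions supplied explicitly via bash $'...' quoting; the URL, header and input are examples only:

    ffmpeg -re -i input.mp4 -c copy -f mpegts -method PUT -headers $'X-Whatever: True\r\n' http://example.com/upload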
[23:10:11 CEST] <kepstin> lungaro: would probably have to see your full command line and any error output to debug further
[23:10:31 CEST] <fauxton> kepstin: furq can I run multiple instances of this at once
[23:10:51 CEST] <lungaro> kepstin, k, thanks
[23:11:30 CEST] <kepstin> fauxton: I don't see why not, as long as you're writing to different output files and have enough cpu power to run the encoders...
[23:15:24 CEST] <fauxton> kepstin: https://www.screencast.com/t/y1JMwU5IwfiB
[23:17:33 CEST] <Diag> fauxton nice Photoshop
[23:19:48 CEST] <lungaro> i dont get what that is -- someone is photoshoping task manager?
[23:22:57 CEST] <fauxton> Diag: haters will say it's photoshop https://www.screencast.com/t/BYivPTk5G
[23:23:20 CEST] <fauxton> lungaro: jealous of all my threads!
[23:26:28 CEST] <lungaro> kepstin, alright, i have some debug. I was messing around hoping I could figure it out but I can't -- https://nopaste.me/view/b1bd8495#Xra1IVSUU1aKgwrADHTcsFslOaFuVoLP
[23:26:34 CEST] <lungaro> i'm fairly confused by the output here
[23:27:48 CEST] Action: kepstin "only" has 16 threads on his new ryzen box :)
[23:28:56 CEST] <kepstin> lungaro: HLS is incompatible with the http output protocol
[23:29:31 CEST] <kepstin> well, with the options used there anyways
[23:29:35 CEST] <lungaro> because of the rename ?
[23:29:47 CEST] <kepstin> it's designed for use with local filesystem stuff where it can rename files, yeah
[23:30:11 CEST] <kepstin> it *might* work with -hls_flags -temp_file
[23:30:35 CEST] <kepstin> but this is not the sort of thing that gets tested regularly :)
[23:30:41 CEST] <fauxton> welp, the disk is the bottleneck. It's all on a WD purple 4TB which is normally fast but it seems to hate small files
[23:30:58 CEST] <lungaro> bummer if there isn't a way to work w/o rename. I see it's doing chunked transfer encoding at the http level, so it knows how to stream
[23:32:09 CEST] <lungaro> The server is getting requests so it's attempting to upload data, the requests just don't have the header I am providing...
[23:32:11 CEST] <kepstin> lungaro: like I said, try "-hls_flags -temp_file" - but note this could also cause issues because someone trying to play the stream on the other end might see an incomplete playlist file
[23:32:50 CEST] <kepstin> lungaro: the hls stuff is intended to be run on the local disk on the same machine as the http server.
[23:33:35 CEST] <kepstin> I suppose if the web server implements atomic updates via PUT on its end it would be ok
[23:34:02 CEST] <lungaro> yah, the lack of the header is what's killing me
[23:34:06 CEST] <fauxton> kepstin: furq thanks for all the help guys I am going to let it run for a few hours and come back and see
[23:35:52 CEST] <kepstin> lungaro: it's all a bit tricky because the hls "muxer" has to write multiple files, and has to open each separately, it doesn't only use the single output file ffmpeg expects.
[23:35:55 CEST] <kepstin> so, yeah.
[23:37:11 CEST] <kepstin> it looks like it would require code changes to the hls muxer to have it pass the '-headers' option to the underlying protocols it uses
[23:37:28 CEST] <kepstin> it has special support for passing through the -method argument, but not -headers
[23:37:40 CEST] <kepstin> (it actually defaults the method to 'PUT' as well)
[23:44:04 CEST] <Coco> Hoping someone can point me in the right direction. Do any tools currently exist that allow you to extract rgb/luma values for each frame of video?
[23:44:41 CEST] <lungaro> kepstin, where did you see this? In the code?
[23:47:38 CEST] <lungaro> looks like it passes them https://github.com/FFmpeg/FFmpeg/blob/release/3.2/libavformat/http.c#L391
[23:47:40 CEST] <cryptodechange> have the presets changed from this at all?
[23:47:42 CEST] <cryptodechange> https://superuser.com/questions/564402/explanation-of-x264-tune
[23:47:53 CEST] <kepstin> Yeah, I was inspecting the HLS muxer code
[23:47:56 CEST] <cryptodechange> Trying to find decent documentation on what the presets do
[23:48:08 CEST] <kepstin> HLS muxer, not HTTP protocol
[23:48:27 CEST] <lungaro> ah
[23:52:37 CEST] <kepstin> cryptodechange: you generally don't need to know how the presets work, it's just speed vs efficiency trade-off. tunes on the other hand...
[23:52:53 CEST] <furq> cryptodechange: http://dev.beandog.org/x264_preset_reference.html
[23:56:40 CEST] <kepstin> Also 'x264 --fullhelp' expands the presets and tunes.
[00:00:00 CEST] --- Thu Jul 20 2017