[Ffmpeg-devel-irc] ffmpeg.log.20180906

burek burek021 at gmail.com
Fri Sep 7 03:05:01 EEST 2018


[00:44:49 CEST] <juny_> hi anyone here?
[01:08:00 CEST] <Hello71> no
[01:31:25 CEST] <ahoo> are there popular ports/builds that don't suck and are maintained well?
[01:31:44 CEST] <ahoo> s/ports/forks
[01:38:30 CEST] <ahoo> i'm thinking about creating a new, UWC cross-platform ffmpeg GUI that is easy to maintain when new features are being added.
[01:39:08 CEST] <ahoo> but i'd need at least 1 person as a partner for the whole thing.
[01:58:27 CEST] <poutine> ahoo, do you have any examples of monumental tasks you've undertaken in the past that you've been successful at solving?
[02:00:29 CEST] <ahoo> not really. what i do (also at work) is solve problems.
[02:01:00 CEST] <ahoo> however i achieve it, i can solve it.
[02:01:25 CEST] <ahoo> as a first monumental task i could create a github project.
[02:01:35 CEST] <ahoo> and add you as a contributor :)
[02:26:30 CEST] <juny_> i am using concat https://trac.ffmpeg.org/wiki/Concatenate. It seems that I can't concatenate 2 mp3 binaries together, right? I have to write them to files first?
[02:29:38 CEST] <juny_> and, is it better to join a bunch of audio files by providing a list of paths or join them one by one along with generating each audio
[02:30:31 CEST] <juny_> case i) generate one audio > write in a file > keep doing that until all done > ffmpeg to concatenate all audios recorded in the file
[02:31:28 CEST] <juny_> case ii) after generating 2 audios > concatenate > generate the next audio > concatenate with the audio that I have had
[02:31:48 CEST] <juny_> which one is a better way in terms of speed and resource usage
[03:52:45 CEST] <ahoo> none, it doesn't matter.
[03:53:02 CEST] <ahoo> internally, ffmpeg does the same magic whether you use case i or ii
[03:53:26 CEST] <ahoo> arrrrrrrrrr
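The two approaches juny_ asks about can be sketched as command construction. This is not from the log: the file names are hypothetical, and it only builds the strings for the concat protocol (byte-level join, which works for MP3) and the concat demuxer list file described on the linked wiki page; it does not run ffmpeg.

```python
# Sketch of the two concatenation styles discussed above (file names are made up).

def concat_protocol_cmd(files, out):
    """concat: protocol — joins files at the byte level, which works for
    formats like MP3 that are concatenable as raw streams."""
    joined = "|".join(files)
    return ["ffmpeg", "-i", f"concat:{joined}", "-c", "copy", out]

def concat_demuxer_list(files):
    """concat demuxer — reads a text file with one "file '...'" line per input."""
    return "\n".join(f"file '{f}'" for f in files) + "\n"

cmd = concat_protocol_cmd(["a.mp3", "b.mp3"], "out.mp3")
listing = concat_demuxer_list(["a.mp3", "b.mp3"])
```

Either way the inputs must exist as files (or at least as named streams ffmpeg can open), which matches the answer above: case i and case ii end up doing the same work internally.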
[09:05:56 CEST] <fling> Which filter is for dark frame subtraction?
[10:36:40 CEST] <Zexaron> What do you guys think about that https://twitter.com/ID_AA_Carmack/status/1033468473685495811
[10:43:59 CEST] <JEEB> Zexaron: I replied to him and got a like
[10:44:11 CEST] <Zexaron> hehe, right now?
[10:44:14 CEST] <JEEB> the swscale based scale filter doesn't do it right, but if you use zscale that utilizes the zimg library
[10:44:19 CEST] <JEEB> no, when he posted that
[10:44:24 CEST] <Zexaron> oh
[10:45:30 CEST] <JEEB> lorf, can't link my reply since I'm not logged in here. but basically I pointed towards zimg/zscale
[10:45:42 CEST] <JEEB> since swscale was designed after all ages ago when CPU cycles were much more important
[10:45:52 CEST] <JEEB> and nowadays many people just don't want to touch swscale :P
[10:46:04 CEST] <JEEB> which is why we have the colormatrix filter in libavfilter for example
[10:46:11 CEST] <JEEB> instead of having that in swscale :D
[10:48:26 CEST] <Zexaron> Oh no worries I found it already
[10:49:49 CEST] <Zexaron> Well many months ago (2 years ago maybe) JCarmack said if we had all the monitors color calibrated and 16 bit color with 18 bit LUT or what he said, we wouldn't have all these gamma issues
[10:51:15 CEST] <Zexaron> certainly when WCG and HDR stuff will become mainline, things will have to start improving for this, this whole pipeline problem i think is something VESA could even do if it gets enough pressure from just the folks like JCarmack
[10:52:03 CEST] <Zexaron> Why can't they figure out some way to calibrate the monitors better in the factory, maybe new panel tech will allow that or even alleviate the need for it
[10:53:23 CEST] <Zexaron> With VR videos, when everything is a lot more noticeable even by the general public, hopefully everyone will be more serious about these inaccuracies in general
[11:56:45 CEST] <th3_v0ice> How can I force the format of the input file while opening it with avformat_open_input()?
[14:16:33 CEST] <DHE> th3_v0ice: umm... specify the AVInputFormat of your choosing to it?
[14:35:48 CEST] <psyb0t> hi guys! i've got an issue here.. i'm watching about 16 rtsp streams on 16 different ffmpeg processes. each process does segmented output, writing a file every minute plus outputting jpg frames of different sizes. the processes get stuck at [mp4 @ 0x32f3000] Starting second pass: moving the moov atom to the beginning of the file. yes i'm using faststart. anyone encountered this?
[14:39:55 CEST] <furq> faststart rewrites the entire file
[14:39:57 CEST] <DHE> well that will involve some disk IO as the .mp4 file written thus far is rebuilt
[14:40:06 CEST] <furq> so if you're writing 16 files at once to spinning disks then yeah that's going to stall
[14:40:50 CEST] <DHE> I've been wanting to make a tempfile option where the original version could be saved elsewhere, like a ramdisk or maybe just buffered in RAM (at the user's risk)
[14:42:30 CEST] <pagios> hello all, i am using Nginx for livestreaming the cost of b/w per user is around 3mbits/sec. I have 4 NICS per machine with 10Gigabit of throughput per network interface, the server has 128GB of RAM, any idea how many users i can accomodate in this setup? streaming is on local network so b/w throughput is not an issue
[14:43:57 CEST] <DHE> well the math says around 12,000 if you can use all 4 NICs, LACP maybe?
[14:44:17 CEST] <DHE> nginx does have multi-CPU options but you must enable it
[14:44:33 CEST] <pagios> DHE: encoding is not done on this machine it is only serving the stream
[14:44:41 CEST] <pagios> the videos are in full hd
[14:44:43 CEST] <furq> if this is hls then don't forget to account for the ~10% mpegts overhead
[14:44:47 CEST] <furq> o
[14:45:09 CEST] <pagios> furq, if it is rtmp no disk is used?
[14:45:10 CEST] <DHE> yeah, but 10gig is still going to hit the CPU hard. I would not expect a single CPU core to handle 40 gigabit and even 10 is pushing it
[14:45:43 CEST] <pagios> DHE, how would you design it
[14:45:46 CEST] <furq> pagios: i mean bandwidth overhead
[14:45:57 CEST] <furq> but rtmp uses flv which has a similar amount of overhead iirc
[14:45:59 CEST] <furq> maybe a bit less
[14:46:12 CEST] <DHE> mpegts overhead is a bit crazy, yes
[14:47:07 CEST] <furq> there was some bug with nginx-rtmp deadlocking itself with worker_processes > 1
[14:47:11 CEST] <furq> hopefully they fixed that already
[14:47:27 CEST] <pagios> but rtmp does not use disk
[14:47:31 CEST] <pagios> hls is writing to disk so you have IO
[14:47:44 CEST] <pagios> right?
[14:47:55 CEST] <furq> i take it this is a lot of different streams
[14:48:19 CEST] <pagios> ?
[14:48:46 CEST] <pagios> i didnt get you
[14:48:57 CEST] <furq> if you're just serving one stream to multiple clients then disk io isn't really going to be noticeable
[14:49:22 CEST] <pagios> furq, yeah they are all reading from the same stream
[14:49:26 CEST] <pagios> so its readonly disk io
[14:49:35 CEST] <pagios> which is cheap right?
[14:49:47 CEST] <furq> yeah plus i'd expect it mostly to be cached anyway
[14:49:53 CEST] <furq> so rtmp or hls probably doesn't matter
[14:49:57 CEST] <pagios> hls can be cached?
[14:49:59 CEST] <furq> the main issue you'd see with hls is latency
[14:50:00 CEST] <pagios> ok
[14:50:13 CEST] <furq> i mean it'll all be in one of the many layers of io caches
[14:50:13 CEST] <pagios> so rtmp and hls are cached by the operating system internals
[14:50:18 CEST] <pagios> ok
[14:50:18 CEST] <furq> at least the disk cache if not even faster
[14:50:27 CEST] <pagios> better use SSD though
[14:50:30 CEST] <furq> given that most people will be reading the same chunk at any given time
[14:50:39 CEST] <DHE> or if there's not a lot of streams involved, maybe a ramdisk?
[14:51:06 CEST] <DHE> at 128 GB of RAM you can probably afford to leave 2 or 4 gigabytes to a cache for a small number of streams
[14:51:06 CEST] <pagios> where is the involvement of the CPU?
[14:51:08 CEST] <furq> ramdisk is probably overkill but you've got plenty of memory for it
[14:51:18 CEST] <furq> and it guarantees you're not going to run out of io
[14:51:32 CEST] <pagios> an hd stream is around 2mbits / sec
[14:52:15 CEST] <pagios> hmm ok so the main stopper here is the network card?
[14:52:18 CEST] <DHE> pagios: the handling of 10 gigabits of networking and protocol parsing is where the CPU goes.
[14:52:41 CEST] <pagios> whats a good cpu that would serve well that?
[14:52:51 CEST] <DHE> I can't actually comment on the efficiency of nginx-rtmp but when you configure it make sure to set the number of workers to the number of cores/threads
[14:53:02 CEST] <DHE> because the default is 1 if unset
[14:53:21 CEST] <furq> apparently the workaround for the deadlock i mentioned is `rtmp_auto_push on`
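Put together, the nginx settings DHE and furq mention would look roughly like this in nginx.conf. This is a sketch assuming the nginx-rtmp module; the listen port and application name are illustrative:

```nginx
worker_processes auto;   # one worker per core/thread, not the default of 1

rtmp_auto_push on;       # relay streams between workers; workaround for the
                         # multi-worker deadlock mentioned above

rtmp {
    server {
        listen 1935;
        application live {
            live on;
        }
    }
}
```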
[14:53:40 CEST] <pagios> DHE, you think its ok to design the system so that a good system handles 10,000 streams concurrently?
[14:53:57 CEST] <pagios> will a good system handle the 10,000 hd stream?
[14:54:03 CEST] <pagios> 10,000 is a big number
[14:54:25 CEST] <furq> 2500 per core doesn't seem that excessive
[14:54:29 CEST] <furq> especially with dedicated nics
[14:55:16 CEST] <furq> i'm just speculating though honestly
[14:55:37 CEST] <DHE> I'm only using nginx for HTTP-based content delivery. but I estimated that my dual-socket 8-core HT CPUs could hit 100 gigabit before the system maxes out (that includes non-nginx loads as well)
[14:55:41 CEST] <furq> i would expect you to run out of bandwidth before anything else
[14:56:18 CEST] <pagios> DHE, furq whats a good CPU to rely on?
[14:56:20 CEST] <pagios> i7?
[14:56:48 CEST] <DHE> not going xeon I see...
[14:56:55 CEST] <pagios> i7 better no?
[14:57:00 CEST] <pagios> i need the best
[14:57:50 CEST] <DHE> xeon is a huge line of CPUs from Intel and largely just means "Server-grade". you can get 35watt quad cores at 3.5 GHz all the way up to 22 core 145watt Xeons. and that's just looking at previous-gen stuff
[14:58:05 CEST] <DHE> and yes I do have some of the 145 watt chips
[14:58:56 CEST] <furq> xeons and i7s are usually basically the same parts iirc
[14:59:06 CEST] <furq> i7s have onboard gfx and xeons have ecc support and sometimes extra L3
[14:59:07 CEST] <DHE> unfortunately I can't quantify the nginx requirements. I suggest trying it. Run 100 sessions on a single worker system and see how much CPU nginx needs.
[14:59:09 CEST] <pagios> furq, i7 is way cheaper i guess?
[14:59:12 CEST] <furq> that's often the only difference afaik
[14:59:27 CEST] <furq> and yeah you'll pay a premium for a xeon but not that much
[14:59:36 CEST] <furq> if you're not running ECC then i'd say it's not worth it for this workload
[14:59:40 CEST] <pagios> will GPUs help?
[14:59:45 CEST] <furq> nope
[14:59:45 CEST] <DHE> not for nginx
[14:59:48 CEST] <pagios> ok
[14:59:54 CEST] <DHE> GPUs might help with encoding, but I'm assuming that's covered
[15:00:03 CEST] <furq> it would potentially help for reencoding lower bandwidth streams
[15:00:10 CEST] <furq> but that's about it
[15:00:18 CEST] <furq> (and you'd be better off doing that on cpu anyway if possible)
[15:00:53 CEST] <pagios> ok
[15:01:43 CEST] <DHE> I'm going to guess that a current corei7 can handle 10-15 gigabit minimum, and hope to be pleasantly surprised it could handle 40gig. but you're going to need to test it.
[15:02:04 CEST] <pagios> you mean 1 core i7 can handle 10gbits
[15:02:14 CEST] <pagios> so 4 core i7 handle 40gbits
[15:02:30 CEST] <furq> i mean i know people with i7s who have a 10G home network and they've never mentioned anything about it hitting their cpu noticeably
[15:02:37 CEST] <furq> granted they're not serving 2500 streams on it
[15:03:03 CEST] <pagios> so say 16 cores i7
[15:03:07 CEST] <furq> but yeah it's not a matter of throughput, it's the connection overhead
[15:03:13 CEST] <furq> so it's impossible to really give a useful estimate
[15:03:14 CEST] <DHE> pagios: I mean the whole CPU... I'm trying to extrapolate from my one known system's limits and assuming RTMP and HTTP overhead are identical
[15:03:45 CEST] <DHE> if my server's theoretical maximum is 100 gig, and a corei7 has 4 cores with hyperthreading but a higher clock, what can I expect for its limit?
[15:03:54 CEST] <furq> like DHE said, the best thing is to benchmark the setup you'll be using as best you can
[16:55:18 CEST] <barhom> I have some files that have an audio stream that is empty, 0 channels. Is it possible to force "-c copy" to include this empty audio stream as well?
[16:57:00 CEST] <kepstin> barhom: i had no idea that was possible. what does ffmpeg show when using that file as an input?
[16:57:15 CEST] <barhom> [mpegts @ 0x5562ee8b7600] Could not find codec parameters for stream 1 (Audio: mp3 ([4][0][0][0] / 0x0004), 0 channels): unspecified frame size
[16:57:15 CEST] <barhom> Consider increasing the value for the 'analyzeduration' and 'probesize' options
[16:58:01 CEST] <barhom> The input source is from satellite, VLC shows the audio stream normally, but of course plays no sound since there isnt any
[16:58:05 CEST] <kepstin> barhom: i'd suspect that the stream was misdetected rather than actually has 0 channels
[16:58:34 CEST] <barhom> kepstin: Well, I think its misdetected BECAUSE it has 0 channels ;)
[16:58:54 CEST] <BtbN> it's probably not an mp3 stream
[16:59:02 CEST] <BtbN> the mp3 auto-detection is just very trigger-happy
[16:59:14 CEST] <kepstin> mp3 doesn't have any way to specify 0 channels
[16:59:21 CEST] <kepstin> it's misdetected, yeah
[16:59:29 CEST] <BtbN> Can you upload a sample?
[16:59:37 CEST] <barhom> BtbN: sure, one sec
[17:01:36 CEST] <barhom> https://drive.google.com/file/d/1Nb7raHt2Rc7nqzZMV6OjWDOSduwIkTad/view?usp=sharing
[17:01:53 CEST] <barhom> That input is from a live source from satellite
[17:01:58 CEST] <barhom> i.e. an actual broadcaster
[17:02:08 CEST] <barhom> the input file has two streams (video, audio). The audio is empty though.
[17:03:07 CEST] <barhom> I need my output to be the same amount of streams as my input (for various reasons), this is why I would like to be able to "-c copy" this input and keep whatever the broadcasters audio stream was
[17:03:19 CEST] <barhom> but Im not sure that is possible, you guys tell me
[17:06:15 CEST] <BtbN> The file looks corrupted to me, and somehow the auto detection thinks it's mp3
[17:06:50 CEST] <kepstin> it's probably not that the audio is empty, but rather that nothing can play it
[17:07:06 CEST] <barhom> BtbN: You are not wrong, it is probably corrupted. But this is how broadcasters send audio sometimes
[17:07:14 CEST] <BtbN> I don't even think it's audio
[17:07:26 CEST] <BtbN> it's probably some data stream, that somehow gets detected as mp3
[17:08:26 CEST] <barhom> dvblastctl -r /tmp/dvblast-84.sock get_pmt 4304
[17:08:26 CEST] <barhom> new PMT program=4304 version=28 pcrpid=4308
[17:08:27 CEST] <barhom>   * ES pid=4308 streamtype=0x02 streamtype_txt="13818-2 video (MPEG-2)"
[17:08:28 CEST] <barhom>   * ES pid=4309 streamtype=0x04 streamtype_txt="13818-3 audio (MPEG-2)"
[17:08:29 CEST] <barhom> end PMT
[17:08:46 CEST] <barhom> The PMT specifically shows it as streamtype 0x04, MPEG-2 audio
[17:10:17 CEST] <BtbN> Can you open a bug on trac about it? If not, I'll do it later if I remember, but can't right now.
[17:10:38 CEST] <BtbN> Just attach what ffprobe/ffmpeg -i has to say about the file, and the file itself, and that it's misdetected
[17:11:14 CEST] <barhom> BtbN: Sure, I'll do that.
[17:11:55 CEST] <trashPanda_> Can anyone point me in the direction of code equivalents to the -re and -pkt_size cli arguments?
[17:12:48 CEST] <BtbN> re is in ffmpeg.c somewhere. All it does is return EAGAIN from some function while the pts has advanced less than the realtime
[17:15:37 CEST] <trashPanda_> Thank you, any idea for pkt_size?
[17:16:04 CEST] <kepstin> trashPanda_: that's probably an avoption for a particular muxer?
[17:16:42 CEST] <trashPanda_> I was using it to limit the packet size via DHE's advice yesterday.  The stream would not load in VLC without it
[17:17:00 CEST] <trashPanda_> mpegts stream
[17:18:36 CEST] <trashPanda_> Would I send that in an AVDictionary into the avformat_write_header call?
[17:18:41 CEST] <kepstin> oh, on the udp protocol then. Huh, i haven't looked at how to set options on protocols :)
[17:20:16 CEST] <kepstin> Probably the easiest way is to just put it into the output filename, something like udp://address:port?pkg_size=1234
[17:20:26 CEST] <kepstin> except spell the option correctly :)
[17:21:04 CEST] <trashPanda_> Interesting, Ok I'll try that thank you
[17:31:28 CEST] <DHE> mpegts over UDP requires a pkt_size that is a multiple of 188. 1316 is usually selected for MTU reasons
[17:31:51 CEST] <DHE> feel free to use 188 if you want to piss off the network admin. :)
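The 1316 figure DHE cites follows from the MTU arithmetic. A quick sketch (standard Ethernet MTU and IPv4/UDP header sizes assumed):

```python
# Why pkt_size=1316: an MPEG-TS packet is 188 bytes, and 1316 is the largest
# multiple of 188 that fits a 1500-byte Ethernet MTU after IP and UDP headers.
TS_PACKET = 188
MTU = 1500
IP_UDP_HEADERS = 20 + 8            # IPv4 header + UDP header

payload_budget = MTU - IP_UDP_HEADERS                  # 1472 bytes available
pkt_size = (payload_budget // TS_PACKET) * TS_PACKET   # 7 TS packets = 1316
```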
[17:32:20 CEST] <BtbN> I wonder if something notably larger than the MTU wouldn't be better
[17:32:31 CEST] <BtbN> the network fragmentation should be rather fast
[17:33:12 CEST] <DHE> I guess the risk is that a dropped packet causes additional losses since the whole large packet must be dropped...
[17:33:23 CEST] <DHE> also network fragmentation is not a reliable thing when routers are involved unfortunately
[17:35:49 CEST] <DHE> actually never mind. one of the reasons for not fragmenting is bandwidth management. large packets result in large bursts and we know a big part of UDP mpegts is output rate management...
[17:36:11 CEST] <DHE> stupid switches and their pathetically small packet buffers
[17:49:09 CEST] <trashPanda_> Thank you Kepstin, that worked perfectly
[18:17:59 CEST] <barhom> DHE: Oh man Ive had problems with microbursts in my network causing UDP packets with 1316 getting dropped
[18:19:16 CEST] <barhom> Would start getting TS discontinuities on my input UDP streams because of some 10gbps links would go above 10gbps enough time to fill the buffers of the switches
[18:19:31 CEST] <barhom> even though the graphs showed 5-6gbps throughput on the 10g link
[18:54:01 CEST] <DHE> barhom: when the 24port switches had 2.5 megabytes of packet buffer to be shared across all ports and the group of 100 channels always sent their packets at the same time (100 rapid sequential packets) yeah it was a problem.
[19:26:07 CEST] <poutine> We receive a lot of caption files that start ~ 1 hour in, and sometimes with mp4/mov/mpg files that may or may not have a starting timestamp. They don't play with VLC (since the captions don't start until 1 hour in). Is it normal to require content partners to align those at 0 before you receive them if you're rebroadcasting? I see instances where it matches up with the video start time code, but many where it does not, and I would think requiring
[19:26:07 CEST] <poutine> they start at 0 would fix this, but not sure if it's an industry standard practice that those should just be handled right and re-shifted
[19:34:39 CEST] <ChocolateArmpits> poutine, maybe it's their software placing the starting timecodes for captions starting with hour one? I've read that this way the timecode denotes "1st hour of the programme".
[19:36:39 CEST] <ChocolateArmpits> lemme open up a book on broadcasting stuff, maybe it has something on that
[19:38:50 CEST] <ChocolateArmpits> oh and starting with "hour one" also allows having prerolls, like a test signal, as negative timecode is really uncommon
[19:43:00 CEST] <ChocolateArmpits> that's probably a more important reason to have the time start at hour one, analog media would need some levels adjustment before playback so having a test signal there would help
[19:56:40 CEST] <poutine> ChocolateArmpits, I understand the original rationale for starting 1 hour in, but if we get no pre-slates as per requirement, and it's something that wouldn't work in off-the-shelf media products like VLC, we could request that they fix it, as it's just been my experience that just subtracting the video start timecode often results in descyned captions, but it's entirely possible I'm doing something wrong with shifting
[19:57:22 CEST] <poutine> just wondered if any of this sounded "normal" as it's not a large company and I don't want to make us look bad by asking for changed files if I'm doing something wrong/obvious
[19:57:59 CEST] <ChocolateArmpits> well the shift doesn't sound normal
[19:58:01 CEST] <poutine> also realize I've veered from ffmpeg a bit with this question and am more asking about industry standards with rebroadcast
[20:58:00 CEST] <barhom> DHE: My 5000$ juniper ex4550 switch has like 4mb of buffer, really sucks
[20:58:12 CEST] <barhom> in the future time to buy deep buffer switches to connect to bursty sources
[21:19:34 CEST] <trashPanda_> Can someone explain the AVCodecContext::bit_rate_tolerance description, "the reference can be CBR or VBR"?  Are those enums?
[21:20:04 CEST] <trashPanda_> I'm using an encoder AVCodecContext
[21:31:32 CEST] <trashPanda_> or I guess another question is, how to set a constant encoder bitrate.  Setting the bit_rate field changes the average bitrate, not the max
[21:33:44 CEST] <kepstin> do you actually need CBR tho? That's really for only special cases like tv broadcast where you need to fill exactly x bits per second
[21:34:42 CEST] <trashPanda_> I need to limit the max because the networks I send over sometimes can't support higher bandwidths
[21:35:09 CEST] <trashPanda_> And I was hoping for a better way than lowering the average until the maximum fluctuations "fit" under my ceiling
[21:35:45 CEST] <furq> trashPanda_: rc_max_rate
[21:35:48 CEST] <kepstin> ok, so you don't want cbr. in this case you should be using a constrained vbr mode, configured with the rc_buffer_size and rc_max_rate fields
[21:37:57 CEST] <trashPanda_> max_rate is my ceiling, what does buffer size reflect?
[21:39:34 CEST] <kepstin> the amount of data that the player is expected to buffer
[21:40:26 CEST] <trashPanda_> in practice does that translate to the maximum or average?
[21:41:20 CEST] <kepstin> when bufsize and maxrate are set, a system with an internet connection capable of transmitting maxrate bit/s, and having at least bufsize memory locally, will be able to play the video continuously.
[21:41:35 CEST] <kepstin> also remember that bitrate = size / time
[21:41:52 CEST] <trashPanda_> thank you
[21:41:53 CEST] <kepstin> you can use that with the maxrate and bufsize to figure out how much delay (time) the buffer corresponds to
[21:42:55 CEST] <kepstin> (don't forget to compensate for additional tracks and container overhead when determining maxrate, of course)
[21:47:27 CEST] <trashPanda_> when you say delay the buffer corresponds to, you mean how much video will fit inside it (like 2 frames etc.)?
[21:49:03 CEST] <kepstin> well, measured in time not frames (although with constant fps video you can convert to frames easily enough)
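kepstin's point above reduces to simple arithmetic: with maxrate and bufsize set, the buffer corresponds to bufsize / maxrate seconds of video. A worked example with made-up numbers:

```python
# Constrained-VBR buffer delay, per the bitrate = size / time relation above.
maxrate_bps = 4_000_000       # rc_max_rate: bitrate ceiling, bits per second
bufsize_bits = 8_000_000      # rc_buffer_size: player/decoder buffer, bits

buffer_delay_s = bufsize_bits / maxrate_bps   # seconds the player must buffer
```

So a client receiving at exactly maxrate plays smoothly, but with (here) two seconds of buffering delay; frames within that window may individually exceed maxrate as long as the average over the buffer does not.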
[21:49:44 CEST] <poutine> ChocolateArmpits, Curious what book you reference on broadcast, mind sharing?
[21:49:46 CEST] <kepstin> the encoder will usually do something like encode a keyframe with lots of bits (such that the instantaneous bitrate would exceed the maxrate) but then follow it with a bunch of little predicted frames
[21:49:56 CEST] <poutine> not the book itself, just the name of it
[21:51:50 CEST] <trashPanda_> thanks for that.  If I set the max bitrate, should I still expect the output bitrate to exceed that number?
[21:52:19 CEST] <ChocolateArmpits> poutine, have a few on the drive but the only one having any mention of the timecode was "Broadcast Engineer's Reference Book". Instead of 01:00:00:00 it describes 10:00:00:00 which is more common in UK and Europe
[21:52:49 CEST] <ChocolateArmpits> it's a single paragraph mention, telling same thing that I wrote earlier
[21:53:08 CEST] <ChocolateArmpits> "The value of the timecode that is recorded at the start of the tape is arbitrary, but a common practice is to start at 09:57:00:00. With 2 minutes of colour bars and a minute of black the programme will start at 10:00:00:00. This is a convenient round number that allows the programme duration to be determined very easily."
[21:53:48 CEST] <kepstin> trashPanda_: on average over a time period longer than the length of the buffer, it shouldn't be exceeding the maxrate.
[21:54:33 CEST] <trashPanda_> ok got it, is there a special flag I have to set to allow the use of those fields?
[21:54:38 CEST] <kepstin> trashPanda_: and if you transmit the video at the maxrate (plus extra for muxing overhead, etc), it'll play smoothly but with a delay equivalent to the buffer size
[21:54:50 CEST] <kepstin> trashPanda_: if the fields are set (non-zero, iirc), they're used.
[22:00:19 CEST] <trashPanda_> kepstin, if I set the max bitrate and find my outgoing bitrate much higher, what could be a reason?  Do I need to set average bitrate lower than my max (rather than leaving unset)?
[22:00:45 CEST] <kepstin> same as max should be fine. Did you allow for container muxing overhead and other streams?
[22:00:51 CEST] <kepstin> also, what codec are you using?
[22:01:01 CEST] <trashPanda_> hevc_nvenc
[22:01:12 CEST] <kepstin> oh, it might just not be wired up on nvenc
[22:01:37 CEST] <kepstin> and x265 you need to use codec-specific options i think, that might not be wired up to the generic stuff
[22:01:42 CEST] <trashPanda_> I can lower the average bitrate just fine to achieve the same "result"
[22:02:01 CEST] <trashPanda_> Oh.. where can I find a list of those specific options?
[22:02:36 CEST] <kepstin> easiest way to get a summary of codec-specific options is to run "ffmpeg -h encoder=encodername"
[22:03:03 CEST] <trashPanda_> do i then pass them into the encoder creation with an AVDictionary?
[22:03:18 CEST] <kepstin> with nvenc, you might want to try playing around with the "rc" option values
[22:03:49 CEST] <kepstin> there's a few ways to initialize codec options, i think that's one of them yeah
[22:05:33 CEST] <trashPanda_> what's your preferred version?
[22:06:03 CEST] <kepstin> i haven't built any apps using the encoder api directly, so :/
[22:06:20 CEST] <trashPanda_> ok, thanks for your help
[22:11:10 CEST] <kepstin> if you're just setting single options, it might make sense to use av_opt_set() instead, but it has to be called at the correct time (iirc, after initializing the context, but before opening it)
[22:12:05 CEST] <kepstin> but better double-check that, i haven't looked at this api recently :)
[22:13:21 CEST] <trashPanda_> Thats what Im currently using, I'm unsure if it can be used to set more than one option lol
[22:13:25 CEST] <kepstin> yeah, after allocating, before opening.
[22:13:35 CEST] <kepstin> you can call it multiple times for multiple options
[22:14:43 CEST] <trashPanda_> ok, and you send in encodercontext->priv_data?
[22:15:06 CEST] <trashPanda_> I saw an example do that, so double checking
[22:16:20 CEST] <kepstin> oh, I got that wrong, the priv data has to be allocated before the options can be set, right.
[22:16:28 CEST] <kepstin> and normally that's not done until open...
[22:16:42 CEST] <kepstin> maybe just use the avdictionary, that's set in the right place :)
[22:17:06 CEST] <trashPanda_> where are you reading this?
[22:17:09 CEST] <kepstin> like i said, I haven't looked at this api recently, tho
[22:17:13 CEST] <kepstin> so :/
[22:18:00 CEST] <trashPanda_> I would be more than happy to read myself and stop bugging you guys so much lol
[22:18:31 CEST] <kepstin> honestly? just reading the source code :/
[22:18:58 CEST] <trashPanda_> you can tell what the bitrate fields do by reading the source code? lol
[22:19:06 CEST] <trashPanda_> I need to work on that skill
[22:19:27 CEST] <kepstin> well, not the bitrate fields, i was trying to figure out when options could be applied to an avcodeccontext :)
[22:19:55 CEST] <kepstin> (I grepped the source to remember what the internal names of the maxrate and bufsize options were tho)
[22:22:05 CEST] <kepstin> alright, if you call avcodec_alloc_context3() then avcodec_get_context_defaults3(), then after that you can set arbitrary avoptions using av_opt_set(), and then call avcodec_open2()
[22:23:05 CEST] <kepstin> and you should never even think about touching priv_data unless you're writing code inside the ffmpeg codec itself :)
[22:24:31 CEST] <trashPanda_> you mean outside of sending priv_data into the av_opt_set(), don't touch it
[22:26:46 CEST] <kepstin> it should work to pass the avcodeccontext to av_opt_set()
[22:26:58 CEST] <trashPanda_> ok, cool
[22:27:54 CEST] <kepstin> hmm. the example is using priv_data tho, you're right
[22:28:22 CEST] <trashPanda_> it "seems" to work fine with priv_data
[22:28:27 CEST] <trashPanda_> I have no way of verifying though lol
[22:30:08 CEST] <kepstin> yeah, the priv_data will work for setting codec-specific options
[22:30:49 CEST] <kepstin> if you use set_opt on the context, it lets you use avoptions to set fields in the context, fwiw. I just forget if the child classes are chained properly so that also sets codec private options. I thought it did...
[22:37:52 CEST] <kepstin> fwiw, ffmpeg.c (the cli utility) just passes the options in a dictionary to avcodec_open2.
[22:40:18 CEST] <trashPanda_> oh really? thats incredibly useful
[22:41:12 CEST] <trashPanda_> thank you
[23:21:23 CEST] <juny> hi
[23:22:38 CEST] <juny> i wonder what the -acodec flag is. It is not mentioned in the concat doc: https://trac.ffmpeg.org/wiki/Concatenate but it is shown in examples on StackOverflow questions
[23:23:45 CEST] <juny> ffmpeg -i "concat:file1.mp3|file2.mp3" -acodec copy -metadata "title=Some Song" test.mp3 -map_metadata 0:-1
[23:23:58 CEST] <juny> copy the codec from the input audios?
[23:26:03 CEST] <Something1> Yes it copies the audio over from the source material :) I always use -c:a copy though
[23:26:05 CEST] <Naan> is there a way to turn off this warning https://pastebin.com/Cscj7QGV
[23:26:20 CEST] <Naan> I don't care that it can't find codec parameters for the audio stream :@
[23:26:41 CEST] <juny> -c:a is enough? on the doc, it has -codec:a ?
[23:26:42 CEST] <Naan> using a python wrapper around this https://github.com/NVIDIA/nvvl to load videos into arrays
[23:26:50 CEST] <juny> https://ffmpeg.org/ffmpeg.html
[23:26:52 CEST] <Naan> I'm not using the command line
[23:27:59 CEST] <Something1> @Naan, ffmpeg -loglevel panic?
[23:29:19 CEST] <Naan> damn I don't even have ffmpeg installed
[23:30:02 CEST] <Naan> https://github.com/NVIDIA/nvvl the wrapper i'm using is using the binary from there which only needs "FFmpeg's libavformat, libavcodec, libavfilter, and libavutil"
[23:30:33 CEST] <Naan> i'm guessing if i install ffmpeg it will let me interface with those
[23:30:37 CEST] <Naan> i'll give it a shot
[23:30:51 CEST] <Something1> I have no idea what you´re doing or talking about, Naan :)
[23:30:59 CEST] <Naan> :(
[23:31:29 CEST] <Something1> How do you get ffmpeg results without having ffmpeg in any way?
[23:32:21 CEST] <Naan> I mean I didn't realise ffmpeg was this behemoth collection of libraries and programs
[23:32:45 CEST] <Naan> so I have the ones I need installed I just didn't have the application ffmpeg installed
[23:34:49 CEST] <Something1> Ah, I see. Then see if you can define it inside of that nvvl, or pipe that output out to /dev/null
[23:42:22 CEST] <Naan> thanks; for some reason setting loglevel to none doesn't make a difference and it doesn't let you define it as panic
[23:42:29 CEST] <Naan> so i'll try the piping trick
[23:48:19 CEST] <Naan> yeea the piping worked be gone
[23:50:46 CEST] <jdel> does ffmpeg support multi-planar v4l2 devices?
[23:53:16 CEST] <atomnuker> yes, in theory as long as you can get a dmabuf frame everything should work, including opencl
[23:54:47 CEST] <jdel> do I need to make code changes?
[23:55:02 CEST] <jdel> the version i'm looking at only queries for video capture capabilities
[23:55:14 CEST] <jdel> so a multi-planar-only device doesn't register as supported
[23:58:19 CEST] <juny> how to remove ID3 when I concatenate audios
[00:00:00 CEST] --- Fri Sep  7 2018


More information about the Ffmpeg-devel-irc mailing list