[Ffmpeg-devel-irc] ffmpeg.log.20170705
burek
burek021 at gmail.com
Thu Jul 6 03:05:01 EEST 2017
[00:01:36 CEST] <SpeakerToMeat> uuhhhh.... https://pastebin.com/aUSPe62q
[00:20:10 CEST] <SpeakerToMeat> Ok I give up for today, the debian build system is playing tricks on me with the "flavor" hack on the rules file.... I'll keep on trying on Thu if I'm alive
[00:21:09 CEST] <furq> i wouldn't bother trying to build a package
[00:21:15 CEST] <furq> just patch git master and install to /usr/local
[00:33:59 CEST] <Yawnny> Hey all.. I'm trying to convert a VFR video to a CFR one.. I'm trying to frankenstein a script together but having mixed results.
[00:34:07 CEST] <Yawnny> My current script: ffmpeg -i (input file).mp4 -c:v dnxhd -b:v 290M -c:a pcm_s16le -r 60 (output).mxf
[00:34:22 CEST] <Yawnny> I was told dnxhd would be the way to go.. and maybe .mov at the end
[00:34:43 CEST] <Yawnny> With my filenames the script looks like: ffmpeg.exe -i ("C:\Users\Yawnny\Desktop\test\encodeThis_withFFMPEG.mp4").mp4 -c:v dnxhd -b:v 290M -c:a pcm_s16le -r 29.97 ("C:\Edit\converted.mov").mov
[00:35:15 CEST] <Yawnny> But then CMD opens and closes immediately..I have a feeling I'm fudging it up with the filenames and whether I need quotes or not, etc...
[00:36:30 CEST] <Yawnny> After I tried handbrake on a long 4 hour video with mixed success the advice given to me was: "I'd suggest looking at a fuller FFMPEG encode over handbrake - and encode to dnxhr. Audio at 48k uncompressed please."
[00:36:53 CEST] <Yawnny> So at some point I think I have to insert -af aresample=48000:async=1 from what I've read
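A hedged sketch of the kind of command being discussed, assuming a 1920x1080 source (DNxHD only accepts specific resolution/frame-rate/bitrate combinations, and 290M is one of the 1080p60 rates); the paths are the hypothetical ones from the messages above, without the stray parentheses and doubled extensions, and the aresample option is only needed if the audio actually drifts:

    ffmpeg -i "C:\Users\Yawnny\Desktop\test\encodeThis_withFFMPEG.mp4" -r 60 -c:v dnxhd -b:v 290M -c:a pcm_s16le -af "aresample=48000:async=1" "C:\Edit\converted.mov"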
[00:46:33 CEST] <furq> who told you this
[01:08:41 CEST] <Yawnny> If someone responded would you mind pasting the reply again? Didn't realize my computer crashed.. had to reboot
[01:09:03 CEST] <Threads> only thing said was [23:46:37] <furq> who told you this
[01:11:06 CEST] <Yawnny> Oh okay. Well in response to that: someone on the adobe premiere subreddit
[01:11:57 CEST] <Yawnny> https://www.reddit.com/r/premiere/comments/6koba3/helpaudio_out_of_sync_over_time_after_handbraking/?st=J4Q6V8KR&sh=523ab9c3
[01:12:07 CEST] <Yawnny> That's where I initially brought up my issue
[02:01:04 CEST] <iive> Yawnny: you are not telling us what error you get
[02:01:40 CEST] <iive> also resampling lowers quality, do it only if necessary.
[04:37:20 CEST] <nicolas17> to get the input format for demuxing, I use av_find_input_format("image2")
[04:37:44 CEST] <nicolas17> but how do I get an AVOutputFormat by name to use when muxing? I can't see an av_find_output_format
[04:41:48 CEST] <nicolas17> looks like that's av_guess_format
[04:41:51 CEST] <nicolas17> o_o
[04:41:59 CEST] <nicolas17> weird name if I want to get an exact match but okay
[04:49:38 CEST] <nicolas17> why does conversion from yuvj422p give this warning from swscaler? "deprecated pixel format used, make sure you did set range correctly"
[10:08:02 CEST] <a__pi> I'm trying to convert an mp4 to another one with hardware acceleration with profile baseline 3.0. I'm using ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i file -vf 'format=nv12,hwupload,scale_vaapi=w=640:h=360' -map 0:0 -map 0:1 -threads 8 -y -acodec copy -b:v 12500k -vcodec h264_vaapi /tmp/file.mp4 that command. how can i specify the profile in it ? i tried profile 66 as per https://www.reddit.com/r/linux_gaming/comments/4q5orj/using_vaapis_hardware_accelerated_video_encoding/ but it's failing with Encoding profile not found (5).
[10:21:50 CEST] <Mavrik> why not -profile:v baseline ?
[10:23:49 CEST] <a__pi> Undefined constant or missing '(' in 'baseline' Unable to parse option value "baseline" Error setting option profile to value baseline.
[10:23:57 CEST] <a__pi> it says that with baseline
[10:30:39 CEST] <Mavrik> Hmm, new ffmpeg?
[10:32:36 CEST] <Mavrik> Ahh.
[10:32:44 CEST] <Mavrik> a__pi, if I read this correctly: https://gist.github.com/Brainiarc7/95c9338a737aa36d9bb2931bed379219
[10:32:51 CEST] <Mavrik> You'd have to use -hwaccel_output_format
[10:33:09 CEST] <Mavrik> And use `vainfo` to find out what the right name for it is, e.g. VAProfileH264ConstrainedBaseline
[10:34:29 CEST] <a__pi> Option hwaccel_output_format (select output format used with HW accelerated decoding) cannot be applied to output url
[10:37:18 CEST] <a__pi> moved the option before the first file but it's ignoring it; it's generating high profile
[11:25:01 CEST] <jkqxz> a__pi: I doubt you want baseline; nothing supports it. Try "-profile:v 578" for constrained baseline.
[11:25:34 CEST] <a__pi> it's for an old device :)
[11:25:54 CEST] <a__pi> but i'll try constrained baseline :)
[11:25:57 CEST] <a__pi> thx
[11:27:11 CEST] <jkqxz> You really don't want baseline - nothing supports it at all. (Some old streams do lie about it, though - they are actually constrained baseline with incorrect marking.)
[11:28:13 CEST] <a__pi> it's for a projector whose max resolution is 640x360....
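A hedged sketch combining a__pi's earlier command with jkqxz's suggestion; 578 is 66 + 512, i.e. the constrained-baseline profile constant, and the device path, mappings and bitrate are taken from the command a few lines up:

    ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i file \
           -vf 'format=nv12,hwupload,scale_vaapi=w=640:h=360' \
           -map 0:0 -map 0:1 -c:a copy -c:v h264_vaapi -profile:v 578 -b:v 12500k /tmp/file.mp4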
[11:30:20 CEST] <hojuruku> a__pi, vaapi has big problems with filters, due to some locking, i played with it and gave up
[11:30:37 CEST] <hojuruku> a__pi, it works fine on my old haswell, just make sure you test first with no filters.
[11:31:20 CEST] <a__pi> thx
[11:31:30 CEST] <a__pi> i'll do :)
[11:31:37 CEST] <hojuruku> a__pi, you are on the right track, I'm too overwhelmed with my kerberos/ldap setup geek stuff now to dive in and give you the exact syntax, but keep plodding at it. you'll get it.
[11:32:16 CEST] <hojuruku> by filters i mean stuff that modifies the final output - so you can't even select a time duration. I tried very hard to do that, and I saw bug reports etc.. let me see my browser history to give you the details.
[11:33:19 CEST] <hojuruku> https://trac.ffmpeg.org/ticket/5532 a__pi that's the error i was hitting, solution no filters.
[11:35:21 CEST] <hojuruku> a__pi, that's me tweeting about fiddling with it 2 months ago. Oh i'm banned from twitter ;) https://gab.ai/ozzieslovepedos/posts/7802589
[11:35:32 CEST] <jkqxz> Er, what? That bug is incorrect priority in encoders, and was fixed more than a year ago.
[11:36:09 CEST] <jkqxz> If you want software filters on hardware frames then use hwmap, or download/upload them.
[11:37:58 CEST] <hojuruku> a__pi, haswell doesn't support intel qsv, but if you have a newer processor this looks like the way to go - the newer intel binary interface to the gpu. https://trac.ffmpeg.org/wiki/HWAccelIntro
[11:38:50 CEST] <hojuruku> jkqxz, i assure you i was getting it when trying to use input filters with vaapi hardware encoding. Under normal use the commandline etc would work, but using vaapi there was some kind of issue.
[11:39:32 CEST] <hojuruku> i'm useless because I haven't played with it for 2 months. All I got is a massive binge in my web browsers history researching ffmpeg. The bug is real and it's in git. Tested on a haswell refresh gpu.
[11:39:42 CEST] <hojuruku> and gentoo with everything up to date and ffmpeg git.
[11:41:34 CEST] <hojuruku> https://lists.ffmpeg.org/pipermail/ffmpeg-user/2016-June/032484.html i was trying to use vaapi in the filter chain it has two ways to invoke it. Neither worked with filters.
[11:42:56 CEST] <hojuruku> vaapi has two modes, one mode for input and/or output only and one mode for full vaapi transcoding. https://lists.ffmpeg.org/pipermail/ffmpeg-user/2016-June/032484.html a__pi sorry I wish i could be more help to you. If you'd gotten to me sooner I'd still have my bash history ;)
[11:47:26 CEST] <jkqxz> VAAPI uses hardware surfaces (on the GPU); most filters want to act on software frames in CPU memory. Hence if you want to use any such filters then you need to deal with transferring between the two.
[11:56:08 CEST] <hojuruku> jkqxz, i realize that, but i think the problem was the software filters were called in the wrong order in the chain, like I said i'm no use, I haven't tried to RTFM up on this for 2 months. If you can get it working make a blog post or something because I spent a few days on it and gave up.
[11:57:02 CEST] <hojuruku> I wasn't trying to use the hardware services, there is an option to export from VAAPI back into ffmpeg. I was trying that before trying to use any filters of course.
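A minimal sketch of the transfer pattern jkqxz describes, assuming full VAAPI decode and encode with a hypothetical software filter (fps=30 here) in between; hwdownload pulls frames into system memory, format pins a pixel format the filter can handle, and hwupload pushes the result back to the GPU for the encoder:

    ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -vaapi_device /dev/dri/renderD128 -i in.mp4 \
           -vf 'hwdownload,format=nv12,fps=30,hwupload' -c:v h264_vaapi out.mp4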
[12:21:05 CEST] <kerio> yo is there a suggested ffmpeg binary build for raspberry
[12:37:10 CEST] <kerio> o no this ffmpeg doesn't have h264 omx D:
[13:24:07 CEST] <acresearch> people i have 2250 HD PNG images that I am trying to make into a video, this is the code i am using, does this result in HD quality video? i want it 1080p: ffmpeg -f image2 -i video%4d.png -r 30 -vcodec libx264 -pix_fmt yuv720p -acodec libvo_aacenc -ab 128k video.mp4
[13:24:42 CEST] <acresearch> sorry this is the command: ffmpeg -f image2 -i video%4d.png -r 30 -vcodec libx264 -pix_fmt yuv420p -acodec libvo_aacenc -ab 128k video.mp4
[13:55:56 CEST] <acresearch> people i have the following code to convert images into a video, my question is how can i make it work in a ios iphone? it doesn't seem to like to format or coding: ffmpeg -f image2 -i video%4d.png -r 30 -vcodec libx264 -pix_fmt yuv420p -acodec libvo_aacenc -ab 128k video.mp4
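A hedged cleanup of that command: for an image sequence the frame rate belongs before -i as -framerate, yuv420p is the broadly compatible pixel format, and the audio options can be dropped since PNGs carry no audio (file names kept from the original):

    ffmpeg -framerate 30 -i video%4d.png -c:v libx264 -pix_fmt yuv420p video.mp4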
[13:57:58 CEST] <squ> acresearch: https://trac.ffmpeg.org/wiki/Encode/H.264#Compatibility
[13:58:52 CEST] <acresearch> squ: so i should include this flag? -profile:v high -level 4.2
[13:59:03 CEST] <squ> I don't know
[14:09:06 CEST] <acresearch> squ: doesn't seem to work, is there a format that just works everywhere? this issue with different formats for different platforms is a real pain in the butt !!! it wastes a lot of time
[14:09:22 CEST] <squ> doesn't it say which one
[14:09:50 CEST] <acresearch> squ: what do you mean?
[14:11:24 CEST] <squ> doesn't table by that link say which format is compatible with All devices
[14:13:02 CEST] <acresearch> yes, i get a very nice exported video, but the ios still says it is incompatible and won't import it
[14:57:40 CEST] <sedri> Hello
[15:02:27 CEST] <sedri> I'm trying to use ffmpeg to split and merge several video files but I get an error message I don't know how to resolve: https://pastebin.com/raw/CGyvfX3i
[15:04:32 CEST] <DHE> "record or transcode stop time" is the description of the "-to" parameter. it isn't supported for an input file option
[15:05:16 CEST] <DHE> you'll have to use filter-based cutting, like [a]trim
[15:05:57 CEST] <kepstin> sedri: you can either use the -t input option or throw some trim filters in the filter chain (note that in both cases, you'll be providing duration of the clip, not end time)
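Two hedged variants of what DHE and kepstin describe, for a hypothetical clip that starts at 10 s and lasts 20 s: -ss/-t as input options, or a trim filter with a timestamp reset (audio would need a matching atrim/asetpts chain; -an simply drops it here):

    ffmpeg -ss 10 -t 20 -i input.mp4 -c copy cut.mp4
    ffmpeg -i input.mp4 -vf 'trim=start=10:duration=20,setpts=PTS-STARTPTS' -an cut.mp4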
[15:47:06 CEST] <sedri> Thank you kepstin.
[16:28:56 CEST] <wpppp> hi
[16:29:12 CEST] <spaam> Hello and welcome to #ffmpeg.
[16:29:14 CEST] <wpppp> GOV length in h264 - a higher value yields better compression - but it can affect quality?
[16:29:17 CEST] <wpppp> spaam: hi! :O
[16:30:17 CEST] <BtbN> do you mean gop?
[16:30:27 CEST] <wpppp> GOV length in ONVIF h264 settings
[16:30:33 CEST] <wpppp> maybe they have a typo in the caption?
[16:30:41 CEST] <BtbN> no idea what that is
[16:32:23 CEST] <wpppp> hm, thanks :D
[16:33:06 CEST] <wpppp> The GOP (Group of Pictures) refers to the type of setting in camera firmware by which it is possible to further reduce the bandwidth and storage consumption of video stream up to 10 times.
[16:33:09 CEST] <wpppp> (citation)
[16:33:22 CEST] <wpppp> http://www2.acti.com/support_old/Package/%7B6060C79F-2A5D-40A4-8837-16B835E3364.PDF
[16:33:26 CEST] <wpppp> for example ^
[16:35:06 CEST] <wpppp> BtbN: ^
[16:36:46 CEST] <alexpigment> Long GOP lengths affect seeking because decoding usually wants to start at the nearest I-frame. It can also affect hardware compatibility to some degree. In general though, if you don't need to seek in the video, long GOPs are fine.
[16:37:05 CEST] <alexpigment> (and they do decrease file size because you have fewer I-frames)
[16:38:22 CEST] <alexpigment> Personally, I *always* use 1-second GOPs, but I realize that this is not how most people work with H.264
[16:43:04 CEST] <wpppp> alexpigment: so GOP = GOV, right? :D
[16:43:09 CEST] <wpppp> I mean from the docs I linked to
[16:43:11 CEST] <wpppp> they mean the same?
[16:43:17 CEST] <kepstin> wpppp: the h264 format allows basically infinitely long GOP without any reduction in quality
[16:43:44 CEST] <wpppp> kepstin: cool, so just better compression? there is some point of diminishing returns?
[16:44:19 CEST] <alexpigment> wpppp: today is the first time i've ever heard of "GOV", so I'm not sure
[16:44:23 CEST] <kepstin> wpppp: I assume that there's diminishing returns eventually, yeah, but the main reason not to do it is the seeking issue.
[16:44:36 CEST] <wpppp> kepstin: so if someone wants to seek in the stream?
[16:44:42 CEST] <wpppp> in the already downloaded, buffered stream?
[16:45:09 CEST] <wpppp> if it is a livestream, streamed to youtube live - then the GOV/GOP thing doesn't really matter because youtube takes care of the seeking stuff?
[16:45:45 CEST] <kepstin> wpppp: for live streaming it's a bit of a different case, and depends on whether the remote side is just bouncing the stream or if they're re-encoding.
[16:46:01 CEST] <kepstin> wpppp: live streaming sites will usually give you a recommended gop size (keyframe interval)
[16:46:08 CEST] <wpppp> ah, good to know
[16:46:26 CEST] <kepstin> main issue there is people joining the stream late - they can't see any video until a new gop starts
[16:46:37 CEST] <wpppp> "YouTube's pipeline requires a Closed GOP for optimal transcoding."
[16:46:43 CEST] <wpppp> what does that mean?
[16:46:50 CEST] <wpppp> kepstin: hm, youtube can timeshift up to some hours
[16:47:14 CEST] <kepstin> wpppp: sure, but you'd still have the seeking problem then
[16:47:26 CEST] <kepstin> can't seek into the middle of a gop, can only decode from the start.
[16:47:58 CEST] <wpppp> kepstin: what gop/gov does youtube live recommend?
[16:48:09 CEST] <kepstin> wpppp: google it? :)
[16:48:27 CEST] <wpppp> I did
[16:48:28 CEST] <wpppp> not sure
[16:49:14 CEST] <kepstin> wpppp: the difference between closed gop and open gop is that with open gop, you can have bidirectionally predicted frames at the end of one gop which reference stuff in the next
[16:49:29 CEST] <kepstin> which can cause issues with segmented media, like used in most http live streaming
[16:49:57 CEST] <wpppp> kepstin: ah!
[16:50:04 CEST] <wpppp> kepstin: I googled as you said and I found indeed something: https://datarhei.github.io/restreamer/wiki/iframe.html
[16:50:47 CEST] <kepstin> wpppp: regarding gop size (keyframe interval), see https://support.google.com/youtube/answer/3006768?hl=en
[16:50:53 CEST] <kepstin> they want one every 2 seconds.
[16:51:26 CEST] <kepstin> they're probably ok with anything in the range of 2-4s, really
[16:54:42 CEST] <wpppp> kepstin: so a gop/gov value of 2 to 4?
[16:54:55 CEST] <kepstin> no, a gop length of 2-4 seconds
[16:55:12 CEST] <wpppp> I understand
[16:55:13 CEST] <kepstin> so multiply that by your framerate to set the gop size/ keyframe interval
[16:55:17 CEST] <wpppp> ah!
[16:55:20 CEST] <wpppp> framerate, of course
[16:56:32 CEST] <wpppp> kepstin: frames/s * 2 to 4 s = frames (gop/gov value)? :)
[16:56:38 CEST] <wpppp> unit cancellation included :O
[17:09:49 CEST] <alexpigment> wpppp: if your stream is 30fps, a GOP value of 60 will be 2 seconds
[17:10:06 CEST] <alexpigment> if your stream is 25fps, a GOP value of 50 will be 2 seconds
[17:10:25 CEST] <alexpigment> if your stream is 30fps, a GOP value of 120 will be 4 seconds
[17:10:27 CEST] <alexpigment> etc
[17:10:33 CEST] <alexpigment> hopefully that clears up any confusion
[17:10:52 CEST] <wpppp> yes
[17:11:04 CEST] <wpppp> so google/youtube wants up to 4 seconds
[17:13:11 CEST] <wpppp> 2-4 and restreamer docs recommend 25 to max. 61 - so with smallest GOP I would scratch at that limit
[17:14:02 CEST] <alexpigment> I haven't read the documentation, but I can't imagine YouTube would have a problem with a GOP of 1 second
[17:14:07 CEST] <kepstin> looks like youtube prefers 2, since that is pretty much the min you can use for segmented streaming, any longer and you get more latency.
[17:14:21 CEST] <kepstin> at least in their transcodes, I think they use 2.
[17:15:02 CEST] <alexpigment> Anyway, just set it to 50 or 60 (depending on your original framerate) and call it a day :)
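A hedged example of what that looks like with libx264 for a 30 fps stream: -g sets the keyframe interval in frames (30 fps x 2 s = 60), -keyint_min and -sc_threshold 0 keep the GOPs fixed-length, and the bitrate and RTMP target are placeholders:

    ffmpeg -i input -c:v libx264 -preset veryfast -g 60 -keyint_min 60 -sc_threshold 0 -b:v 4500k -c:a aac -b:a 128k -f flv rtmp://<ingest-url>/<stream-key>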
[17:15:43 CEST] <wpppp> that's what I wanted to know :D
[17:15:46 CEST] <wpppp> thanks man
[17:16:42 CEST] <wpppp> alexpigment: ha, let's crack open another nice can of worms
[17:17:10 CEST] <wpppp> alexpigment: Hm, what is an Encoding interval?
[17:17:20 CEST] <wpppp> what is optimal? I guess it also has to do with GOV?
[17:17:34 CEST] <kepstin> wpppp: not standard encoder terminology, reference the docs for the product :/
[17:18:15 CEST] <wpppp> kepstin: like https://wiki.allprojects.info/display/ODMDOC/6.+Managing+Video#id-6.ManagingVideo-6.4VideoStreaming(optionally)
[17:19:11 CEST] <kepstin> wpppp: that seems to describe what it does pretty accurately?
[17:19:53 CEST] <wpppp> kepstin: right and I found out that the encoding interval has to be at least as large as GOV length
[17:19:58 CEST] <wpppp> which makes sense somehow I think
[17:19:59 CEST] <__deivid__> Hi! I have over 6k video files (mp4) and I want to detect if they have "faststart" enabled. What can I do?
[17:20:29 CEST] <__deivid__> I know *some* don't have it, but most do
[17:20:30 CEST] <kepstin> wpppp: that doesn't make sense... you want encoding interval to be 1 in almost all cases...
[17:21:22 CEST] <kepstin> wpppp: only reason to use anything other than 1 is if your cpu is overloaded by video encoding, so you want to skip frames. (e.g. using 2 halves the framerate, which also means you should halve the gop size)
[17:21:54 CEST] <wpppp> kepstin: ok, for whatever strange reason the UI sets the encoding interval from 1 back to GOP size :/
[17:22:22 CEST] <wpppp> aha, maybe it is something different
[17:22:25 CEST] <wpppp> a bug maybe, sec
[17:39:04 CEST] <BtbN> kepstin, there's no diminishing returns at all.
[17:39:16 CEST] <BtbN> It's pretty common to use endless gops in applications where seeking is not needed
[17:39:41 CEST] <BtbN> Like, for example the Wii U tablet uses a P/B Only stream, only sending I frames when required
[17:40:15 CEST] <kepstin> well, the longer the gop, the smaller the increase in bitrate adding each keyframe is. so the gains are relatively smaller for longer gops?
[17:40:43 CEST] <kepstin> diminishing doesn't mean there aren't still returns of course :)
[17:49:53 CEST] <BtbN> kepstin, depending on the connection, the burst caused by an IFrame may still be a significant problem
[18:12:31 CEST] <Hopper_> Hey all, anyone available to help me troubleshoot my FFmpeg command?
[18:12:36 CEST] <Hopper_> https://pastebin.com/uSibwXrN
[18:13:15 CEST] <DHE> so what's wrong with it?
[18:13:22 CEST] <Hopper_> DHE: Glad you're here!
[18:13:27 CEST] <DHE> oh god
[18:14:07 CEST] <Hopper_> When I run that command I get a bunch of brown "Past duration 0.0numbers too large" responses.
[18:14:24 CEST] <Hopper_> Oh god?
[18:14:30 CEST] <Hopper_> Is it that bad?
[18:17:14 CEST] <DHE> no, I'm just busy and you're pouncing on me specifically
[18:17:26 CEST] <durandal_1707> depends
[18:17:57 CEST] <Hopper_> DHE: I certainly didn't mean to single you out, but I appreciate your help previously.
[18:18:16 CEST] <Hopper_> durandal_1707: What do you mean?
[18:21:01 CEST] <thebombzen> Hopper_: "past duration too large" is usually not a problem
[18:21:41 CEST] <Hopper_> Okay, but when I attempt to open the file my players don't display anything.
[18:21:59 CEST] <thebombzen> did you try playing it in ffplay or mpv as a test?
[18:22:45 CEST] <Hopper_> mpv
[18:23:00 CEST] Last message repeated 1 time(s).
[18:23:08 CEST] <Hopper_> Hah, sorry wrong keyboard.
[18:23:17 CEST] <thebombzen> what "past duration too large" basically means in this case, is well, ffmpeg receives variable framerate content from the filtergraph and doesn't know the framerate so it tries to guess based on the timestamps
[18:23:25 CEST] <thebombzen> and if it's slightly off you get this error
[18:24:05 CEST] <thebombzen> In this case, you know what the framerate is. It's 30. Try adding -r 30 after the filtergraph to force it to be constant framerate 30 fps
[18:24:32 CEST] <Hopper_> Okay, I'll still get a feed, it is just likely to have bad timing.
[18:24:37 CEST] <Hopper_> Okay, I'll try that.
[18:24:52 CEST] <thebombzen> -r 30 will force 30 fps constant framerate and duplicate/drop frames to achieve it... which should not be a problem for you seeing as it's already 30 fps
[18:25:00 CEST] <thebombzen> but what it does do is set the mpegts container to 30 fps
[18:26:05 CEST] <thebombzen> Hopper_: also, if you're trying to put two videos side-by-side, use the hstack filter. It's better than pad/overlay
[18:26:13 CEST] <Hopper_> thebombzen: which component is the filtergraph?
[18:26:23 CEST] <Hopper_> Okay, I'll look into that as well!
[18:26:25 CEST] <thebombzen> the filtergraph is your -filter_complex
[18:26:42 CEST] <thebombzen> and yea, hstack does what you need without messing with huge canvasses or transparency or whatever
[18:26:55 CEST] <thebombzen> it's much faster but less powerful because it only stacks videos horizontally
[18:27:02 CEST] <thebombzen> (for vertically, you can use vstack accordingly)
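A minimal sketch of the hstack suggestion with two hypothetical file inputs of equal height, plus the -r 30 mentioned earlier to pin the output container to constant 30 fps:

    ffmpeg -i left.mp4 -i right.mp4 -filter_complex '[0:v][1:v]hstack' -r 30 -c:v libx264 out.ts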
[18:28:15 CEST] <Hopper_> Hm, mpv seems unresponsive in my version of debian.
[18:29:04 CEST] <wpppp> hi again
[18:29:21 CEST] <wpppp> in youtube in "nerds" statistics I see 30fps, is that really 30fps? or will youtube transform all streams to 30fps?
[18:29:22 CEST] <wpppp> even 25fps?
[18:29:49 CEST] <furq> it's really 30fps
[18:30:51 CEST] <wpppp> ah cool
[18:30:53 CEST] <wpppp> why? :)
[18:31:16 CEST] <furq> i don't know how to answer that
[18:32:57 CEST] <Hopper_> thebombzen: hstack is slick, but again produces a file that wont play on VLC or MPV.
[18:33:36 CEST] <thebombzen> Hopper_: what does "won't play" mean
[18:33:59 CEST] <Hopper_> VLC tries, mpv opens and immediately closes.
[18:34:13 CEST] <thebombzen> what does mpv say on the command line?
[18:34:17 CEST] <furq> i don't think mjpeg is supported in mpegts
[18:34:43 CEST] <Hopper_> furq: It works when I was using a single video feed.
[18:34:56 CEST] <thebombzen> wpppp: youtube doesn't transform streams to 30 fps, so if you see a 30 fps video, it means whoever uploaded it uploaded a 30 fps video
[18:34:59 CEST] <thebombzen> there isn't more to it
[18:35:14 CEST] <furq> thebombzen: i'm not entirely sure of that
[18:35:35 CEST] <thebombzen> (I know that. If you upload a video with a weirdass framerate between 30 and 60 it might turn it to 30)
[18:35:37 CEST] <furq> obviously it does support 24/25/30/48/50/60 amongst others
[18:35:47 CEST] <furq> but i assume like 28 or something would get converted to 30
[18:35:53 CEST] <furq> that's the uploader's fault for having a dumbass video though
[18:35:54 CEST] <thebombzen> 28 goes to 25 I think
[18:36:15 CEST] <furq> i know anything below 6 gets converted to 6
[18:36:18 CEST] <wpppp> apparently my ipcam only supports 24/25fps although they advertised 30fps for full hd resolution :/
[18:36:28 CEST] <wpppp> anyhow, I don't think there is a big difference between 25 and 30fps
[18:36:33 CEST] <furq> 25 is fine
[18:36:36 CEST] <wpppp> when it would be 24 vs 60fps, then I would understand it
[18:36:36 CEST] <wpppp> right
[18:36:39 CEST] <furq> as anyone in a pal region knows
[18:36:39 CEST] <wpppp> but still :/
[18:36:47 CEST] <wpppp> hmm, isn't NTSC a bit higher? and why?
[18:36:52 CEST] <furq> ntsc is 30
[18:37:01 CEST] <wpppp> that explains it :D
[18:37:05 CEST] <wpppp> my ipcam uses pal right now
[18:37:07 CEST] <thebombzen> ntsc is 30 if interlaced, but ntsc films are still 24 in progressive
[18:37:12 CEST] <wpppp> I see
[18:37:15 CEST] <furq> because pal regions have 50hz power and ntsc regions have 60
[18:37:18 CEST] <wpppp> ah
[18:37:21 CEST] <wpppp> crazy analog stuff
[18:37:27 CEST] <thebombzen> but they still use 24 for films in North America
[18:37:30 CEST] <furq> pal is traditionally higher vertical resolution to make up for it
[18:37:31 CEST] <wpppp> hm
[18:37:41 CEST] <furq> but not really since everything switched to HD
[18:37:56 CEST] <wpppp> why do all these IPcams still offer the PAL/NTSC option? even if they got full hd rtsp streaming and onvif and all the other digital stuff?
[18:38:04 CEST] <thebombzen> but 30 is not super relevant anymore
[18:38:08 CEST] <thebombzen> outside of interlaced content
[18:38:18 CEST] <thebombzen> hd content is usually 24 fps or 60 fps most of the time
[18:38:24 CEST] <wpppp> thanks! :D
[18:38:26 CEST] <wpppp> this explains
[18:38:39 CEST] <thebombzen> (in North America I mean. I have no idea what is standard in Europe)
[18:38:57 CEST] <furq> 50i or 25p is standard in europe for hd and sd
[18:39:10 CEST] <wpppp> I hope the last leftovers of analog PAL/NTSC interlacing, power frequency crap will be removed soon :D
[18:39:16 CEST] <furq> good luck with that
[18:39:26 CEST] <thebombzen> power frequency will always be relevant
[18:39:35 CEST] <wpppp> even in digital world?
[18:39:41 CEST] <furq> we're still using yuv, which is for backward compatibility with black and white tvs
[18:39:42 CEST] <thebombzen> but also keep in mind that it already is being phased out
[18:39:57 CEST] <furq> and by "we" i mean basically all video on the internet
[18:39:58 CEST] <thebombzen> furq: YUV also is more compressible than RGB, so there's still a reason to use it
[18:40:01 CEST] <furq> sure
[18:40:09 CEST] <thebombzen> chroma subsampling OP
[18:40:10 CEST] <wpppp> and thanks for the hint! I switched my ipcam to NTSC mode and now I can select 30fps - the question is: Are these real 30 Full HD fps?
[18:40:21 CEST] <thebombzen> how are we supposed to answer that
[18:40:25 CEST] <thebombzen> you're the one with the camera
[18:40:30 CEST] <alexpigment> chroma subsampling is pretty dumb :( not sure why we still do it
[18:40:38 CEST] <furq> because it's half the bandwidth
[18:40:39 CEST] <thebombzen> because it saves 50% of the bitrate instantly
[18:40:46 CEST] <thebombzen> for no visually noticeable difference
[18:40:48 CEST] <furq> except it isn't really with modern codecs
[18:40:53 CEST] <alexpigment> yeah, but it's also throwing away half the color information
[18:40:55 CEST] <furq> it's still an appreciable saving though
[18:41:00 CEST] <thebombzen> so? humans suck ass at seeing color
[18:41:03 CEST] <thebombzen> that's the whole point
[18:41:11 CEST] <alexpigment> try using chroma key on anything that's subsampled
[18:41:25 CEST] <thebombzen> sure, it's less information for editing
[18:41:35 CEST] <thebombzen> but it makes a lot of sense outside of intermediary storage
[18:41:35 CEST] <furq> i don't think your tv station is really that bothered about your ability to chromakey
[18:42:01 CEST] <thebombzen> the whole point is that it's a human-specific visual optimization. Most data like that that gets thrown out is intentionally thrown out for realtime playback
[18:42:15 CEST] <furq> itunes continue to ignore my requests for 24/96 flac so that i can remaster their music in my shed
[18:42:18 CEST] <furq> the bastards
[18:42:26 CEST] <alexpigment> yeah, i just think it's a dumb way to think about information. just throw away half. we're MP3-izing broadcast and hard media
[18:42:30 CEST] <furq> thanks to them i don't even have a shed
[18:42:48 CEST] <thebombzen> alexpigment: well once you deal with lossless video you appreciate lossy
[18:43:10 CEST] <nicolas17> alexpigment: do you expect them to stream lossless RGB video to you?
[18:43:12 CEST] <alexpigment> thebombzen: doing 4:4:4 lossy is still by no means anywhere near lossless
[18:43:20 CEST] <alexpigment> nicolas17: see above
[18:43:34 CEST] <thebombzen> alexpigment: well you *can* use 4:4:4 for intermediary editing.
[18:43:40 CEST] <alexpigment> of course I can
[18:43:45 CEST] <nicolas17> chroma subsampling is just yet another lossy compression technique adapted to human eyes
[18:43:47 CEST] <thebombzen> then why are you complaining
[18:43:52 CEST] <alexpigment> i'm saying we shouldn't be using 4:2:0 as a broadcast and physical media standard
[18:43:57 CEST] <furq> how would they stream all 900 high-quality tv channels if they weren't saving so much bandwidth
[18:44:07 CEST] <thebombzen> alexpigment: why not? You can't see the difference if you're playing in realtime.
[18:44:08 CEST] <alexpigment> aight, you guys are just missing the point
[18:44:18 CEST] <alexpigment> why do we even use PCM on DVD?
[18:44:18 CEST] <furq> i will gladly take fucked video quality on shows i like so that SUPERCASINO +1 can continue to exist
[18:44:22 CEST] <alexpigment> or PCM on CD?
[18:44:27 CEST] <thebombzen> alexpigment: we don't use pcm on a dvd
[18:44:33 CEST] <furq> you can
[18:44:38 CEST] <alexpigment> thebombzen: ???????
[18:44:43 CEST] <alexpigment> thebombzen: ????????????
[18:44:44 CEST] <thebombzen> we can, but we don't.
[18:44:48 CEST] <alexpigment> who is we?
[18:44:49 CEST] <furq> i've seen it lots of times
[18:44:52 CEST] <alexpigment> of course they use it
[18:44:58 CEST] <alexpigment> i've got hundreds of dvds with PCM
[18:44:59 CEST] <furq> most DVDAs use lpcm
[18:45:08 CEST] <thebombzen> why do we use pcm on a CD? idk it was written in the 1970s or whatever
[18:45:11 CEST] <furq> and concert videos etc
[18:45:20 CEST] <alexpigment> yeah, i have tons of music dvds
[18:45:24 CEST] <nicolas17> alexpigment: what else could CDs have possibly used?
[18:45:25 CEST] <thebombzen> go ask the people who originally wrote CD-audio
[18:45:28 CEST] <furq> also yeah if CDs were invented today there's no way they'd use PCM
[18:45:32 CEST] <thebombzen> you know, before mp3s existed
[18:45:45 CEST] <thebombzen> before flac existed
[18:45:55 CEST] <alexpigment> the point is that we already had lossless standards. why are we settling for this subsampled BS in 2017, when we have more storage and bandwidth than ever?
[18:45:58 CEST] <nicolas17> before either mp3 or flac was computationally feasible
[18:46:00 CEST] <furq> they'd use AAC or some similarly patent-encumbered bullshit so that they could fill the rest of the disc with ads for dick pills
[18:46:09 CEST] <nicolas17> alexpigment: we have more computing power than ever :P
[18:46:11 CEST] <thebombzen> alexpigment: because lossless audio and lossless video are very different
[18:46:16 CEST] <thebombzen> lossless audio isn't impractically large
[18:46:22 CEST] <furq> alexpigment: inertia
[18:46:22 CEST] <alexpigment> sure
[18:46:27 CEST] <furq> everything already supports yuv420p
[18:46:27 CEST] <thebombzen> video is orders of magnitude larger than audio
[18:46:29 CEST] <alexpigment> and i'm not talking about lossless video here
[18:46:33 CEST] <alexpigment> just 4:4:4
[18:46:38 CEST] <furq> and so everyone only broadcasts yuv420p
[18:46:39 CEST] <alexpigment> this is not a huge increase in size
[18:46:42 CEST] <alexpigment> 2x at most
[18:46:46 CEST] <thebombzen> that is enormous
[18:46:51 CEST] <alexpigment> compared to what?
[18:46:55 CEST] <alexpigment> it's small stuff
[18:46:57 CEST] <thebombzen> a 100% increase in size is enormous
[18:47:04 CEST] <furq> if you started broadcasting 444 then nobody would notice except the vast numbers of people whose shit tvs can't decode it
[18:47:07 CEST] <nicolas17> 2x is "small stuff"?
[18:47:13 CEST] <alexpigment> yeah, 2x is small
[18:47:17 CEST] <furq> it wouldn't be 2x
[18:47:18 CEST] <thebombzen> also, alexpigment, you're overestimating the visual difference of 4:2:0
[18:47:21 CEST] <nicolas17> it's by definition twice as big :P
[18:47:21 CEST] <thebombzen> it's pretty negligible
[18:47:25 CEST] <furq> even mpeg2video would compress it much more than that
[18:47:42 CEST] <alexpigment> thebombzen: not at all. i realize it's a very small difference. it's just a thing we shouldn't be doing
[18:47:48 CEST] <ritsuka> alexpigment: are you a friend of the guy on doom9.org that derails every thread in a 4:4:4 versus 4:2:0 flame? ;)
[18:48:01 CEST] <thebombzen> alexpigment: on principle? because it's not noticeable to humans
[18:48:03 CEST] <alexpigment> no, i'm not a doom9 kinda person
[18:48:06 CEST] <thebombzen> it's really not noticeable
[18:48:10 CEST] <thebombzen> it's a very good optimization
[18:48:12 CEST] <furq> it is noticeable sometimes
[18:48:20 CEST] <furq> i agree it's not worth it in general though
[18:48:25 CEST] <thebombzen> furq: yea, I know that, it's noticeable rarely
[18:48:30 CEST] <furq> even assuming every decoder magically supported 444 tomorrow
[18:48:41 CEST] <thebombzen> but in cases where it matters (like a screengrab) you can use it
[18:48:47 CEST] <alexpigment> thebombzen: yes, on principle mostly. the fact that you can't do chroma key on 4:2:0 and yet that's the current video standard is kinda dumb
[18:49:02 CEST] <nicolas17> the people producing 4:2:0 don't care if you can't do chroma key
[18:49:04 CEST] <furq> you can't practically use it because any consumer-friendly endpoint you'd send the video to would convert it to 420
[18:49:15 CEST] <thebombzen> alexpigment: you're complaining about being unable to do an intermediary video editing thing on end-user video
[18:49:19 CEST] <wpppp> constant bitrate, variable bitrate and finally "fixed quality" with "very high, low, etc" subjective selection
[18:49:20 CEST] <wpppp> what is better? :)
[18:49:26 CEST] <nicolas17> furq: most consumer cameras would save 4:2:0 too
[18:49:42 CEST] <furq> you don't do screen capture with a camera
[18:49:51 CEST] <alexpigment> thebombzen: i'm saying that we shouldn't be standardizing a format that's functionally ill-equipped for editing
[18:49:55 CEST] <thebombzen> alexpigment: you say you can't do chroma key and then you're complaining that your end-user video can't do it. pick one - end-user video? or intermediary editing?
[18:50:08 CEST] <thebombzen> it's only standard for consumers
[18:50:11 CEST] <alexpigment> end user video should be, if anything, a bitrate change
[18:50:12 CEST] <alexpigment> that's all
[18:50:15 CEST] <thebombzen> why?
[18:50:28 CEST] <alexpigment> because i don't think it's right to make gimpy standards
[18:50:33 CEST] <alexpigment> it's a shortsighted notion
[18:50:39 CEST] <thebombzen> no, no it's not
[18:50:39 CEST] <nicolas17> that's why 4:4:4 h264 is a standard too
[18:50:42 CEST] <alexpigment> yes it is
[18:50:43 CEST] <furq> this isn't a recent thing though
[18:50:47 CEST] <alexpigment> it is extremely short-sighted
[18:50:47 CEST] <furq> these standards date back to the 70s
[18:50:55 CEST] <furq> there's just no impetus to change them
[18:51:20 CEST] <alexpigment> man, i feel like i'm talking to people who work at YouTube or something
[18:51:21 CEST] <thebombzen> alexpigment: you keep complaining that end-user tools are different than intermediary industry codecs
[18:51:34 CEST] <furq> you'd need everyone to agree that it would be good to move to gbrp or xyz12le or whatever
[18:51:37 CEST] <furq> and that's not going to happen
[18:51:52 CEST] <thebombzen> this is not a fair complaint, there's a reason consumer tools and industry tools are different.
[18:51:54 CEST] <nicolas17> why would broadcast video care about your ability to chroma-key? it's illegal for you to edit it that way anyway :P
[18:52:43 CEST] <alexpigment> nicolas17: because your camera follows the dumb standards. and you can't do chroma key on your phone videos, for example, because people like you guys are justifying these dumb standards
[18:52:49 CEST] <wpppp> NTSC vs PAL in a _IP_camera - why at all?
[18:52:50 CEST] <alexpigment> for a cost of what? 2x bitrate
[18:53:02 CEST] <furq> again, it's much less than 2x bitrate
[18:53:05 CEST] <furq> unless you're streaming rawvideo
[18:53:15 CEST] <alexpigment> talking max here
[18:53:17 CEST] <furq> it's a few percent with h264
[18:53:40 CEST] <nicolas17> alexpigment: I have 260GB of iframe-only 4:2:0 video, that means 30 days vs 15 days for me to upload it to the internet
[18:53:51 CEST] <nicolas17> if it was 2x as you say
[18:54:10 CEST] <alexpigment> nicolas17: doesn't it feel dumb to have i-frame video that's always throwing away half the color information?
[18:54:11 CEST] <nicolas17> also means the memory card would have lasted half as long while recording the timelapse
[18:54:14 CEST] <alexpigment> like why do iframe video at that point
[18:54:15 CEST] <thebombzen> alexpigment: are you seriously complaining that your phone camera doesn't use industry-quality recording
[18:54:18 CEST] <alexpigment> you already have a gimpy video
[18:54:34 CEST] <nicolas17> alexpigment: for ease of editing, also that's how this camera did timelapses
[18:54:43 CEST] <alexpigment> thebombzen: why the hell are you justifying this divide between industry and consumer standards? do you *want* to have to pay a premium?
[18:54:54 CEST] <thebombzen> alexpigment: because there's different tools for differnet purposes
[18:54:54 CEST] <nicolas17> alexpigment: why are you trying to convince *us* about it?
[18:55:15 CEST] <furq> because it's fun to have a moan on irc
[18:55:17 CEST] <alexpigment> i'm not sure why any of you are even arguing with this. do any of you work in the tv/film industry at all??
[18:55:30 CEST] <thebombzen> alexpigment: I don't use ffv1 on my computer because I don't like paying for external hard drives
[18:55:32 CEST] <alexpigment> or do you guys just watch youtube all day?
[18:55:33 CEST] <nicolas17> alexpigment: no, so why are you trying to convince us that things should be different?
[18:55:44 CEST] <thebombzen> alexpigment: you just barge in and tell us that we're all wrong
[18:55:53 CEST] <alexpigment> you guys are the only ones arguing. all i said was that chroma subsampling is dumb. that's how this started
[18:55:59 CEST] <alexpigment> i didn't think i needed to defend that statement
[18:56:02 CEST] <nicolas17> well it's not dumb, /thread :P
[18:56:09 CEST] <alexpigment> ok, it's settled then
[18:56:44 CEST] <furq> it is a bit dumb
[18:57:05 CEST] <Hopper_> Okay, now I'll jump in again.
[18:57:40 CEST] <Hopper_> For the hstack file I generated, ffplay fails and says "Failed to open file or configure filtergraph"
[19:00:07 CEST] <Hopper_> thebombzen: Any ideas?
[19:00:44 CEST] <thebombzen> Hopper_: furq said mpegts doesn't support mjpeg (or rather, might not)
[19:01:23 CEST] <thebombzen> also
[19:02:07 CEST] <Hopper_> And I said that mpegts works fine when I'm not using filter_complex
[19:02:18 CEST] <thebombzen> okay, but still post the complete command and output
[19:02:45 CEST] <Hopper_> Sorry, for creating or playback?
[19:04:24 CEST] <kepstin> Hopper_: both if you can, the creating ffmpeg command at a minimum
[19:06:28 CEST] <Hopper_> Creation- https://pastebin.com/3KMf9mC1
[19:06:46 CEST] <Hopper_> Playback- https://pastebin.com/RrqSFpTp
[19:07:49 CEST] <Hopper_> I assume it's an issue with different codecs.
[19:11:46 CEST] <kepstin> Hopper_: those are both the playback logs, you're missing the creation log
[19:12:03 CEST] <kepstin> but yeah, looks like it's an unsupported codec in mpegts, so the player can't figure out how to play it
[19:12:26 CEST] <Hopper_> Embarrassing. I'll put creation up now.
[19:12:30 CEST] <Hopper_> pastebin
[19:14:21 CEST] <Hopper_> creation- https://pastebin.com/Teg5ydJ9
[19:14:48 CEST] <Hopper_> I wonder why the ts worked when it was just one video stream but fails with two.
[19:15:43 CEST] <alexpigment> for what it's worth - this is anecdotal really - the mpegts muxer fails in a lot of stupid ways in my experience. but it's usually just that the audio doesn't get included in my tests
[19:16:04 CEST] <kepstin> dunno, what was your full command and output with one video stream?
[19:16:27 CEST] <kepstin> my guess is that it was just using a different video codec...
[19:17:54 CEST] <Hopper_> Sorry for being so slow on the pastebins, the comp I'm using for the encoding isn't networked.
[19:19:16 CEST] <Hopper_> Single source- https://pastebin.com/r3ZCvQpM
[19:19:31 CEST] <Hopper_> Nope, messed that up again, one sec.
[19:21:00 CEST] <kepstin> Hopper_: you didn't include the -vcodec (or -c:v) option on that one, so it defaulted to mpeg2, which is a codec which works fine in mpegts
[19:21:14 CEST] <kepstin> the -vcodec output option*
[19:21:46 CEST] <kepstin> so yeah, the reason the one with the filter didn't work is because you also changed the output codec from mpeg2 to mjpeg.
[19:22:48 CEST] <Hopper_> It works if I let it use the default codec, it just has a TERRIBLE framerate.
[19:24:41 CEST] <kepstin> the framerate issues are unrelated to the selected codec - you see all those 'past duration too large' messages? they indicate a framerate problem in the filter graph.
[19:25:02 CEST] <Hopper_> So it's an issue with the output of my camera?
[19:25:32 CEST] <kepstin> Hopper_: it looks like the cameras are giving you 60fps even though you asked for 30
[19:25:44 CEST] <kepstin> try switching the -framerate option to 60 on both, and see if that helps
[19:26:09 CEST] <Hopper_> And the output?
[19:26:34 CEST] <kepstin> the output will be 60fps then.. if you want 30, you can add an fps filter
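A hedged rewrite of the capture command along those lines, assuming v4l2 MJPEG capture from both cameras (device paths and frame size are assumptions): ask the cameras for 60 fps and drop back to 30 with an fps filter after the hstack:

    ffmpeg -f v4l2 -input_format mjpeg -framerate 60 -video_size 1280x720 -i /dev/video0 \
           -f v4l2 -input_format mjpeg -framerate 60 -video_size 1280x720 -i /dev/video1 \
           -filter_complex 'hstack,fps=30' -c:v libx264 output.ts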
[19:27:10 CEST] <wpppp> what is so bad about 60fps? :P
[19:27:15 CEST] <wpppp> it is awesome
[19:27:18 CEST] <Hopper_> Output started at 15 and is dropping.
[19:27:18 CEST] <wpppp> so why less?
[19:29:16 CEST] <nicolas17> uh
[19:29:33 CEST] <nicolas17> ...are you talking about the fps= in the progress line while ffmpeg runs?
[19:30:12 CEST] <Hopper_> Yes, but it is obviously not playing back at 30fps.
[19:31:06 CEST] <nicolas17> "frame= 605 fps=16 q=-0.0 Lsize=..." that thing?
[19:31:32 CEST] <nicolas17> that 'fps' says how many frames per second ffmpeg is processing, it has nothing to do with the output you will get
[19:32:01 CEST] <nicolas17> it lets you know how fast the encoding process is doing
[19:32:42 CEST] <Hopper_> I understand, and VLC is playing the file back at 9fps.
[19:41:51 CEST] <Hopper_> It looks like my cameras only put out 10fps at 720 when I'm using yuv420p, instead of mjpeg
[19:42:23 CEST] <furq> are they usb
[19:42:31 CEST] <Hopper_> Yup
[19:42:35 CEST] <furq> that's about right then
[19:43:14 CEST] <furq> usb2 can't really handle 720p30 rawvideo
[19:43:35 CEST] <Hopper_> So would I have to use mjpeg to create a different file type, then convert that file into a .ts?
[19:43:53 CEST] <furq> well mpegts can't contain rawvideo either
[19:43:57 CEST] <furq> so you'd have to convert it either way
[19:44:21 CEST] <furq> ideally you'd use the uncompressed source, but if you're rate limited by usb2, it's probably better to start with mjpeg
[19:44:40 CEST] <furq> do you have any requirements for the output other than it being mpegts
[19:44:55 CEST] <wpppp> over 802.11, bandwidth is higher than 8mbit/s required
[19:44:58 CEST] <wpppp> hm, right
[19:45:03 CEST] <wpppp> ohhhh wrong user
[19:45:54 CEST] <Hopper_> I don't know about my output requirements fully, I'm using a transcoder to push my stream over the air at 1.2GHz.
[19:46:29 CEST] <furq> what's receiving it
[19:46:56 CEST] <Hopper_> A computer with receiver.
[19:47:08 CEST] <furq> you can presumably use any codec then
[19:47:14 CEST] <furq> in which case you probably want to convert it with x264
[19:48:09 CEST] <Hopper_> I'm using a UT-100 if anyone is familiar with that.
[19:48:27 CEST] <Hopper_> http://www.hides.com.tw/product_cg74469_eng.html
[19:49:10 CEST] <furq> if it's a dvb-t transmitter then h.264 should be supported
[19:49:24 CEST] <furq> add -c:v libx264 as an output option
[19:49:31 CEST] <furq> it should be faster and look much better than mpeg2video
[19:50:21 CEST] <Hopper_> So use -vcodec mjpeg for input and -c:v libx264 for output?
[19:50:25 CEST] <furq> yeah
[19:50:33 CEST] <BtbN> Isn't dvb-t mpeg2?
[19:50:44 CEST] <furq> it can be either
[19:51:00 CEST] <Hopper_> So what should my output file type be?
[19:51:02 CEST] <BtbN> iirc DVB-T2 added h264 and hevc
[19:51:04 CEST] <furq> mpegts
[19:51:17 CEST] <kepstin> Hopper_: you're using v4l2 grabbing? The input option you should be using is "-input_format mjpeg", I think
[19:51:39 CEST] <furq> i'm pretty sure regular dvb-t is mpeg2video/h.264 and dvb-t2 is h.264/h.265
[19:51:41 CEST] <furq> i'm not an expert though
[19:51:56 CEST] <kepstin> if -vcodec works as an input option there, then it must be a weird legacy thing :/
[19:51:58 CEST] <JEEB> you can look at EBU specs or so
[19:52:04 CEST] <furq> most countries broadcast m2v over dvb-t but apparently some countries broadcast h.264 over it
[19:52:08 CEST] <JEEB> kepstin: it still works I think, it's just not recommended for usage
[19:52:17 CEST] <furq> it's probably faster to just see if it works
[19:52:23 CEST] <furq> if it doesn't then use -c:v mpeg2video as an output option
[19:52:28 CEST] <BtbN> setting the input codec seems like a normal thing to me?
[19:52:36 CEST] <BtbN> Like, for example selecting the cuvid decoders
[19:52:43 CEST] <furq> yeah that makes more sense to me than input_format
[19:53:07 CEST] <furq> i guess it often means "interpret what you get as" rather than "tell the device to send"
[19:53:32 CEST] <JEEB> very technically DVB-T is just the way the signal is broadcast. after that it's just data, and generally if something supports AVC over MPEG-TS in DVB-T2 it will also most likely take AVC over MPEG-TS from DVB-T stuff
[19:53:40 CEST] <kepstin> BtbN: the input codec option selects which *decoder* to use, yes, that makes sense. But here it's overloaded to mean "have the camera provide a format that's compatible with the decoder I've selected manually", which is strange
[19:53:52 CEST] <furq> JEEB: yeah i'm thinking about what the receiver will accept
[19:53:57 CEST] <furq> rather than what the transmitter will send
[19:54:07 CEST] <furq> i assume the transmitter will send any mpegts stream
[19:54:22 CEST] <BtbN> kepstin, maybe it does still do that, and the ffmpeg.c auto graph building then selects the right format from the supported inputs of the decoder?
[19:59:06 CEST] <Hopper_> Newest Result- https://pastebin.com/RswnwYGd
[19:59:31 CEST] <Hopper_> It's listing the stream output as being 9fps.
[20:02:20 CEST] <kepstin> Hopper_: you didn't set the input codec on one of the cameras, and you set the output codecs twice
[20:02:44 CEST] <kepstin> Hopper_: input options go *before* the inputs
[20:02:52 CEST] <kepstin> before the -i specifically
[20:03:11 CEST] <Hopper_> Oh, I do that before -i?
[20:03:33 CEST] <nicolas17> yes
[20:03:43 CEST] <Hopper_> Thanks, didn't understand that element of the syntax.
[20:03:54 CEST] <nicolas17> ffmpeg <input options> -i input.file <output options> output.file
[20:03:58 CEST] <nicolas17> or rather
[20:04:18 CEST] <kepstin> Hopper_: the command i'd expect you to use is `ffmpeg -f v4l2 -input_format mjpeg -framerate 60 -video_size 1280x720 -i /dev/video0 -input_format mjpeg -video_size 1280x720 -framerate 60 -i /dev/video1 -filter_complex hstack -c:v libx264 outputsplit.ts`
[20:04:19 CEST] <nicolas17> ffmpeg <options for input 1> -i input1.file <options for input 2> -i input2.file ... <output options> output.file
[20:04:44 CEST] <Hopper_> Thanks!
[20:04:59 CEST] <kepstin> Hopper_: you might have to add a `-pix_fmt yuv420p` output option as well, since your cameras are giving you yuv422
[20:05:10 CEST] <kepstin> depends on the dvb specs whether 4:2:2 is allowed I guess
[20:05:25 CEST] <nicolas17> why does conversion from yuvj422p give this warning from swscaler? "deprecated pixel format used, make sure you did set range correctly"
[20:05:47 CEST] <kepstin> nicolas17: internal deprecated stuff, don't worry about it unless you're developing ffmpeg
[20:06:52 CEST] <Hopper_> So Mjpeg must require more cpu power.
[20:07:15 CEST] <Hopper_> My little lattepanda is pegged at 100% and it looks far worse than the default codec.
[20:07:37 CEST] <kepstin> Hopper_: when using mjpeg you're getting more frames to encode (higher framerate), so yes it needs more cpu power
[20:08:09 CEST] <kepstin> 'lattepanda'? oh, gah, are more people doing video encoding on underpowered systems? :(
[20:08:37 CEST] <kepstin> you've only got an atom in there :(
[20:08:38 CEST] <Hopper_> Yes, but this system will be airborne so I don't have the option of sending up a nice powerful system.
[20:09:08 CEST] <kepstin> Hopper_: well, to start, add "-preset veryfast" output option to make the x264 encoder use less cpu
[20:09:43 CEST] <kepstin> and as far as quality, well, you have to pick either a bitrate or crf value that's appropriate for your use case; the default is just a sort of "medium quality"
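A hedged illustration of the two rate-control choices kepstin mentions, with hypothetical values: constant quality via CRF (lower means better quality, 23 is the libx264 default) or an average bitrate:

    ffmpeg -i input.ts -c:v libx264 -preset veryfast -crf 23 out.mp4
    ffmpeg -i input.ts -c:v libx264 -preset veryfast -b:v 6M out.mp4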
[20:10:27 CEST] <nicolas17> how do I achieve a target file size, without caring about bitrate at any particular point in the stream?
[20:10:57 CEST] <nicolas17> does that need two-pass?
[20:11:00 CEST] <nicolas17> AIUI there are ways to limit max bitrate because some devices/networks can't load the input any faster, but that's not what I need
[20:11:15 CEST] <kepstin> nicolas17: do a 2pass encode with bitrate set, yes
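A hedged sketch of that two-pass approach: pick the video bitrate from the size budget (roughly the target size in bits divided by the duration in seconds, minus the audio bitrate), run pass 1 to the null muxer, then pass 2 to the real output:

    ffmpeg -y -i in.mp4 -c:v libx264 -b:v 2M -pass 1 -an -f null /dev/null
    ffmpeg    -i in.mp4 -c:v libx264 -b:v 2M -pass 2 -c:a aac -b:a 128k out.mp4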
[20:11:35 CEST] <Hopper_> kepstin: crf value?
[20:14:39 CEST] <kepstin> Hopper_: what are you doing with the media stream that you're encoding? sending over network, storing on local hd, ...?
[20:15:20 CEST] <Hopper_> Broadcasting OTA.
[20:15:48 CEST] <kepstin> ok, directly from that box, not being transferred somewhere else first?
[20:16:16 CEST] <kepstin> if you don't want to re-encode again, then you need to look up what the broadcast specs for bitrate are, and then encode to those specs.
[20:16:16 CEST] <nicolas17> wait
[20:16:29 CEST] <nicolas17> you're encoding stuff, and sending it over the air to a receiver, and what does the receiver do?
[20:16:38 CEST] <Hopper_> Correct, but my hope is to save and transmit simultaneously.
[20:17:05 CEST] <Hopper_> nicolas17: Saves the file locally and streams it online.
[20:17:22 CEST] <kepstin> Hopper_: assuming you're transmitting with DVB, you'll want to set an appropriate bitrate and also do some vbv controls, since it's a fixed bitrate channel...
[20:17:35 CEST] <nicolas17> so either your receiver system will have to decode and re-encode to the needs of the online stream
[20:18:02 CEST] <nicolas17> or you should make your original sender encode in an online-stream-ready way; but it might not have enough CPU power for that
[20:18:05 CEST] <Hopper_> Ya, I know about the bitrate, I'm just trying to sort out the video to make sure it's even possible before nit picking.
[20:18:54 CEST] <kepstin> presumably the receiver system on the ground can actually be a reasonably powerful pc, so re-encoding there should be fine if needed.
[20:19:03 CEST] <Hopper_> My receiver will have as much power as I need, and I need to keep the transmission lean, since it's not powerful.
[20:19:14 CEST] <Hopper_> kepstin: Exactly.
[20:20:46 CEST] <kepstin> so yeah, find out how many bits you can get through your transmission channel, then use that to configure the encoder bitrate and vbv settings.
[20:22:06 CEST] <thebombzen> Hopper_: do you have to use mpegts?
[20:22:09 CEST] <leif> Has anyone else noticed a huge memory spike when they use the `trim` and `atrim` filters? (In the libavfilter library)?
[20:23:24 CEST] <Hopper_> thebombzen: That's what the transmitter firmware requires.
[20:23:25 CEST] <leif> Like, when I use it from the command line app, everything works fine. But when I take the same filtergraph to a program using libav, I get a huge memory leak. If I replace trim with, say, copy, the memory leak goes away.
[20:23:37 CEST] <thebombzen> Are you re-encoding later?
[20:24:08 CEST] <thebombzen> I ask because you could always encode to H.264 in realtime and put that inside mpegts
[20:24:13 CEST] <Hopper_> kepstin: 9.952Mbps max data rate on the .ts file.
[20:24:32 CEST] <Hopper_> thebombzen: Will I really have the CPU power for that?
[20:24:45 CEST] <thebombzen> I cannot answer that question without knowing what CPU model you have
[20:24:57 CEST] <DHE> also there are options to reduce CPU power required (with a quality penalty)
[20:25:16 CEST] <Hopper_> The preset veryfast helped tremendously.
[20:25:21 CEST] <kepstin> thebombzen: it's a 1.8ghz quad core atom of some vintage
[20:25:36 CEST] <thebombzen> but libx264's faster presets should be fast enough for realtime, but they're going to be lower-quality than the slower presets that aren't intended for realtime
[20:25:45 CEST] <thebombzen> they should still be better than mpeg2video in quality as well
[20:26:00 CEST] <thebombzen> -preset veryfast, for example, will be much faster.
[20:26:07 CEST] <thebombzen> and also much better quality than mpeg2video
[20:26:26 CEST] <thebombzen> er, not faster, but better quality and fast enough for your needs.
[20:26:51 CEST] <thebombzen> if -preset veryfast isn't fast enough, then you can try -preset superfast as well. only go to ultrafast if you have to - it's basically the nuclear option.
[20:27:22 CEST] <kepstin> Hopper_: so yeah, I assume your transmitter handles padding internally, so just something like '-b:v 9M -vbv-maxrate 9M -bufsize 18M' or something should be good enough
[20:27:24 CEST] <thebombzen> ultrafast is far far faster than superfast and also far far worse quality. usually it's not worth it
[20:27:39 CEST] <kepstin> err, '-b:v 9M -maxrate 9M -bufsize 18M'
[20:27:48 CEST] <kepstin> I think, always forget those option names?
[20:27:53 CEST] <kepstin> probably still have it wrong
[20:28:03 CEST] <thebombzen> bufsize is right, I have no idea about maxrate
[20:29:23 CEST] <kepstin> yeah, it's just -maxrate
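Putting kepstin's numbers together for the ~9.95 Mbit/s channel mentioned earlier, a hedged example (the audio bitrate and TS mux overhead also have to fit under the channel rate):

    ffmpeg -i input -c:v libx264 -preset veryfast -b:v 9M -maxrate 9M -bufsize 18M -c:a aac -b:a 128k -f mpegts out.ts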
[20:31:23 CEST] <furq> 19:05:25 ( nicolas17) why does conversion from yuvj422p give this warning from swscaler? "deprecated pixel format used, make sure you did set range correctly"
[20:31:34 CEST] <furq> nicolas17: using yuvj pixel formats for full range isn't recommended any more
[20:31:42 CEST] <furq> it's recommended to use yuv422p and -color_range
[20:31:56 CEST] <furq> yuvj will probably still work though
[20:32:06 CEST] <kepstin> furq: sure, but this message is printed without the user manually specifying color ranges at all, in the ffmpeg cli
[20:32:17 CEST] <furq> if the source is yuvj then yeah
[20:32:36 CEST] <furq> as usual, ffmpeg and the libs disagree on what the right thing to do is
[20:32:41 CEST] <kepstin> furq: source is mjpeg, being decoded by ffmpeg :/
[20:32:42 CEST] <furq> or more accurately ffmpeg just doesn't give a shit
[20:33:09 CEST] <nicolas17> my source is a folder full of yuvj422p .JPG files
[20:35:10 CEST] <furq> does color_range actually do anything as an output option
[20:35:18 CEST] <furq> the docs list it as input/output but only document what it does as an input option
[20:35:21 CEST] <kepstin> it sounds like the issue is that ffmpeg's mjpeg/jpeg decoder is producing frames containing yuvj422p instead of yuv422p with color range set.
[20:35:35 CEST] <furq> well that's expected isn't it
[20:35:40 CEST] <nicolas17> what is yuvj anyway?
[20:35:57 CEST] <furq> yuv with full colour range
[20:36:04 CEST] <kepstin> nicolas17: yuv with the "jpeg" color range (aka full range aka pc range)
[20:36:29 CEST] <furq> as opposed to limited/mpeg/tv range
[20:36:38 CEST] <furq> which is what video normally uses
[20:37:02 CEST] <kepstin> nicolas17: anyways, you don't need to do anything - the decoder's using some deprecated stuff internally, but the scaler is still handling it correctly, so everything works.
[20:37:15 CEST] <furq> are you converting to yuv422p
[20:37:20 CEST] <furq> or is that being autoinserted
[20:39:15 CEST] <nicolas17> furq: if I do nothing, and I just turn this folder of JPEGs into -f mp4 -c:v libx264, I get yuvj422p in the video too, and these warnings:
[20:39:24 CEST] <nicolas17> deprecated pixel format used, make sure you did set range correctly
[20:39:26 CEST] <nicolas17> No pixel format specified, yuvj422p for H.264 encoding chosen.
[20:39:27 CEST] <nicolas17> Use -pix_fmt yuv420p for compatibility with outdated media players.
[20:40:07 CEST] <nicolas17> and if I pass "-pix_fmt yuv420p" I get only the first line of warning, about "deprecated pixel format used"
[20:40:38 CEST] <furq> that is pretty dumb then
[20:41:31 CEST] <nicolas17> I'm not using git master though, I'm on 3.2.6 from Debian testing
[20:42:24 CEST] <furq> yeah i'm messing with it on 3.3
[20:42:40 CEST] <furq> i can't find any way to use -color_range to silence that warning
[20:42:51 CEST] <Hopper_> So what are the options under -preset, and what do they change?
[20:42:52 CEST] <nicolas17> in my little test program using the API, I was using swscale to convert the frame to RGB24 to output a .ppm
[20:42:55 CEST] <nicolas17> same warning
[20:43:02 CEST] <furq> Hopper_: http://dev.beandog.org/x264_preset_reference.html
[20:43:07 CEST] <Hopper_> ty
[20:43:27 CEST] <furq> you should generally use the slowest preset that's fast enough
[20:43:39 CEST] <furq> depends how much bitrate you have though
[20:43:51 CEST] <furq> 9mbit for 720p should be pretty forgiving
[20:44:15 CEST] <Hopper_> How do I determine the correct preset based on that table?
[20:44:20 CEST] <furq> you don't
[20:44:28 CEST] <furq> you run slower presets until one is too slow
[20:44:50 CEST] <nicolas17> test until your particular CPU can't keep up with real-time encoding
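One hedged way to run that test without writing output files is to time each preset against the null muxer and watch the reported speed figure (the input name is a placeholder):
    ffmpeg -i input.mp4 -c:v libx264 -preset medium -f null -
    ffmpeg -i input.mp4 -c:v libx264 -preset slow -f null -
If the speed drops below about 1x, step back to the previous, faster preset.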
[20:45:04 CEST] <furq> the slow presets are mostly there for archival
[20:45:11 CEST] <Hopper_> Okay, I really don't need it to be in real time.
[20:45:18 CEST] <furq> or situations where you can't just give it more bitrate
[20:45:32 CEST] <kepstin> if you're capturing from a camera in realtime, you kind of do need the encoding to work in realtime?
[20:45:37 CEST] <furq> also yeah
[20:45:52 CEST] <furq> except without "kind of"
[20:46:19 CEST] <Hopper_> That makes sense, just trying to figure out the most effective method for using ffmpeg.
[20:46:20 CEST] <kepstin> well, if it's too slow, you might just get dropped frames and weirdness, but it could still technically work i guess :/
[20:46:35 CEST] <furq> it's more likely you'll just run out of buffer and then it will die
[20:46:42 CEST] <furq> either is generally bad
[20:47:25 CEST] <furq> veryfast is probably fine for this use case anyway
[20:47:27 CEST] <Hopper_> Dropped frames are not that much of an issue for me here.
[20:47:29 CEST] <Hopper_> Okay
[20:47:34 CEST] <Hopper_> I'll stick with that for now.
[20:47:54 CEST] <furq> they'll all look equally good if you give them enough bitrate
[20:48:20 CEST] <furq> except maybe ultrafast which pretty much turns everything off
[20:52:48 CEST] <Hopper_> Okay, here's the latest function. https://pastebin.com/PqCbXW9Q
[20:53:18 CEST] <Hopper_> I run that and it produces an output file just fine, but it maxes my CPU, which apparently causes some kind of instability and crashes my machine.
[20:54:19 CEST] <nicolas17> oh, two cameras?
[20:54:23 CEST] <Hopper_> Yup
[20:54:39 CEST] <nicolas17> in what computer did you run that? in the one that will be the sender?
[20:54:45 CEST] <Hopper_> Yes
[20:55:15 CEST] <nicolas17> how much RAM do you have?
[20:55:29 CEST] <nicolas17> seems unlikely to be a RAM-heavy encode, but...
[20:55:57 CEST] <nicolas17> (slower presets keep more frames in memory and can use a lot; you're in veryfast, that shouldn't happen)
[20:58:06 CEST] <Hopper_> It doesn't seem to be ram, watching the resources CPU is maxed ram is just over half.
[21:00:34 CEST] <Hopper_> Can I reduce the framerate somewhere? It seems to be keeping far more frames than needed.
[21:02:11 CEST] <nicolas17> I don't think -framerate is a valid output option
[21:02:22 CEST] <nicolas17> try -r 30 for the output instead of -framerate 30
[21:02:50 CEST] <nicolas17> for some reason it's grabbing 120fps from video0, and making the output framerate 120 too
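Roughly the shape being suggested, since -framerate is an input (per-device) option while -r placed after the inputs constrains the output; everything apart from those two flags is a placeholder, as the real command only lives in the pastebin:
    ffmpeg -f v4l2 -framerate 30 -i /dev/video0 -r 30 -c:v libx264 -preset veryfast output.mkv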
[21:03:10 CEST] <Hopper_> That's it!
[21:03:35 CEST] <Hopper_> That reduced my cpu usage by ~30%!
[21:03:55 CEST] <nicolas17> if the input and filter is really producing 120fps, using -r 30 in the output will drop frames to reach 30fps
[21:04:34 CEST] <nicolas17> I was going to suggest going from 30 to 15fps as a test, then I realized from your paste that it was currently using 120 instead of 30 :P
[21:04:57 CEST] <Hopper_> Ya, that ends up being great, can I get it to only use 30 on the input to save even more overhead?
[21:05:03 CEST] <Hopper_> Or is that what it is doing now?
[21:05:21 CEST] <nicolas17> I don't know why it's grabbing from video0 at 120fps and from video1 at 30fps, you have the same options on both
[21:06:27 CEST] <furq> [video4linux2,v4l2 @ 0x1bcc740] The driver changed the time per frame from 1/30 to 513/61612
[21:06:30 CEST] <furq> wtf
[21:06:33 CEST] <Hopper_> They are different pieces of hardware, but I do have the same input specified, like you said.
[21:07:00 CEST] <nicolas17> "the driver ignored your request and did whatever it felt like doing"
[21:07:02 CEST] <nicolas17> great
[21:07:36 CEST] <Hopper_> That's what I get for using a no name camera.
[21:08:07 CEST] <nicolas17> anyway, x264 encoding will be the most CPU-intensive part, and you're now doing that at 30fps
[21:08:25 CEST] <nicolas17> I think you wouldn't gain much more by making the input 30fps too
[21:08:36 CEST] <furq> fyi if you're doing 480p then you should be able to use rawvideo instead of mjpeg
[21:08:47 CEST] <furq> that might inadvertently fix the framerate issue as well
[21:09:16 CEST] <Hopper_> furq: I kept knocking the res down to make it functional.
[21:09:16 CEST] <nicolas17> oh, get raw video from the cameras?
[21:09:19 CEST] <furq> that will be marginally faster and higher quality
[21:09:43 CEST] <nicolas17> it's definitely worth a try
[21:09:45 CEST] <Hopper_> Here's the misbehaving camera. http://a.co/cKGGsiG
[21:10:09 CEST] <Hopper_> Raw wont take more CPU power?
[21:10:19 CEST] <furq> it'll take less
[21:10:33 CEST] <nicolas17> will take more USB bandwidth, but at 640x480 it probably can cope
[21:10:35 CEST] <furq> the issue with rawvideo over usb2 is the data rate
[21:10:39 CEST] <furq> but at 480p it should be fine
[21:10:40 CEST] <Hopper_> Let's hope the USB bus has enough IO for that.
[21:10:43 CEST] <furq> right
[21:10:49 CEST] <nicolas17> worth a try in any case
[21:11:17 CEST] <furq> 480p30 yuv420p is only about 110mbps
[21:11:19 CEST] <furq> which should be fine
[21:11:26 CEST] <furq> usb2 is normally good for 250ish
[21:11:47 CEST] <Hopper_> so -vcodec what?
[21:11:54 CEST] <furq> nothing
[21:11:57 CEST] <furq> get rid of -vcodec
[21:12:00 CEST] <furq> it should default to rawvideo
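For completeness, the v4l2 demuxer also lets you request the capture format explicitly; a hedged sketch of grabbing raw 480p frames (the device path, pixel format, and the rest of the command are assumptions):
    ffmpeg -f v4l2 -input_format yuyv422 -video_size 640x480 -framerate 30 -i /dev/video0 ...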
[21:12:02 CEST] <Hopper_> Okay
[21:12:43 CEST] <nicolas17> furq: on some devices like the raspberry pi you would need to fit both cameras into that 250
[21:12:44 CEST] <Hopper_> 146Mbps
[21:12:58 CEST] <furq> this is a pc iirc
[21:12:59 CEST] <nicolas17> since all USB ports end in a hub
[21:13:33 CEST] <furq> oh right it's yuv422p
[21:13:41 CEST] <furq> 146mbps is right then
[21:14:14 CEST] <nicolas17> if this device has actual independent usb2 ports, that should work fine
[21:14:37 CEST] <furq> i guess we'll find out
[21:15:10 CEST] <Hopper_> It's functioning but I'm getting LOTS of "past duration too large" lines.
[21:15:23 CEST] <furq> you can usually ignore those
[21:15:50 CEST] <furq> as long as the output looks ok and you're still encoding at realtime
[21:16:11 CEST] <Hopper_> It seems just fine.
[21:16:53 CEST] <nicolas17> what CPU%?
[21:16:57 CEST] <Hopper_> This method uses less CPU, but seems to be more volatile; there are more spikes.
[21:16:58 CEST] <furq> add -v error if you want to shut those warnings up
[21:17:17 CEST] <furq> and yeah if you have >2 usb ports you might want to switch the ports you're using around
[21:17:21 CEST] <Hopper_> Thanks, I might do that.
[21:17:30 CEST] <Hopper_> Why switch the ports?
[21:17:37 CEST] <furq> you'll want the devices on different hubs
[21:17:50 CEST] <Hopper_> ~58% cpus average.
[21:18:10 CEST] <nicolas17> if both cameras are connected to a single (possibly internal) hub, your USB I/O will be considerably more limited
[21:18:11 CEST] <Hopper_> One is on the usb2 one is on the usb1, I assume they're different busses, I didn't check.
[21:18:41 CEST] <nicolas17> oh, usb1? I'd probably keep that one as mjpeg :D
[21:19:08 CEST] <Hopper_> It seems to be functioning, why would there be an issue?
[21:19:09 CEST] <nicolas17> you mean usb 1.1 standard, or "ports 1 and 2"?
[21:19:14 CEST] <Hopper_> 1.1
[21:19:20 CEST] <Hopper_> A not-blue port.
[21:19:38 CEST] <nicolas17> I thought blue meant usb3
[21:20:03 CEST] <Hopper_> I'm wrong! it's usb 3 and usb 2.
[21:20:12 CEST] <Hopper_> nicolas17: Good call.
[21:20:15 CEST] <nicolas17> yeah that makes more sense
[21:20:20 CEST] <nicolas17> usb1 is 12Mbps, it certainly won't cope with rawvideo
[21:21:40 CEST] <Hopper_> Can't do 720 in raw, the driver drops the framerate to 9.
[21:21:52 CEST] <furq> yeah that's way too much
[21:21:56 CEST] <furq> for 720 you'll need mjpeg
[21:22:06 CEST] <Hopper_> Okay,
[21:22:15 CEST] <nicolas17> can ffmpeg decode on separate threads?
[21:22:21 CEST] <furq> width * height * bpp * framerate
[21:23:08 CEST] <furq> (1280 * 720 * 16 * 30) / 1000000 = much more than 300
[21:23:21 CEST] <furq> 442 apparently
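Spelling out the arithmetic being used here (bits per second = width * height * bits per pixel * frames per second):
    640 * 480 * 16 * 30  = ~147 Mbps  (fits within USB 2.0's practical throughput)
    1280 * 720 * 16 * 30 = ~442 Mbps  (too much for a single USB 2.0 link)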
[21:27:02 CEST] <Hopper_> What do the brown past duration lines actually mean?
[21:27:11 CEST] <thebombzen> furq: fyi, sometimes cameras will output 10 fps for rawvideo even if USB2 should be able to handle 30 fps
[21:27:35 CEST] <thebombzen> also, where did you get 16 bits per pixel?
[21:27:39 CEST] <furq> yuv422p
[21:27:54 CEST] <thebombzen> *does some quick arithmetic*
[21:28:19 CEST] <furq> http://vpaste.net/ZXbYK
[21:28:23 CEST] <furq> hi
[21:28:58 CEST] <thebombzen> yea, 422 at 8 is 16bpp
[21:29:03 CEST] <thebombzen> just did the arithmetic
[21:29:11 CEST] <thebombzen> although it's probably yuyv422 rather than yuv422p
[21:29:22 CEST] <thebombzen> I haven't seen yuv422p in a while
[21:29:30 CEST] <kepstin> Hopper_: basically, those 'past duration' lines mean that for some reason, frames are being output from the filter chain faster than the framerate ffmpeg was configured for, and it's therefore dropping them
[21:29:55 CEST] <kepstin> Hopper_: if one of your cameras is outputting higher fps than you asked it to, that might cause that message.
[21:30:28 CEST] <Hopper_> Heh, ya the one that's spitting out 120 even when I asked the driver for 30.
[21:30:50 CEST] <thebombzen> You might want to prepend an fps filter before you feed it to hstack
[21:30:55 CEST] <furq> is it still doing 120 with rawvideo
[21:31:10 CEST] <Hopper_> What furq said.
[21:31:21 CEST] <thebombzen> an fps filter will duplicate/drop frames to force a constant framerate
[21:31:29 CEST] <thebombzen> so if you're receiving 120 and only want 30, fps=30 will drop frames to get 30
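A hedged sketch of what that filtergraph might look like with two capture inputs stacked side by side (the actual graph is only in the pastebin, so treat this as illustrative):
    -filter_complex "[0:v]fps=30[a];[1:v]fps=30[b];[a][b]hstack[out]" -map "[out]"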
[21:35:35 CEST] <Hopper_> One of my cameras is on a slight delay, I assume that's driver based and not ffmpeg, right?
[21:35:45 CEST] <nicolas17> probably
[21:36:35 CEST] <kepstin> Hopper_: ffmpeg opens the inputs one at a time, so there can be a slight delay on one if the first is slow to initialize, i guess.
[21:40:09 CEST] <Hopper_> You folks are awesome, thanks so much for all of your help. I'm certainly not done, but want to show my appreciation for all of your efforts so far.
[21:40:53 CEST] <Hopper_> And thanks to this wonderful community ffmpeg will be getting a donation from me.
[22:00:31 CEST] <danieeel> is the i420 format full-range or limited range? i can't get my picture to show correctly :/
[22:01:09 CEST] <kepstin> danieeel: could be either, you need additional metadata to know. I'd expect it to normally be limited range when dealing with video.
[22:01:21 CEST] <furq> there is no such format in ffmpeg
[22:01:26 CEST] <furq> but yeah it's probably limited range
[22:01:42 CEST] <kepstin> i420 is equivalent more or less to yuv420p in ffmpeg, yeah
[22:01:50 CEST] <furq> it could be yuvj420p though right
[22:01:58 CEST] <danieeel> yes, it is yuv420p
[22:02:10 CEST] <furq> that's limited range then
[22:02:14 CEST] <BtbN> yuvj420p is just yuv420p with jpeg color range. The pixel format is identical
[22:02:18 CEST] <furq> right
[22:02:20 CEST] <kepstin> but yuvj420p is deprecated, it should be yuv420p with the range set appropriately ;)
[22:02:24 CEST] <furq> i420 by itself isn't enough information
[22:02:33 CEST] <furq> yuv420p is definitely limited range though
[22:02:46 CEST] <furq> kepstin: i still haven't figured out how you set the range appropriately
[22:02:49 CEST] <BtbN> it's nothing. The color_space defines what it is
[22:03:00 CEST] <furq> do you mean -color_range
[22:03:09 CEST] <BtbN> something like that
[22:03:10 CEST] <furq> i'm yet to figure out how to make that do anything
[22:03:15 CEST] <kepstin> furq: it's internal API stuff, not really exposed on the ffmpeg cli, aside from as options to the scaler filters
[22:03:20 CEST] <furq> the docs are incredibly unclear about it
[22:03:32 CEST] <danieeel> when i export a frame from a broadcast camera it shows luma range 17-255
[22:03:39 CEST] <furq> er
[22:03:42 CEST] <danieeel> from mp4
[22:04:21 CEST] <kepstin> danieeel: most limited range video contains overshoot above white or below black, that's normal
[22:04:25 CEST] <kepstin> it'll be clipped during playback
[22:05:50 CEST] <nicolas17> what is AVProgram for?
[22:07:16 CEST] <danieeel> btw, can the choice of 601/709 standard (and others) be enforced somehow? (use case: reading my raw and converting to rgb... as a reference checker)
[22:08:24 CEST] <JEEB> there's a colorspace filter in libavfilter, and then there's the zscale filter that does colorspace conversions and scaling with the zimg library
[22:08:49 CEST] <thebombzen> what's the recommended denoise filter of the day?
[22:08:50 CEST] <JEEB> I haven't used the colorspace filter myself, but it seems like some people like it. zimg is what I've been planning to use in my own things
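As a hedged illustration only, both filters take the source and target standards as options; the exact values below are examples, not something verified against this user's footage:
    -vf colorspace=all=bt709:iall=bt601-6-625
    -vf zscale=matrixin=470bg:matrix=709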
[22:09:24 CEST] <thebombzen> JEEB: doesn't swscale also do colorspace conversion?
[22:09:42 CEST] <thebombzen> but yea what is the denoise filter that is recommended nowadays? there's a whole bunch of them
[22:09:48 CEST] <nicolas17> hmph
[22:09:51 CEST] <JEEB> yes, but I'm pretty much not sure it follows colorspace information
[22:10:03 CEST] <nicolas17> I created an output format context with avformat_alloc_output_context2
[22:10:07 CEST] <JEEB> nor am I sure that, even if it does, I would want to use swscale
[22:10:11 CEST] <JEEB> I've used swscale before :P
[22:10:13 CEST] <nicolas17> added a video stream to it with avformat_new_stream
[22:10:20 CEST] <nicolas17> then called avformat_init_output
[22:10:29 CEST] <nicolas17> warning: Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[22:11:17 CEST] <nicolas17> was I supposed to set anything in AVStream.codecpar myself? I didn't touch any fields, dunno why it's complaining that I'm using AVStream.codec
[22:13:00 CEST] <JEEB> unless you're using that thing yourself it's an internal issue due to the usage of that variable
[22:18:57 CEST] <nicolas17> okay, I guess it complained because I *didn't* set things in codecpar
[22:19:07 CEST] <nicolas17> so it warned, and looked at codec (where I hadn't set anything either)
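A minimal sketch, assuming an H.264 video stream, of the codecpar fields that usually need filling in before avformat_write_header(); the dimensions and codec here are placeholders:
    AVStream *st = avformat_new_stream(ofmt_ctx, NULL);
    /* either copy the parameters from an opened encoder context... */
    avcodec_parameters_from_context(st->codecpar, enc_ctx);
    /* ...or set the essentials by hand: */
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_H264;
    st->codecpar->width      = 1280;  /* placeholder */
    st->codecpar->height     = 720;   /* placeholder */
    st->codecpar->format     = AV_PIX_FMT_YUV420P;
    st->time_base            = (AVRational){1, 30};  /* the muxer may adjust this */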
[22:38:01 CEST] <furq> thebombzen: probably hqdn3d
[22:38:32 CEST] <thebombzen> I did a quick test of atadenoise, hqdn3d, and pp=tn just to see
[22:38:35 CEST] <furq> or nlmeans but iirc that's really slow in lavf
[22:38:52 CEST] <furq> there's an opencl version in vapoursynth that i've used before
[22:38:55 CEST] <thebombzen> I found that hqdn3d was the most effective at denoising but it also was kind of blurry
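If hqdn3d's defaults look too soft, its four strengths (luma spatial, chroma spatial, luma temporal, chroma temporal) can be turned down; a hedged example using roughly half the default values:
    ffmpeg -i input.mkv -vf hqdn3d=2:1.5:3:2.25 output.mkv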
[22:39:07 CEST] <furq> try nlmeans
[22:39:18 CEST] <furq> do it quick before durandal_1707 shows up and tells us it's only for animes
[22:39:33 CEST] <thebombzen> eh, I'll keep that in mind
[22:39:34 CEST] <nicolas17> furq: hush, don't highlight them
[22:39:39 CEST] <furq> it isn't only for animes
[22:40:01 CEST] <thebombzen> well if it's best at that, I'll keep that in mind. but speaking of things only useful for anime, the video I had was really noisy and nothing in lavfi really could handle it
[22:40:09 CEST] <furq> it isn't even best at that afaik
[22:40:14 CEST] <furq> not that i'd know
[22:40:15 CEST] <furq> animes are bad
[22:40:27 CEST] <thebombzen> so I ended up dumping the frames and batch filtering through waifu2x
[22:40:42 CEST] <thebombzen> slow as fuck, but kind of the nuclear option cause the video was so noisy
[22:41:17 CEST] <thebombzen> it works even on non-anime stuff too, even though it's designed to remove JPEG crud and not actual noise
[22:41:45 CEST] <nicolas17> nice
[22:41:48 CEST] <durandal_1707> you haven't tried vaguedenoiser
[22:41:54 CEST] <thebombzen> correct, I have not
[22:41:56 CEST] <nicolas17> does waifu2x *need* a GPU?
[22:41:56 CEST] <thebombzen> what is that
[22:42:15 CEST] <thebombzen> nicolas17: no, but it's not fast enough to batch process large amounts of images without one
[22:42:30 CEST] <thebombzen> mine has CUDA support compiled in. I'm already half-done processing the ~3000 frames
[22:42:48 CEST] <durandal_1707> thebombzen: for animes
[22:42:48 CEST] <nicolas17> I'll need The Cloud then
[22:43:08 CEST] <nicolas17> my laptop has an Intel integrated GPU
[22:43:45 CEST] <thebombzen> durandal_1707: that answers what it's for but not what it is
[22:43:50 CEST] <thebombzen> is vaguedenoiser a filter in lavfi?
[22:43:57 CEST] <furq> !filter vaguedenoiser
[22:43:57 CEST] <nfobot> furq: http://ffmpeg.org/ffmpeg-filters.html#vaguedenoiser
[22:44:37 CEST] <thebombzen> how does vaguedenoiser stack up to waifu2x as a denoiser?
[22:44:50 CEST] <thebombzen> also does it work in realtime?
[22:45:01 CEST] <nicolas17> how do I encode video with the API?
[22:45:40 CEST] <nicolas17> to demux I just pass a URL to avformat_open_input, to mux I don't see anything getting a URL, do I need to create my own IOContext and stuff?
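For what it's worth, the output URL normally goes through avformat_alloc_output_context2() plus avio_open() rather than a hand-rolled IO context; a rough sketch, with the filename as a placeholder and error handling omitted:
    AVFormatContext *ofmt_ctx = NULL;
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, "output.mp4");
    /* ...add streams and fill codecpar as sketched earlier... */
    if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
        avio_open(&ofmt_ctx->pb, "output.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(ofmt_ctx, NULL);
    /* av_interleaved_write_frame() per packet, then: */
    av_write_trailer(ofmt_ctx);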
[22:45:57 CEST] <durandal_1707> is waifu2x rt?
[22:48:46 CEST] <durandal_1707> vaguedenoiser could be sped up with frame threading
[22:52:58 CEST] <furq> you mean like with vapoursynth
[22:53:39 CEST] <thebombzen> is it rt? I'm not sure what you mean by rt
[22:53:49 CEST] <thebombzen> oh realtime
[22:53:52 CEST] <thebombzen> no it's not
[22:54:16 CEST] <thebombzen> does vaguedenoiser denoise as effectively as waifu2x? and does vaguedenoiser work in realtime?
[22:56:22 CEST] <durandal_1707> i dunno who is your waifu
[22:59:24 CEST] <thebombzen> durandal_1707: I don't have a waifu. I just meant at denoising anime screenshots
[23:00:59 CEST] <thebombzen> durandal_1707: what are the value ranges of vaguedenoiser's threshold?
[23:02:03 CEST] <durandal_1707> ffmpeg -h filter=vaguedenoiser
[23:02:39 CEST] <thebombzen> well it goes from 0 to DBL_MAX
[23:02:42 CEST] <thebombzen> what are *sane* values?
[23:03:14 CEST] <thebombzen> durandal_1707: I assume I'm not supposed to enter 7.8 * 10^9
[23:04:15 CEST] <durandal_1707> something up to 15
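So a hedged example within that range might look like this (the threshold value is only illustrative):
    ffmpeg -i input.mkv -vf vaguedenoiser=threshold=4 output.mkv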
[23:06:18 CEST] <leif> I'm trying to figure out a memory leak whenever I use the 'trim' filter (as mentioned a few hours ago); anyway, it looks like I get an EOF on the output buffer, but the input buffer still seems to happily accept packets. Is that expected behavior?
[23:06:24 CEST] <leif> errr...frames, not packets.
[23:06:29 CEST] <leif> It still accepts frames
[23:09:19 CEST] <durandal_1707> leif: full command line output missing
[23:12:30 CEST] <leif> durandal_1707: I'm using the libavfilter API rather than the command line.
[23:12:58 CEST] <leif> The command line output isn't particularly illuminating, here it is:
[23:13:12 CEST] <leif> https://gist.github.com/LeifAndersen/74a0eeff053ee6f425a3c87b681c73ed
[23:14:47 CEST] <thebombzen> durandal_1707: Thanks. It wasn't quite realtime, but it worked well
[23:15:00 CEST] <thebombzen> I discovered that waifu2x doesn't have temporal consistency so it's not good to use on framedumps
[23:15:12 CEST] <thebombzen> it works fine on individual images though
[23:18:26 CEST] <leif> Hmm...looking at ffmpeg.c's `transcode_from_filter` function, it looks like once a sink returns EOF, you're supposed to stop putting frames into the source.
[23:18:44 CEST] <leif> Now I just need to figure out how to map sinks to their sources (in case of the copy/acopy filter)
[23:18:47 CEST] <leif> Anyway, thanks.
[23:20:58 CEST] <leif> Oh.... `av_buffersrc_get_nb_failed_requests`...so that's what that's used for. :)
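A rough sketch of the pattern being described, assuming a single buffersrc/buffersink pair; once the sink reports EOF, nothing more is pushed into the source:
    for (;;) {
        int ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
        if (ret == AVERROR(EAGAIN))
            break;                /* sink wants more input: keep feeding buffersrc */
        if (ret == AVERROR_EOF) {
            input_done = 1;       /* e.g. trim reached its end: stop calling av_buffersrc_add_frame() */
            break;
        }
        if (ret < 0)
            break;                /* genuine error */
        /* ...consume filt_frame... */
        av_frame_unref(filt_frame);
    }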
[00:00:00 CEST] --- Thu Jul 6 2017