[Ffmpeg-devel-irc] ffmpeg.log.20191123

burek burek at teamnet.rs
Sun Nov 24 03:05:03 EET 2019


[00:41:56 CET] <Pie-jacker875> is avc and opus in an mp4 container a thing?
[00:42:25 CET] <JEEB> sure
[00:42:50 CET] <Pie-jacker875> I'm messing about with obs' ffmpeg output
[00:42:58 CET] <JEEB> FFmpeg will only create the opus part with -strict experimental, but effectively all vendors (browsers f.ex.) have added support
[00:43:04 CET] <Pie-jacker875> ahh okay
[00:43:05 CET] <Pie-jacker875> thanks
[00:44:02 CET] <Pie-jacker875> now to just figure out how to format the -strict experimental in obs' encoder settings field
[00:44:32 CET] <BtbN> You don't, I'm pretty sure
[00:44:56 CET] <Pie-jacker875> well just typing "-strict -2" isn't working there
[00:45:07 CET] <BtbN> Since it's not CLI options, but directly setting avoptions on the encoder
[00:49:37 CET] <Pie-jacker875> alright I'm lost
[00:50:46 CET] <BtbN> If they do not have an option to enable experimental stuff, it's straight up not possible.
[00:51:36 CET] <Pie-jacker875> it's a text field where you set encoder settings and it doesn't support that option for some reason?
[00:51:50 CET] <BtbN> It's not the ffmpeg.c CLI, no
[00:52:01 CET] <Pie-jacker875> okay then
[00:52:01 CET] <BtbN> it's just parsed by OBS to set avoptions on the encoder.
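A minimal CLI sketch of what JEEB and BtbN describe (file names are placeholders, and the video is assumed to be H.264 already; -strict -2 is the numeric equivalent of -strict experimental):

    ffmpeg -i input.mkv -c:v copy -c:a libopus -strict experimental output.mp4

The flag relaxes the standards-compliance check that otherwise makes the mp4 muxer reject Opus.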
[01:37:16 CET] <slidercrank> hello! I want to combine two videos by placing them next to each other. Unfortunately their sizes, rotation and length (time) do not match. Is it possible to do it in one go? Or should I do it in multiple steps? Cutting the beginning, rotating and then combining with rescaling?
[01:38:36 CET] <shibboleth> will i have to run xorg as whichever user is using ffmpeg api to use hwaccel?
[02:00:39 CET] <slidercrank> disregard my question. I've figured it out by reading docs, forums and experimenting
[02:07:16 CET] <slidercrank> a strange thing.
[02:07:19 CET] <slidercrank> $ ffmpeg -ss 00:00:03 -i left.mp4 -i right.mp4   -filter_complex "[1:v]transpose=1,scale=-2:1280,setsar=1[right];[0:v][right]hstack"  -t 5 out.mp4
[02:07:52 CET] <slidercrank> the problem I am facing is that the left video is shorter (in height) than the right video after combining them
[02:09:10 CET] <slidercrank> the output video has the height of the left video (as needed) but the left video becomes shorter in height (padded with black space above)
[02:09:50 CET] <slidercrank> but I don't want this padding. how do I do it?
[02:10:49 CET] <furq> slidercrank: pastebin ffprobe -show_streams left.mp4
[02:12:06 CET] <slidercrank> https://paste.ubuntu.com/p/CNJgnRKZQc/
[02:14:31 CET] <furq> this is already portrait, why are you transposing it
[02:14:57 CET] <slidercrank> when I play the video, it's in landscape mode
[02:14:59 CET] <furq> wait nvm that's right.mp4
[02:15:11 CET] <slidercrank> yes, that's right.mp4
[02:15:15 CET] <furq> yeah ffprobe that then
[02:15:56 CET] <slidercrank> that's probe right https://paste.ubuntu.com/p/JmRVs4vW62/
[02:16:57 CET] <slidercrank> the same scene was taken with two smartphones (different manufacturers). so everything is different about the videos (resolution and such). but I need to combine them.
[02:17:32 CET] <slidercrank> to show what the users of both smartphones saw at the same time
[02:19:40 CET] <slidercrank> I'm using -t 5 because I'm still testing, before I'm ready to convert the whole video(s)
[02:20:11 CET] <slidercrank> furq, the padding in the left picture https://i.imgur.com/pf35OkX.jpg
[02:23:22 CET] <furq> i don't see anything obvious and none of those filters should introduce padding
[02:23:49 CET] <furq> is it still there with -i right.mp4 -vf transpose=1 -frames:v 1 out.png
[02:24:46 CET] <slidercrank> why right.mp4? the padding appears for left.mp4
[02:25:14 CET] <slidercrank> when I issue your command (how you wrote it), there is no padding
[02:27:54 CET] <furq> sorry i've apparently forgotten how cardinal directions work today
[02:28:03 CET] <furq> it's even more confusing that it's happening to the one that isn't being filtered
[02:28:57 CET] <slidercrank> thank you for taking the time to help me
[02:29:27 CET] <furq> i guess try -i left.mp4 -frames:v 1 out.png and see if the padding shows up
[02:29:49 CET] <furq> not sure what you'd do if it does other than crop it
[02:31:05 CET] <slidercrank> ffmpeg -i left.mp4 -frames:v 1 out.png produces a black image
[02:31:33 CET] <slidercrank> maybe the first frame is black
[02:31:40 CET] <furq> yeah add -ss 5 or something
[02:32:12 CET] <slidercrank> yes, the padding is added above
[02:33:08 CET] <slidercrank> https://i.imgur.com/vyqciuf.png
[02:33:28 CET] <slidercrank> that's really strange. because in the video there is no padding
[02:34:42 CET] <slidercrank> oh damn! there is.
[02:35:07 CET] <slidercrank> what a strange phone that records with a padding :)
[02:35:15 CET] <slidercrank> so I'll crop it then
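To find the crop values without eyeballing the frame, the cropdetect filter prints suggested crop=w:h:x:y parameters (a sketch; the -ss 5 just skips the black opening frames):

    ffmpeg -ss 5 -i left.mp4 -vf cropdetect -frames:v 30 -f null -

The suggested values appear in the log as e.g. crop=720:1264:0:16 (hypothetical numbers here) and can be passed straight to a crop filter.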
[02:35:41 CET] <slidercrank> furq, thank you very much
[02:50:24 CET] <slidercrank> ffmpeg rules
[03:10:32 CET] <slidercrank> furq, btw, if you are interested what video that was https://imgur.com/a/6050Jyd the result of combining
[03:11:34 CET] <slidercrank> indoor navigation with augmented reality
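Putting the thread together, a one-pass version with the crop folded in might look like this (the crop numbers are hypothetical and would come from cropdetect above; the right-hand video is scaled to the cropped height so hstack receives inputs of equal height):

    ffmpeg -ss 00:00:03 -i left.mp4 -i right.mp4 \
      -filter_complex "[0:v]crop=720:1264:0:16[left];[1:v]transpose=1,scale=-2:1264,setsar=1[right];[left][right]hstack" \
      -t 5 out.mp4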
[06:15:47 CET] <while> Hi, I am trying to find the correct ffmpeg/ffplay syntax to decode a UDP mpegts stream with 2 raw streams, one a raw yuv420p video stream and the other a pcm_s16le audio stream
[06:16:32 CET] <while> I don't think I am getting the syntax correct to properly classify the 1st stream (containing the headerless raw video) so ffplay/ffmpeg can properly process it
[06:19:07 CET] <furq> mpegts doesn't support rawvideo
[06:19:28 CET] <furq> going to assume the next question is "how come ffmpeg will mux it into mpegts then" and the answer is i don't know
[06:20:04 CET] <furq> nor do i know why nobody has done anything about it in the six years since the mailing list thread on the subject
[06:22:59 CET] <while> so mpegts is not a codec-agnostic blunt solution for containerizing/multiplexing?
[06:23:30 CET] <furq> not quite
[06:25:10 CET] <while> is there anything like mpegts but more rugged?
[06:26:33 CET] <furq> not really if you need it to be streamable
[06:26:36 CET] <furq> otherwise mkv or maybe nut
[06:27:58 CET] <while> thank you
[06:32:26 CET] <while> I found that `ffmpeg -f mpegts -vcodec rawvideo -i udp://127.0.0.1:30000 -map:v 0:0 delete.yuv` properly decodes mpegts stream 0 containing the raw video
[06:35:52 CET] <furq> i tried that and it didn't work
[06:35:55 CET] <furq> so who knows what's going on there
[06:36:37 CET] <while> what command did you use to create the udp stream?
[06:39:39 CET] <while> I mean, my command only works with my existing udp stream at 127.0.0.1:30000; if you don't have said udp stream, which is specifically mpegts with the 0th stream being rawvideo, it shouldn't work
[06:40:16 CET] <furq> i didn't make a udp stream, i just made a rawvideo .ts with ffmpeg
[06:43:37 CET] <while> with raw video in the 1st stream encapsulated within mpegts?
[06:43:43 CET] <furq> yeah
[06:45:46 CET] <while> well, my command snippet may be relying on something akin to undefined behavior; I'm using the ffmpeg package straight from the Debian repositories
[06:51:38 CET] <while> also my example appears to just copy the data instead of doing any conversion with it; rawvideo is complicated to process beyond copying it, of course
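A sketch of the NUT route furq suggested, since NUT is streamable and handles raw video plus PCM without mpegts's codec restrictions (the address and file names are placeholders):

    # sender: raw yuv420p video and pcm_s16le audio muxed into NUT over UDP
    ffmpeg -i input.mp4 -c:v rawvideo -pix_fmt yuv420p -c:a pcm_s16le -f nut udp://127.0.0.1:30000
    # receiver
    ffplay -f nut udp://127.0.0.1:30000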
[09:20:52 CET] <UserNaem0> i have a question about video filters: how do i apply a filter only to frames that are "similar" to the previous frame (i.e. duplicate frame with slightly different encoding artifacts)?
[09:30:15 CET] <UserNaem0> whoops, i logged off accidentally, i hope i didn't miss anything
[09:31:57 CET] <Reinhilde> you didn't
[10:06:51 CET] <Arnob> test
[10:07:11 CET] <Arnob> test 2
[11:20:06 CET] <jemius> [libopus @ 0x5632234b6920] Invalid channel layout 5.1(side) for specified mapping family -1.
[11:20:10 CET] <jemius> what might this mean?
[14:19:54 CET] <Weasel_> jemius: https://trac.ffmpeg.org/ticket/5718
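The workaround commonly used for that ticket (stated here as an assumption, not quoted from the ticket itself) is to relabel the 5.1(side) layout as plain 5.1 before it reaches libopus:

    ffmpeg -i input.mkv -af "channelmap=channel_layout=5.1" -c:a libopus output.mka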
[20:43:10 CET] <Pie-jacker875> yo I was here last night asking about strict experimental on obs. seems like it works when you put it in the muxer settings field
[20:43:13 CET] <Pie-jacker875> it's working
[21:11:39 CET] <jemius> when compressing jpegs, what's the difference between -compression_level and -qscale ?
[21:12:42 CET] <JEEB> the difference is whether you're setting one option or the other in the configuration context; whether that matters at all depends on the module
[21:17:06 CET] <JEEB> ok, so q:v or qscale sets the global_quality value in the encoder it seems
[21:18:38 CET] <furq> jemius: compression_level is generally for lossless codecs, q/qscale is generally for lossy codecs
[21:21:54 CET] <JEEB> I must say, I am surprised now that I stumbled upon ff_mpv_encode_init
[21:22:16 CET] <JEEB> this one function encompasses a crapload of formats
[21:22:51 CET] <JEEB> it does utilize the qscale flag which q/qscale indeed sets
[21:23:30 CET] <JEEB> but yea, I didn't find any mention of the compression_level option anywhere :P
[21:23:42 CET] <JEEB> since yes, it seemed to be a lossless compression thing regarding "how hard to attempt to compress this"
[21:24:01 CET] <JEEB> but whether a module actually utilizes it or not you'd only figure out by reading the code
[21:24:13 CET] <JEEB> unfortunately the encoders do not exactly export the information of which things they actually read/follow
[21:27:37 CET] <furq> yeah it's a shame that stuff is so poorly documented
[21:30:40 CET] <JEEB> AVOptions let you peek into what a module takes in, but it's harder to do that for a gigantic structure that is just there
[21:30:55 CET] <JEEB> and if you start manually documenting, it might or might not be correct
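A quick illustration of the split furq describes, on two encoders where the options are known to apply (file names are placeholders):

    # lossy: mjpeg reads q/qscale; 2 is best quality, 31 is worst
    ffmpeg -i input.png -c:v mjpeg -q:v 2 output.jpg
    # lossless: png reads compression_level as a size/speed trade-off (higher = smaller but slower)
    ffmpeg -i input.jpg -c:v png -compression_level 9 output.png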
[22:51:58 CET] <void09> JEEB: how did you say I could find timestamps of decoding errors?
[22:52:16 CET] <void09> I forgot what was said, but I think I did not have success with any of the methods suggested
[00:00:00 CET] --- Sun Nov 24 2019

