[Ffmpeg-devel-irc] ffmpeg.log.20190926

burek burek at teamnet.rs
Fri Sep 27 03:05:03 EEST 2019


[00:05:35 CEST] <DHE> -vf crop=newwidth:newheight:xoffset:yoffset (leaving out the offsets defaults to a center-of-frame crop)
[00:22:16 CEST] <sagax> thanks
[01:26:36 CEST] <brimestone> Hey guys, I have this file with this ffprobe data https://gist.github.com/brimestoned/728b975a913c16c74a1c65e31c4ef46a how do I use AVFormatContext to get specific values from the metadata shown on that link?
[01:32:57 CEST] <klaxa> brimestone: https://git.videolan.org/?p=ffmpeg.git;a=blob;f=fftools/ffprobe.c;h=238041722974694de1af1f7b18ceced23ec677d0;hb=refs/heads/master#l2469
[01:33:22 CEST] <brimestone> Wow thanks!
[01:33:34 CEST] <klaxa> that's the exact code that prints what you pasted
[01:33:47 CEST] <klaxa> you should be able to work it from there ;)
[01:36:41 CEST] <brimestone> How do I remove the number on the left from the view?
[01:37:05 CEST] <brimestone> Got it.. nvm
[05:03:59 CEST] <ls3dev33> hi
[05:04:23 CEST] <ls3dev33> hey guys, good day. I am capturing a bunch of files that are already h264. The problem is that on channel changes the video lags at start. Is there a way to segment the input by I-frame to speed up the channel changes? Currently I tried to segment into 6-second chunks, but I guess the I-frame is an issue because it doesn't always start at the 7th second mark
[05:04:33 CEST] <ls3dev33> I can't force key frames because I'm not transcoding, but is there any way to segment every 6 seconds and then make sure the start of each file is always an I-frame?
[05:22:33 CEST] <furq> -f segment -segment_time 6 out%04d.mp4
[05:22:45 CEST] <furq> the segments won't be six seconds long, they'll be as close as possible but cut on an idr frame
[05:29:14 CEST] <lain98> < DHE> lain98: have you actaully run something like avformat_find_stream_info to populate it?
[05:29:47 CEST] <lain98> yes i have. stream->codecpar->format is still -1
[05:31:28 CEST] <ls3dev33> furq thank you, will test.. does segment_key also work?
[05:31:36 CEST] <ls3dev33> i'm also encrypting it
[05:32:20 CEST] <ls3dev33> i'm using like -hls_list_size 5 -hls_key_info_file whatever.key
[05:34:47 CEST] <ls3dev33> also i'm using HLS segment and it doesn't allow those options
[06:23:09 CEST] <ls3dev33> @furq, i can't use keys with this segment version :(
[08:40:18 CEST] <MoziM> how can a video have separate audio tracks for language?
[08:41:02 CEST] <JEEB> you have this thing called a container
[08:41:15 CEST] <MoziM> oh...
[08:41:25 CEST] <JEEB> a container can have multiple streams in it, including a video, audio or subtitles
[08:41:29 CEST] <JEEB> some containers are more limited than others
[08:41:33 CEST] <MoziM> it's beginning to make sense now -_-
[08:41:48 CEST] <JEEB> for example, FLV only takes one video and audio. On the other hand MPEG-TS or MP4 or Matroska can have multiple of all
[08:41:58 CEST] <MoziM> ya... so in this case i'm using the word "track" wrong right? an audio stream contains tracks
[08:42:06 CEST] <MoziM> interesting...
[08:42:07 CEST] <JEEB> well either word is "correct"
[08:42:14 CEST] <JEEB> in MP4 streams are called tracks
[08:42:23 CEST] <JEEB> in FFmpeg they're AVStreams
[08:42:24 CEST] <JEEB> :)
[08:42:38 CEST] <JEEB> so it depends on when something was created and what word someone came up with for it
[08:42:40 CEST] <MoziM> but when talking about selectable audio between french and english it's a stream isn't it?
[08:42:55 CEST] <JEEB> once again, either word is used in different places xD
[08:43:00 CEST] <JEEB> in MP4 they're tracks
[08:43:21 CEST] <JEEB> just saying that either word is not incorrect per se
[08:43:52 CEST] <JEEB> it all just means that there's multiple "pipes" of things in a container :)
[08:49:39 CEST] <MoziM> gotcha...
[09:54:38 CEST] <hendry> when recording my desktop, sometimes... I end up with a file that can't be played. mpv reports "moov atom not found" when trying to play it
[09:55:17 CEST] <hendry> when I try to re-encode the file, I get "Invalid data found when processing input"
[09:56:30 CEST] <hendry> here is the log file https://s.natalian.org/2019-09-26/1569483654.mp4.log
[09:58:29 CEST] <JEEB> hendry: basically you didn't let the writing finish?
[09:58:55 CEST] <JEEB> mp4 files without fragmentation have a single thing at the end of the file which is required to decode it
[10:00:10 CEST] <hendry> Killing the process should allow ffmpeg to finish writing? Or am I mistaken? https://github.com/kaihendry/recordmydesktop2.0/blob/master/x11capture#L70
[10:00:57 CEST] <hendry> https://github.com/kaihendry/recordmydesktop2.0/blob/master/x11capture#L70 sorry I do a   kill -INT $pid
[10:01:48 CEST] <hendry> Or is there a better VAAPI-accelerated intermediate format, other than mp4?
[11:13:15 CEST] <lain98> same question as yesterday. how do i get the correct stream->codecpar->format value. Most of the other data in codecpar seems to be populated correctly but format is -1, when i'm quite sure the video is yuv420p
[11:32:44 CEST] <Radiator> lain98 Are you reading or writing a video? Because if you are reading, you won't be able to set a pixel format. But writing, you will
[11:33:19 CEST] <Radiator> The format is set by the codec you are using. CUDA uses NV12, for example
[11:43:53 CEST] <lain98> Radiator: i'm demuxing
[11:44:01 CEST] <lain98> i want to know the pixel format
[11:45:21 CEST] <Radiator> It's the frame you receive that will give you the format
[11:45:24 CEST] <lain98> i want to know the pixel format for something else. i know that i wont be able to set it
[11:45:53 CEST] <lain98> so i cant know without decoding a frame ?
[11:45:59 CEST] <Radiator> Yup
[11:46:25 CEST] <Radiator> I mean, from what I know, you can only know the pix format once decoded
[11:47:28 CEST] <lain98> that doesn't look like what the documentation says
[11:48:17 CEST] <lain98> int AVCodecParameters::format video: the pixel format, the value corresponds to enum AVPixelFormat. audio: the sample format, the value corresponds to enum AVSampleFormat.
[11:50:40 CEST] <Radiator> It might be related to muxing, since what you receive indicates AV_PIX_FMT_NONE, which isn't good. Plus the stream isn't really used when demuxing, since it is your codec context that will be called to retrieve the packets
[11:51:22 CEST] <lain98> hmm okay
[11:51:39 CEST] <lain98> so it would be meaningful if i was muxing instead of demuxing.
[11:51:50 CEST] <lain98> and i have to set the format value when muxing
[11:51:52 CEST] <lain98> ?
[11:51:53 CEST] <Radiator> As well as your AVFormatContext
[11:52:37 CEST] <JEEB> generally in muxing you don't need that at least for video
[11:52:53 CEST] <Radiator> Yes you have to. Correct me if I'm wrong, but if you don't, it finds a format suitable to the container
[11:53:03 CEST] <lain98> all of this was just a workaround to get the bit depth.
[11:53:06 CEST] <JEEB> and if you really need the pix_fmt or sample format I would only trust a value received from an AVFrame in the decoder
[11:53:18 CEST] <JEEB> lain98: yes containers usually don't have that
[11:53:49 CEST] <lain98> okay.
[11:53:53 CEST] <JEEB> also I have streams that switch resolution in the middle for example.
[11:53:57 CEST] <Radiator> lain98: Look at AVPixFmtDescriptor might help you
[11:54:23 CEST] <JEEB> so if you "want the truth" you are in for pain
[11:54:23 CEST] <JEEB> but the best bet is to just decode the first sample
[11:54:23 CEST] <lain98> hmm
[11:54:24 CEST] <lain98> actually
[11:54:30 CEST] <JEEB> receive until you get an AVFrame, and boom
[11:54:37 CEST] <JEEB> you have something
[11:54:46 CEST] <Radiator> +1 JEEB
[11:54:49 CEST] <JEEB> for example with H.264 you can fully reconfigure the stream
[11:54:52 CEST] <lain98> i was using the decoder api from nvidia. it's giving me something wrong so i thought i could fall back on ffmpeg
[11:55:02 CEST] <JEEB> bit depth, pixel format, resolution
[11:55:05 CEST] <JEEB> it's all dynamic
[11:55:23 CEST] <Radiator> lain98: You can configure ffmpeg to use the NVIDIA SDK as well
[11:55:42 CEST] <Radiator> https://developer.nvidia.com/ffmpeg
[11:56:13 CEST] <BtbN> I wouldn't follow anything they write there about how to build ffmpeg though, it's pretty bad.
[11:56:20 CEST] <Radiator> But be aware that encoding with nvidia isn't the most efficient
[11:56:26 CEST] <lain98> its a pain in the butt
[11:56:34 CEST] <BtbN> Or rather, it's overly complicated and outdated.
[11:57:53 CEST] <BtbN> It's correct about the nv-codec-headers, but configure does not need any special flags once those are installed (to somewhere pkg-config finds them)
[11:57:54 CEST] <lain98> okay, i understand what i have to do. thanks
[11:57:54 CEST] <Radiator> BtbN: I wouldn't say so; unless you follow their instructions blindly you won't run into deep issues. Just using their flags to enable the SDK and the hwaccel is pretty straightforward and easy to do
[11:58:06 CEST] <BtbN> Radiator, they instruct you to get the CUDA SDK for example. Which at this point, is entirely unused by ffmpeg.
[11:58:35 CEST] <BtbN> And cuvid has long been half-deprecated in favor of nvdec
[11:59:02 CEST] <BtbN> same for scale_npp instead of scale_cuda
[11:59:22 CEST] <Radiator> BtbN Unless you do hardware acceleration, in which case it is used. And yes, cuvid is the black sheep of it all, as are a lot of their other flags
[11:59:45 CEST] <BtbN> "Unless you do hardware acceleration which then is used"?
[11:59:51 CEST] <BtbN> Not sure what you mean by that.
[12:00:23 CEST] <Radiator> Aren't nvdec and nvenc used by ffmpeg?
[12:00:29 CEST] <BtbN> Yes?
[12:00:32 CEST] <lain98> okay, ah yes i have another question
[12:01:13 CEST] <Radiator> BtbN: But for the configuraton of ffmpeg using nvidia you just need to use "--enable-cuda-sdk --enable-nvdec --enable-nvenc"
[12:01:33 CEST] <lain98> i was trying to demux a vp9+mp4 file. but AVStream->duration is something undefined
[12:01:58 CEST] <BtbN> Radiator, --enable-cuda-sdk is deprecated and effectively does nothing at this point. FFmpeg does not need or use the CUDA SDK.
[12:02:01 CEST] <lain98> even the ffmpeg application says its undefined
[12:02:22 CEST] <Radiator> BtbN Oh I didn't know that
[12:02:34 CEST] <BtbN> The last thing which you still needed the SDK for was nvcc, which was recently replaced with clang.
[12:02:39 CEST] <lain98> webm container with vp9 codec
[12:02:58 CEST] <lain98> ffmpeg output says "Duration: N/A, start: 0.000000, bitrate: N/A"
[12:03:10 CEST] <lain98> i got this video off youtube
[12:12:02 CEST] <lain98> okay, this is a known bug https://trac.ffmpeg.org/ticket/7800
[12:34:27 CEST] <lain98> webm is essentially mkv right ?
[12:53:27 CEST] <cehoyos> webm is a subset of mkv
[14:18:21 CEST] <kepstin> also note that the webm for youtube is built for dash streaming, and so might be missing some metadata typical of a standalone file.
[14:18:35 CEST] <kepstin> iirc youtube-dl normally remuxes it
[15:21:22 CEST] <taliho> JEEB: Re: the probing issue from yesterday
[15:21:40 CEST] <taliho> JEEB: In case you are interested, this patch solves it for me: http://ffmpeg.org/pipermail/ffmpeg-devel/2015-May/173594.html
[15:25:41 CEST] <JEEB> taliho: hmm, I'll test that out
[15:25:56 CEST] <JEEB> thanks for mentioning it, I will try to ping it if it works for me too
[15:26:00 CEST] <cehoyos> The patch breaks S302M decoding afaict
[15:26:14 CEST] <JEEB> ah
[15:26:50 CEST] <JEEB> so, like with various other formats, we need to figure out the more specific things to check
[15:27:22 CEST] <JEEB> either for S302M or this stuff
[15:27:35 CEST] <taliho> yes, I followed the messages. S302M has a private stream that gets probed and set as an audio codec
[15:29:18 CEST] <taliho> Michael proposed a patch that was merged, but it doesn't work for me because it still requires probing the private stream
[15:30:24 CEST] <taliho> this is Michael's patch: https://ffmpeg.org/pipermail/ffmpeg-devel/2015-June/174035.html
[15:31:19 CEST] <taliho> for me this doesn't work because the private stream is extremely low-rate serialized protobuf data, so it doesn't make sense to probe it
[15:38:42 CEST] <JEEB> yea, probing keeps going until it has enough data
[15:38:48 CEST] <JEEB> and with low bit rate streams that can be a minute or more
[15:45:53 CEST] <taliho> JEEB: my protobuf stream is ~50 bits every 10 seconds. I'd have to wait days to get enough data for probing to exit :P
[15:51:55 CEST] <cehoyos> taliho: Did you play with analyzeduration and probesize?
[15:54:02 CEST] <JEEB> I think I played with those and probesize might have helped, but it would then of course be such a small amount of data that it wouldn't be getting much info on the other stuff
[15:54:16 CEST] <JEEB> analyzeduration calculates within the probesize
[15:54:29 CEST] <JEEB> if I find time to probe the radio streams I have access to I'll verify
[16:00:54 CEST] <taliho> cehoyos: analyzeduration and probesize only help when the demuxer is initialized in avformat_find_stream_info()
[16:02:14 CEST] <taliho> for me the issue is after initialization in ff_read_packet
[16:02:49 CEST] <taliho> cehoyos: there is a hard-coded parameter here: #define MAX_PROBE_PACKETS 2500
[16:03:21 CEST] <cehoyos> maybe this should be user-settable?
[16:04:57 CEST] <taliho> that's true, or I was thinking of adding an option to disable probing of a private stream
[19:19:53 CEST] <fling> Is there an audio filter for denoise by a pattern?
[19:20:12 CEST] <fling> So I feed it with a pattern and it removes the noise from a stream
[20:57:21 CEST] <inna> hey
[20:57:43 CEST] <inna> can i cut out a piece from an mp4 without re-encoding?
[20:57:52 CEST] <DHE> yes, but your cut must start on a keyframe
[20:57:52 CEST] <inna> as in -v:copy
[20:57:56 CEST] <DHE> -c:v copy
[20:58:11 CEST] <inna> cool, thx
[20:59:43 CEST] <inna> can ffmpeg catch streams?
[21:20:49 CEST] <DHE> huh?
[22:19:22 CEST] <inna> never mind
[22:22:04 CEST] <arinov> inna: save streamed video into file?
[22:22:36 CEST] <inna> saving multiple .ts in an .m3u8
[22:22:54 CEST] <arinov> dunno
[00:00:00 CEST] --- Fri Sep 27 2019


More information about the Ffmpeg-devel-irc mailing list