[Ffmpeg-devel-irc] ffmpeg.log.20180722

burek burek021 at gmail.com
Mon Jul 23 03:05:02 EEST 2018


[00:39:11 CEST] <cc0> hello, i'm trying to trim a video clip to only get the first half of it. so i want to do a trim filter but the trim duration must be stream_length/2. is there a variable for the length of the stream or do i need to precompute it and insert it into an ffmpeg command with a shell script
[00:49:42 CEST] <BtbN> There is no reliable way to get the duration other than going over the whole thing. So some pre-computation is required
[00:54:57 CEST] <furq> you can pull it from the container's duration field with ffprobe
[00:55:13 CEST] <furq> but the actual stream generally has no business knowing that
[00:55:39 CEST] <furq> also obviously this is assuming you trust that the stream is close enough to being the longest one in the container
[00:55:50 CEST] <furq> but it normally is
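The approach furq describes (read the container duration with ffprobe, precompute half of it, pass it to ffmpeg) might look like this sketch. It assumes ffprobe/ffmpeg are on PATH and that the container's duration field is trustworthy; file names are placeholders.

```shell
# Halve a duration (in seconds) with awk, since shell arithmetic is integer-only.
half_of() {
    awk -v d="$1" 'BEGIN { printf "%.3f", d / 2 }'
}

trim_first_half() {
    in=$1; out=$2
    # Pull the container-level duration with ffprobe.
    dur=$(ffprobe -v error -show_entries format=duration \
          -of default=noprint_wrappers=1:nokey=1 "$in")
    # -t caps the output duration; -c copy avoids re-encoding but cuts on
    # the nearest keyframe rather than the exact midpoint.
    ffmpeg -i "$in" -t "$(half_of "$dur")" -c copy "$out"
}

# Usage: trim_first_half input.mp4 first_half.mp4
```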
[02:36:04 CEST] <Shibe> what might EAGAIN mean for avcodec_receive_packet?
[02:36:11 CEST] <Shibe> could it possibly mean that the encoder isn't initialized properly?
[02:36:58 CEST] <cc0> thanks
[03:17:15 CEST] <DHE> Shibe: it means the encoder wants more input first
[03:17:26 CEST] <DHE> encoders don't necessarily guarantee a one in, one out pattern
[03:23:55 CEST] <SpeakerToMeat> If I have 2 streams with mono audio channels and I want a single stream out with a stereo channel, is there no way to do this without reencoding? other than outputting the two channels via stream copy to separate files, merging them with sox and using that as an input?
[03:26:30 CEST] <furq> uh
[03:26:36 CEST] <furq> that would still involve reencoding
[03:29:57 CEST] <SpeakerToMeat> hm
[03:31:21 CEST] <DHE> unless there's some codec-specific app that supports it, but no ffmpeg can't do that
[03:31:29 CEST] <SpeakerToMeat> nod
[03:31:46 CEST] <SpeakerToMeat> Is there any shortcut to tell ffmpeg "use the same audio codec as the source while doing this"?
[03:31:52 CEST] <furq> i don't know of any lossy codec that'll let you do that
[03:31:56 CEST] <furq> and no there isn't
[03:32:12 CEST] <SpeakerToMeat> Btw the source codec is lossless.
[03:32:14 CEST] <SpeakerToMeat> pcm
[03:32:33 CEST] <furq> oh
[03:32:43 CEST] <furq> well it's not really reencoding then
[03:32:58 CEST] <furq> there probably is a way to do that with pcm but who cares
[03:33:00 CEST] <furq> it'll come out the same either way
[03:33:38 CEST] <SpeakerToMeat> Yeah
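Since the sources here are PCM, the "re-encode" is bit-transparent, as furq points out. A hedged sketch using the join filter (amerge would also work); file names and the pcm_s16le choice are placeholders:

```shell
join_stereo() {
    # $1 = left mono input, $2 = right mono input, $3 = stereo output.
    # join maps the inputs, in order, onto the channels of a stereo layout.
    ffmpeg -i "$1" -i "$2" \
        -filter_complex '[0:a][1:a]join=inputs=2:channel_layout=stereo[a]' \
        -map '[a]' -c:a pcm_s16le "$3"
}

# Usage: join_stereo left.wav right.wav stereo.wav
```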
[04:17:20 CEST] <SpeakerToMeat> I wonder where... Jean-Baptiste found the fourcc xd5b for XDCAM HD422 1080i60 50Mb/s
[04:17:27 CEST] <SpeakerToMeat> I can't find it in any database online :/
[04:17:35 CEST] <SpeakerToMeat> Yet, mediainfo seems to concur
[04:32:12 CEST] <SpeakerToMeat> I also wonder how whoever implemented mxf support on ff didn't go insane.
[04:32:27 CEST] <SpeakerToMeat> The only thing that'd scare me to implement as much as mxf is probably dicom
[07:52:43 CEST] <boblamont> what's "unsafe" about  '/home/path/to/files/01_0_song-stereo.mp3'?
[09:22:19 CEST] <TekuConcept> ffmpeg 4.0: What is the replacement for AVCodecContext::gop_size? AVCodecParameters does not seem to hold something similar.
[10:27:15 CEST] <th3_v0ice> There is still a field gop_size in AVCodecContext.
[10:34:28 CEST] <Wuzzy> hey. i've seen AVCodec->codec is deprecated and must be replaced by AVCodec->codecpar
[10:34:59 CEST] <Wuzzy> but i see that codecpar is a different type (AVCodecParameters instead of AVCodecContext)
[10:35:44 CEST] <Wuzzy> so by simply replacing all ->codec with ->codecpar i have not fixed the problem. a ton of code breaks :(
[10:35:58 CEST] <Wuzzy> so .. any instructions on how to move from ->codec to ->codecpar?
[10:36:15 CEST] <Wuzzy> i have looked around but found nothing
[10:37:08 CEST] <Wuzzy> what makes matters difficult is that AVCodecContext and AVCodecParameters have very different structures with different names. and it's not obvious to find the "equivalent" fields (if they even exist) :(
[10:37:29 CEST] <Wuzzy> looks like I'm stuck with deprecated code then :(
[10:38:11 CEST] <Wuzzy> oh i am talking of libavcodec, of course.
[10:49:08 CEST] <JEEB> Wuzzy: you can take a look at the transcoding example from doc/examples
[10:50:53 CEST] <JEEB> codecpar is what you use to transfer values from/to libavformat as far as I can see.
[10:51:39 CEST] <JEEB> you still need an avcodeccontext for decoding and encoding, but basically re-use of the stuff you get from libavformat was stopped by that
[10:52:59 CEST] <Wuzzy> yeah. still not sure how to replace that ->codec. documentation says "use ->codecpar" instead but i know its not that simple
[10:53:08 CEST] <Wuzzy> i look at the example now. not sure if it helps...
[10:53:21 CEST] <Wuzzy> thanks anyway
[10:53:44 CEST] <JEEB> avcodec_parameters_to_context and avcodec_parameters_from_context come to mind :P
[10:54:21 CEST] <JEEB> which I could notice from the example without further context
[10:55:12 CEST] <Wuzzy> oh so context and parameters are interchangeable? nice
[10:55:35 CEST] <JEEB> not really, it's the view you get into the codec parameters from lavf
[10:56:50 CEST] <JEEB> so when you initialize your decoder based on the video/audio/subtitle format you can then copy various values from the lavf codecpar with the _to_context() thing
[10:57:27 CEST] <JEEB> and when initializing an output stream you can initialize the stream's values with from_context from your encoder context
[10:58:31 CEST] <Wuzzy> mhm
[11:00:55 CEST] <Wuzzy> ah
[11:01:10 CEST] <Wuzzy> from the example it looks like now i am supposed to get the context with avcodec_alloc_context3?
[11:01:38 CEST] <Wuzzy> but this would mean the deprecation comment is wrong... weird
[11:01:41 CEST] <JEEB> yes, the API user is not supposed to be taking an avcodeccontext from the libavformat
[11:01:55 CEST] <JEEB> Wuzzy: I think what got deprecated was the AVCodecContext in libavformat
[11:02:00 CEST] <JEEB> from the external view
[11:02:04 CEST] <JEEB> because people would reuse it
[11:02:24 CEST] <JEEB> while it's not the user's thing, it's a thing that belongs to libavformat
[11:04:00 CEST] <Wuzzy> ??? AVCodecContext is documented as a normal struct
[11:04:12 CEST] <Wuzzy> "main external API structure." ???!?!?!!
[11:04:18 CEST] <Wuzzy> https://ffmpeg.org/doxygen/trunk/structAVCodecContext.html
[11:04:26 CEST] <JEEB> that's for AVCodec
[11:04:38 CEST] <JEEB> and of course you need an AVCodecContext for decoding or encoding
[11:05:31 CEST] <JEEB> now the issue that AVCodecParameters fixes is that the internal usage of libavcodec within libavformat was leaking to the API user :P
[11:05:35 CEST] <Wuzzy> i see
[11:05:51 CEST] <JEEB> and people indeed were re-using AVCodecContexts from libavformat streams
[11:06:33 CEST] <JEEB> even though those were owned by libavformat and the API user really didn't have any control on their life time
[11:06:42 CEST] <JEEB> or state
[11:09:12 CEST] <Wuzzy> btw why is there a "3" in "avcodec_alloc_context3"? :D
[11:09:28 CEST] <JEEB> because it's the third version of the function
[11:09:41 CEST] <JEEB> and while things could be renamed back, generally that isn't done
[11:10:08 CEST] <Wuzzy> interesting strategy to deprecate things
[11:10:21 CEST] <JEEB> yes, when your thing gets removed it is just removed
[11:11:07 CEST] <JEEB> but yea, the thing you noticed being deprecated is http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavformat/avformat.h;h=fdaffa5bf41b6ed83fa4f7acebcf04ed796296fd;hb=HEAD#l873
[11:11:12 CEST] <JEEB> which is specifically the one in AVStream .)
[11:11:35 CEST] <JEEB> AVCodecContext itself, which is part of the API (in avcodec)
[11:11:44 CEST] <JEEB> is not deprecated
[11:12:09 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/APIchanges;h=efe15ba4e00ea924a6ec3c154dd50414af3f30bd;hb=HEAD
[11:12:28 CEST] <Wuzzy> good to know
[11:12:32 CEST] <JEEB> also this is the document that is supposed to document the API changes
[11:12:55 CEST] <Wuzzy> ha. so replacing the ->codec with avcodec_alloc_context3 seemed to work.
[11:13:05 CEST] <Wuzzy> but then i'd say the documentation is clearly wrong
[11:13:33 CEST] <Wuzzy> you cannot just replace all calls of (some AVCodec)->codec with (some AVCodec)->codecpar
[11:14:21 CEST] <JEEB> uhh no, that was specifically for the AVStream AVCodecContext and the comment is in libavformat/avformat.h
[11:15:01 CEST] <Wuzzy> um... oops?
[11:15:24 CEST] <JEEB> basically what it means is that for whatever you're trying to use in this AVStream's AVCodecContext, you should be looking at the AVCodecParameters
[11:16:34 CEST] <Wuzzy> :'(
[11:16:45 CEST] <JEEB> http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/APIchanges;h=efe15ba4e00ea924a6ec3c154dd50414af3f30bd;hb=HEAD#l523
[11:17:11 CEST] <JEEB> also man
[11:17:19 CEST] <JEEB> the "new" decoding API is now two years old
[11:17:22 CEST] <JEEB> time sure flies :V
[11:17:35 CEST] <Wuzzy> :D
[11:18:00 CEST] <Wuzzy> yeah i am looking at code that is very old indeed
[11:18:21 CEST] <Wuzzy> that's why i decided to dust it off a little ;)
[11:18:40 CEST] <JEEB> but yes, the basic idea is that after you open your lavf context you have yer streams, and you can create your own little AVCodecContexts based on the codec id in codecpar
[11:19:00 CEST] <JEEB> and then feed it any of the data that libavformat had in the codecpar
[11:19:45 CEST] <JEEB> and then the "new" decoding API is documented in https://www.ffmpeg.org/doxygen/trunk/group__lavc__encdec.html
[11:19:57 CEST] <JEEB> I like the concept of separating feeding and receival
[11:20:19 CEST] <Wuzzy> huh? so if i just use the alloc3 function, it's not correct?
[11:22:44 CEST] <JEEB> it is correct, a new initialized avcodeccontext
[11:23:11 CEST] <Wuzzy> hmmm
[11:23:22 CEST] <JEEB> generally people do copy the properties from the avstream's codecpar, though. because some decoders require stuff to be set before they can decode properly
[11:23:24 CEST] <Wuzzy> first avcodec_alloc_context3, then avcodec_parameters_to_context?
[11:23:29 CEST] <JEEB> which is why all of the examples do that
[11:23:35 CEST] <JEEB> yes
[11:23:41 CEST] <JEEB> that's the general idea
[11:23:47 CEST] <Wuzzy> ah. now everything makes sense
[11:24:23 CEST] <JEEB> although with things where the decoder 100% initializes itself by itself like H.264 in MPEG-TS or AAC/AC-3/MPEG-1 Layer 2 audio
[11:24:28 CEST] <JEEB> that's not really needed
[11:25:00 CEST] <JEEB> but I guess it's good practice because then the decoder's parameters match what the lavf context figured out, if anything
[11:29:00 CEST] <Wuzzy> thanks for the help, i think i now know how to proceed
[11:35:17 CEST] <JEEB> np
[13:17:50 CEST] <squirrel> i want to convert all audio tracks in an mkv to mp3; i do ffmpeg -i file.mkv -acodec mp3 -vcodec copy out.mkv, this works but somehow the first audio track is gone
[13:18:37 CEST] <squirrel> why does this happen?
[15:31:53 CEST] <DHE> squirrel: you'll have to do some minimal scripting to identify files with multiple audio tracks and output to multiple .mp3 files. ffmpeg by itself without further information can only process 1 audio track to MP3
[15:33:14 CEST] <DHE> though it mainly just comes down to counting them and substituting a number into the commandline
[15:35:43 CEST] <squirrel> well i did (by memory) -map 0:a -map 0:v -map 0:s
[15:36:11 CEST] <squirrel> that seems to have added all tracks, including subtitles, to the output file
[15:36:37 CEST] <DHE> oh wow I misunderstood that. you just wanted an mp3 codec but still inside an mkv
[15:36:47 CEST] <DHE> I believe you can summarize as just -map 0
[15:40:07 CEST] <squirrel> oh
[15:40:11 CEST] <squirrel> nice thanks
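DHE's -map 0 suggestion as a sketch: -map 0 selects every stream of the input (without it, ffmpeg keeps only one stream per type, which is why the first audio track vanished). File names are placeholders, and libmp3lame is assumed to be the available MP3 encoder.

```shell
remux_audio_to_mp3() {
    # $1 = input mkv, $2 = output mkv.
    # -map 0 keeps all streams; -c copy passes them through untouched,
    # and the later -c:a overrides that for audio streams only.
    ffmpeg -i "$1" -map 0 -c copy -c:a libmp3lame "$2"
}

# Usage: remux_audio_to_mp3 file.mkv out.mkv
```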
[18:48:40 CEST] <^Neo> Good afternoon, if I have an AVPacket with data, but want to run it through another demuxer how would I go about doing that? Would I need to modify the AVFormatContext's AVIOContext?
[18:51:02 CEST] <DHE> ^Neo: like, recycle an existing AVPacket?
[18:52:17 CEST] <JEEB> ^Neo: you'd have to feed it to a custom avio thing
[18:52:54 CEST] <^Neo> I'm trying to figure out my ALSA IEC61937 stuff...
[18:53:22 CEST] <^Neo> so I want to take my AVPacket and instead of extracting it, feed it back through to the wavdemuxer
[18:54:13 CEST] <^Neo> so like ALSA AVFormatContext -> WAV AVFormatContext -> AC3/EAC3
[18:54:25 CEST] <JEEB> uhh
[18:54:38 CEST] <JEEB> can't you just use the spdif check function in the device thing?
[18:54:51 CEST] <JEEB> and do the same thing as what the wav demuxer is
[18:56:33 CEST] <^Neo> It's a static ff_* function I thought
[18:57:16 CEST] <JEEB> it wasn't static since another libavformat thing used it
[18:57:22 CEST] <JEEB> but I'd say it's a FFmpeg internal thing
[18:57:42 CEST] <JEEB> ff_* means it's internal to FFmpeg
[18:57:48 CEST] <JEEB> so you would have to improve the input thing you're using
[18:57:52 CEST] <JEEB> not use it in your API client
[18:58:23 CEST] <^Neo> right
[19:01:56 CEST] <^Neo> Right, so I thought taking the packet from the ALSA format and passing it into the WAV format would get me what I wanted
[19:03:02 CEST] <^Neo> Otherwise I'm just dumping the AVPacket data and parsing it myself
[19:03:36 CEST] <JEEB> well sure you can do it in a really awkward way with the wav thing
[19:03:49 CEST] <JEEB> I just thought it'd be more productive to fixup the audio input module
[19:04:12 CEST] <^Neo> Oh, like add it directly into alsa?
[19:05:31 CEST] <^Neo> hmm
[19:05:33 CEST] <^Neo> I see
[19:51:45 CEST] <ntd> why was ffserver deprecated? is there some decent  alternative except vlc*?
[19:54:19 CEST] <DHE> nginx-rtmp gets a lot of popular use
[19:54:45 CEST] <DHE> I'm also a fan of using HLS or DASH, plus the static HTTP server of your choice (apache is fine)
[19:59:20 CEST] <ntd> ok, all i'm trying to do is have a local server pull *one* stream from each remote/wan source, then make it available locally to N clients
[19:59:43 CEST] <ntd> preferably with as little latency as possible and using -vcodec copy to reduce overhead
[20:00:33 CEST] <ntd> right now the N clients are pulling their own separate streams of the same video, wasting bw
[20:11:25 CEST] <ntd> i'm using apache and several clients won't do r*t*mp
[20:11:53 CEST] <ntd> https://serverfault.com/questions/844603/stream-rtsp-from-ipcam-without-re-encoding
[20:12:26 CEST] <ntd> looks like this guy had the same idea/problem, no answers. -fflags +genpts perhaps?
[20:14:26 CEST] <ntd> but i don't really wanna do HLS at all. rtsp over tcp554 in, rtsp over tcp554 out
[20:18:59 CEST] <ntd> DHE, any examples re the hls/dash stuff you mentioned?
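One way to realize the HLS route DHE mentioned: pull the RTSP source once, pass the H.264 through untouched, and let any static HTTP server (apache is fine) fan it out to N clients. Segment length and list size below are illustrative values, not recommendations; note HLS adds latency on the order of a few segment durations.

```shell
rtsp_to_hls() {
    # $1 = rtsp://... source, $2 = directory served by a static HTTP server.
    # -c copy avoids re-encoding; delete_segments keeps the directory
    # from growing without bound.
    ffmpeg -rtsp_transport tcp -i "$1" -c copy -f hls \
        -hls_time 2 -hls_list_size 5 -hls_flags delete_segments \
        "$2/stream.m3u8"
}

# Usage: rtsp_to_hls rtsp://camera/stream /var/www/html/cam1
```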
[20:33:48 CEST] <killown> what does that mean Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 360x638, 306 kb/s): unspecified pixel format
[20:34:20 CEST] <killown> Consider increasing the value for the 'analyzeduration' and 'probesize' options
[20:34:32 CEST] <killown> Too many packets buffered for output stream 0:1.
[20:34:34 CEST] <killown> [aac @ 0x55a09d33fc20] Qavg: 2637.849
[20:34:36 CEST] <killown> [aac @ 0x55a09d33fc20] 2 frames left in the queue on closing
[20:34:40 CEST] <killown> Conversion failed!
[20:36:31 CEST] <JEEB> killown: it couldn't initialize the decoder
[20:36:42 CEST] <JEEB> as in, the initialization data was not found
[20:39:57 CEST] <killown> JEEB, that's what I am doing http://wpbin.io/3p6vvz
[20:40:01 CEST] <killown> the video is playable
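Following the hint in the log's own error message, raising the probing limits gives the demuxer more data to determine the pixel format before it gives up. The 100M values below are illustrative (analyzeduration is in microseconds, probesize in bytes), not tuned recommendations.

```shell
probe_harder() {
    # $1 = input file; remaining args are passed through to ffmpeg.
    in=$1; shift
    ffmpeg -analyzeduration 100M -probesize 100M -i "$in" "$@"
}

# Usage: probe_harder broken.mp4 -c:v libx264 -c:a aac out.mp4
```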
[21:02:28 CEST] <ntd> ok, i've got it working with ffmpeg -> ffserver http
[21:03:40 CEST] <ntd> but this can't possibly be a unique case, is there really nothing that'll do h264 over rtsp in -> passthrough h264 over rtsp out?
[22:01:50 CEST] <ntd> could this be done with netfilter/iptables? if local recorder A is pulling an rtsp stream from remote source B,C,D, can iptables be used to "mirror" the rx data at a local port, making the video stream available to local recorder E and F without having to pull sep streams?
[23:11:15 CEST] <killown> can anyone help me?
[23:11:16 CEST] <killown> JEEB, ?
[23:11:38 CEST] <JEEB> it's past midnight and I'm having my beer
[23:11:58 CEST] <JEEB> I'm only going to continue helping people on channels that I've already started and show middle finger to everything else. sorry.
[00:00:00 CEST] --- Mon Jul 23 2018

