[Ffmpeg-devel-irc] ffmpeg.log.20191121

burek burek at teamnet.rs
Fri Nov 22 03:05:04 EET 2019


[00:00:33 CET] <Atlenohen> it ruins games as well, i mean it's the gaming community and industry that pushed for 60fps so you all got that, but 420 still a big issue
[00:00:44 CET] <Atlenohen> chroma subsampling*
[00:01:11 CET] <klaxa> isn't it mostly the color range and less the subsampling?
[00:01:16 CET] <friendofafriend> I'm looking around for an example of someone using ffmpeg to stream h264 in an mp4 container to Icecast.  mpeg4 in an mp4 container works fine, h264 in an MPEG-TS does too.
[00:01:24 CET] <furq> subsampling sucks for games that have text on screen
[00:01:26 CET] <furq> that's about it
[00:01:29 CET] <klaxa> ah true
[00:02:01 CET] <klaxa> well, it's not like having to encode sharp edges makes it any easier
[00:03:30 CET] <GuiToris> does anyone have some good advice on how I could make the video a little bit smaller?
[00:03:44 CET] <GuiToris> is it because the source file is huge?
[00:03:49 CET] <klaxa> higher crf, slower preset
[00:03:49 CET] <GuiToris> ~100G
[00:03:49 CET] <Atlenohen> Ah it's not just about text, it makes all the textures look like crap, and makes distant objects invisible
[00:03:57 CET] <klaxa> that's about the knobs you can easily turn
[00:03:58 CET] <GuiToris> klaxa, 29 and veryslow currently
[00:04:12 CET] <GuiToris> +x265
[00:05:36 CET] <klaxa> also it depends on the information content of the input, not its size on disk
[00:05:42 CET] <furq> 500MB for 22 minutes isn't that much
[00:05:43 CET] <klaxa> what the output size will be
[00:05:46 CET] <klaxa> that too
[00:05:52 CET] <furq> i mean a film would be about 4x that length and 2GB would be very small
[00:05:53 CET] <klaxa> in 1080p no less
[00:06:25 CET] <klaxa> although i have seen files in the 100-200 mb range for 45-minute shows in 720p, but there are a lot of artifacts
[00:06:47 CET] <furq> i've seen that without artifacts but it's been prefiltered to hell
[00:07:05 CET] <furq> GuiToris: maybe upload one of the pngs
[00:11:33 CET] <GuiToris> furq, https://www.dropbox.com/s/9yfjp7iynr34meb/rcseq_21040.png?dl=0
[00:12:07 CET] <GuiToris> 30000/1001 framerate
[00:13:58 CET] <GuiToris> x265 doesn't have many tune options so I didn't use any
[00:16:16 CET] <GuiToris> https://bpaste.net/raw/TCVDA
[00:21:59 CET] <GuiToris> furq, does it have the right file size? aren't there any more sane options?
[00:22:51 CET] <GuiToris> I've also created one with crf 28 and it's 553mb big
[00:23:41 CET] <GuiToris> since it takes ~40 hours to create a video, I haven't tried a lot
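The knobs discussed above (CRF and preset), in one sketch; the input pattern and exact values here are illustrative, GuiToris's actual command is in the pastebin:

```shell
# Smaller file: raise CRF (higher = smaller but worse quality) and/or
# use a slower preset. 30000/1001 fps PNG sequence in, x265 out.
ffmpeg -framerate 30000/1001 -i rcseq_%05d.png \
  -c:v libx265 -crf 28 -preset veryslow -pix_fmt yuv420p out.mkv
```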
[00:42:22 CET] <petecouture> Is it possible within ffmpeg to achieve ultra low latency fragmented mp4s for near live streaming quickly? I found this project that uses the ffmpeg libraries but I would like to keep it within my ffmpeg stack. https://github.com/horgh/videostreamer
[00:42:51 CET] <petecouture> Goal being to achieve IP Camera to html5 playback.
[00:43:17 CET] <BtbN> Any kind of fragmented stuff is always gonna have delay, or look horrible because of ultra short gops
[00:43:40 CET] <petecouture> Ive been watching this demo for a while. http://umediaserver.net/umediaserver/demohtml5MSEplayer.html
[00:44:07 CET] <petecouture> Haven't noticed any major quality issues
[00:44:23 CET] <BtbN> it's 99% static though
[00:44:33 CET] <petecouture> Most IP cameras are
[00:44:43 CET] <BtbN> and has at least several seconds long GOPs
[00:44:45 CET] <petecouture> this is a security camera situation, not motion picture distribution
[00:45:35 CET] <BtbN> If less than 5~10 seconds of delay counts for your "ultra low latency", it's probably gonna be fine
[00:45:39 CET] <petecouture> How can you tell? Its using websockets to tunnel the data into a MSE
[00:46:03 CET] <petecouture> I could do that within HLS or something
[00:46:07 CET] <BtbN> Because I highly doubt a webcam encoder would generate short GOPs
[00:46:25 CET] <JEEB> BtbN: all of the low latency HLS/DASH things have moved to chunks that are fragments which don't start a GOP
[00:46:36 CET] <JEEB> that way the GOPs can still be longer for compression
[00:46:43 CET] <petecouture> ^ I was confused about him saying that.
[00:46:56 CET] <BtbN> and you just see garbled stuff if you join at the wrong time?
[00:47:15 CET] <BtbN> At least Twitch uses 2 second GOPs, which already hurt the quality quite a bit, and thus manage to get delay down to ~2 seconds.
[00:47:17 CET] <JEEB> or nothing, or start at the closest GOP
[00:47:29 CET] <petecouture> closest GOP was my experience
[00:47:33 CET] <BtbN> But they do use 2 second long segments.
[00:47:48 CET] <JEEB> Apple recommends 300ms chunks or so
[00:47:51 CET] <petecouture> frags can go down to 30ms chunks
[00:47:52 CET] <JEEB> with N second GOPs
[00:47:52 CET] <petecouture> I thought
[00:48:05 CET] <JEEB> fragmentation can be as often as you want
[00:48:16 CET] <JEEB> so it depends on your frame rate if you fragment at each video sample f.ex.
[00:48:17 CET] <BtbN> 30ms chunks sound stupid. Half of them will be empty
[00:48:32 CET] <petecouture> I dont think web MSE can handle lower than 30ms
[00:48:59 CET] <petecouture> JEEB: So no issues with fragmenting within FFMPEG?
[00:49:03 CET] <BtbN> MSE does not care about that. YOU do that stuff. In JavaScript
[00:49:09 CET] <BtbN> MSE just wants mp4
[00:49:31 CET] <JEEB> petecouture: nope? you can either send fragmentation signals through the API or utilize any of those fragmentation related APIs
[00:49:37 CET] <BtbN> petecouture, you will have to write custom software for all this for sure.
[00:49:44 CET] <BtbN> FFmpeg can't really output via a websocket
[00:49:53 CET] <JEEB> s/APIs/AVOptions/
[00:50:00 CET] <petecouture> oh no
[00:50:01 CET] <JEEB> websockets :s
[00:50:06 CET] <petecouture> I'm not thinking of it handling websockets.
[00:50:12 CET] <petecouture> I'll pipe the frags myself
[00:50:21 CET] <JEEB> because XHR cannot into continuous connections without growing buffers?
[00:50:25 CET] <JEEB> which is sad :P
[00:50:56 CET] <petecouture> Just wanted to know if ffmpeg can do it. I haven't found any reference to low-latency RTSP to frag MP4 within FFmpeg, only its source libraries
[00:51:17 CET] <BtbN> you're always gonna get a few seconds of delay, so be aware of that.
[00:51:24 CET] <BtbN> there's just too many buffers involved
[00:51:29 CET] <petecouture> wat
[00:51:47 CET] <petecouture> My understanding is this solution is < 1 sec
[00:52:16 CET] <BtbN> If you really want near-realtime streaming, you will need custom software through the entire stack pretty much. FFmpeg is designed for reliability, not for zero latency. So there are buffers.
[00:52:30 CET] <petecouture> gotcha thank you
[00:52:30 CET] <BtbN> You can get it relatively low, but there is gonna be some buffers.
[00:52:35 CET] <JEEB> quite a bit of those can be optimized but definitely not by default
[00:52:45 CET] <JEEB> and if it's just remux and fragmentation with movenc
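A minimal remux-to-fragmented-MP4 sketch along those lines (file names are hypothetical; a camera source would be an rtsp:// URL instead of in.mp4):

```shell
# Remux without re-encoding into fragmented MP4 (the container MSE wants).
# frag_keyframe cuts a new fragment at every keyframe; empty_moov and
# default_base_moof make the result a streamable/MSE-friendly fMP4.
ffmpeg -i in.mp4 -c copy \
  -movflags +frag_keyframe+empty_moov+default_base_moof \
  -f mp4 frag.mp4
```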
[00:52:53 CET] <petecouture> That explains the source libraries being used
[00:53:09 CET] <JEEB> well ffmpeg.c itself is also just an API client
[00:53:12 CET] <BtbN> But really, what are you expecting that you care about 2~3 seconds of delay?
[00:53:53 CET] <petecouture> I suppose I could push back on the < 1 requirement
[00:53:58 CET] <petecouture> since its not live chat
[00:54:21 CET] <BtbN> The only thing that really needs this kind of zero latency is stuff like remote-gaming
[00:54:27 CET] <BtbN> and even that fails horribly 99% of the time
[00:54:53 CET] <petecouture> hah
[00:56:30 CET] <petecouture> cool product says 2-3 second delay is fine
[00:56:42 CET] <petecouture> Thank you guys!
[01:50:47 CET] <friendofafriend> That videostreamer project is pretty neat.  Couldn't you invoke ffmpeg from the command line to the same effect?
[01:53:02 CET] <klaxa> i think you would need a webserver
[01:54:29 CET] <klaxa> and some minimal html like in the repo
[01:54:59 CET] <friendofafriend> I've been trying to do basically the same thing. I've got some h264 stream and I'd like it in an mp4 container to play with HTML5.
[01:58:47 CET] <furq> friendofafriend: just use hls, it's less hassle
[02:01:54 CET] <friendofafriend> furq: Lag seems a little long.  I'm not worried about a few seconds, but 30 seems a bit much.
[02:03:07 CET] <TheAMM> Neat, people talking about a thing that I'm implementing as well
[02:03:34 CET] <TheAMM> Don't have much to comment on though, repo link was nice to see
[02:04:19 CET] <friendofafriend> I'm not really sure why trying the "go get" line doesn't work.  Wish there was a Makefile like normal.
[02:04:24 CET] <TheAMM> I've got extra goals though and will de/mux NUT too which will be f u n
[02:08:25 CET] <Retal> guys please remind me, i forgot how to list all available codecs for current build?
[02:08:51 CET] <klaxa> ffmpeg -codecs ?
[02:09:54 CET] <Retal> klaxa: yes!, thanks
[02:11:32 CET] <klaxa> for more detailed information about a single encoder use ffmpeg -help encoder=something, e.g. ffmpeg -help encoder=libx264
[02:26:31 CET] <void09> found this command to check a file for errors: ffmpeg -v error -i "outputvideo.mkv" -f null - 2>test.log ; but how can I make it so it shows the timestamps at which the errors are found and not 0xabc..
[05:59:11 CET] <drazil100> Does anyone know how to add an additional audio track to a video WITHOUT replacing any existing audio tracks
[06:00:45 CET] <drazil100> specific use case is I have a movie that has 2 dubs and I would like to put both dubs in a single file
[06:01:52 CET] <drazil100> I have REPLACED audio before but I have never added additional tracks
[06:02:13 CET] <furq> drazil100: -map 0 -map 1:a
[06:03:43 CET] <drazil100> what if the file has like multiple different tracks already
[06:04:13 CET] <furq> https://www.ffmpeg.org/ffmpeg.html#Advanced-options
[06:04:29 CET] <drazil100> thank you
[06:04:55 CET] <drazil100> that helps a lot
[06:17:25 CET] <drazil100> Additional question.
[06:17:33 CET] <drazil100> how are these tracks named?
[06:17:49 CET] <drazil100> is it just based on the file name?
[06:18:13 CET] <void09> how can i find out the timestamp for this ? [h264 @ 0x557ff5a2da40] left block unavailable for requested intra4x4 mode -1
[06:21:48 CET] <furq> drazil100: how do you mean named
[06:22:04 CET] <furq> if you mean which file 0 refers to then it's the order you provided them in
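In command form, the mapping furq suggests might look like this (file names and metadata values here are hypothetical, not from the discussion):

```shell
# -map 0 keeps every stream of the first input (video + existing audio);
# -map 1:a appends the audio stream(s) of the second input;
# -c copy remuxes without re-encoding.
# The -metadata:s:a:1 options label the newly added (second) audio track,
# which is how players "name" tracks in the selection menu.
ffmpeg -i movie.mkv -i dub.mka -map 0 -map 1:a -c copy \
  -metadata:s:a:1 language=eng -metadata:s:a:1 title="Second dub" \
  out.mkv
```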
[06:26:22 CET] <void09> furq please help, i've been googling for half an hour now :(
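One workaround for void09's question (a sketch, not a dedicated feature): decode through the showinfo filter, which logs pts_time for every frame, so decoder error messages end up interleaved between showinfo lines whose timestamps bracket the damaged region.

```shell
# showinfo prints one log line per decoded frame, including pts_time;
# decoder error messages appear interleaved in the same log, so the
# surrounding pts_time values bracket where the error occurred.
ffmpeg -i outputvideo.mkv -vf showinfo -f null - 2>test.log
grep -n "pts_time" test.log | head
```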
[09:41:16 CET] <drazil100> What does it mean if it says "Starting new cluster due to timestamp"
[09:42:41 CET] <drazil100> also most of the audio is muted
[09:42:53 CET] <drazil100> in the output
[09:44:33 CET] <drazil100> still trying to add an audio track to an existing video without overwriting existing audio
[09:44:56 CET] <drazil100> but it keeps ending up muted
[10:07:19 CET] <Weasel_> I am trying to get two different h264 rtsp streams from ip cam into mpeg-dash and hlv files for adaptive streaming, it is kind of working, but first input stream is always damaged. Is this supposed to work or am I trying the impossible?
[10:08:09 CET] <Weasel_> hlv=hls
[10:12:50 CET] <BeerLover> I'm trying to convert a song into hls. I am using this: https://dpaste.de/VMMQ. What is wrong with the command?
[10:25:25 CET] <cehoyos> What could be wrong with it?
[10:25:31 CET] <cehoyos> (Complete, uncut console output missing.)
[10:27:22 CET] <BeerLover> @cehoyos https://dpaste.de/bFRv
[10:28:31 CET] <Weasel_> BeerLover: it says that dir 320k does not exist
[10:28:36 CET] <BeerLover> i want to create 4 different bitrate hls playlist with master playlist
[10:30:16 CET] <BeerLover> https://dpaste.de/WvYV this works but it creates 0/segment.. 1/segment... 2/segment... etc
[10:30:36 CET] <BeerLover> Can i specify such that I get 320k/segment... 128k/segment... etc
[10:30:42 CET] <BeerLover> folders based on bitrates
[10:30:54 CET] <BeerLover> and master with those files
[10:32:07 CET] <Weasel_> BeerLover: and you have created those dirs 320k, 128k, 64k and 32k?
[10:32:21 CET] <BeerLover> no
[10:32:43 CET] <Weasel_> why not?
[10:33:26 CET] <BeerLover> If i use var_stream_map, it creates the directories if i give "%v/segment%d.ts" as hls_segment_filename
[10:33:54 CET] <BeerLover> is there any way i can create 320k/ 128k/ etc
[10:36:22 CET] <Weasel_> well, I don't know but in my experiments ffmpeg did not create directories, I had to use mkdir :)
[10:36:29 CET] <BeerLover> k
[10:36:34 CET] <BeerLover> one more question
[10:36:49 CET] <BeerLover> The bitrate I specify and the bitrate in the output is a little off
[10:38:01 CET] <BeerLover> I specified 128k and it's 143k
[10:38:59 CET] <BeerLover> Codec AVOption b (set bitrate (in bits/s)) specified for output file #1 (128k/index.m3u8) has not been used for any stream. The most likely reason is either w
[10:39:00 CET] <BeerLover> rong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
[10:47:25 CET] <Weasel_> I think you should probably do some encoding with those bitrates so they would be put into those files; your source seems to be ~130k and maybe it was copied straight into your 128k output
[10:54:42 CET] <BeerLover> Weasel_ here is the output log for a file with 1411kb/s
[10:55:57 CET] <BeerLover> 2 issues that i have are: 1) I need folder structure with <bitrate>/segment. If i do it manually, I have to edit the master.m3u8 also manually and update references. 2) the bitrates are different
[11:11:57 CET] <BeerLover> Solved the 1st issue
[11:12:36 CET] <BeerLover> used names in variant streams
[11:12:45 CET] <BeerLover> but the bitrate problem persists
[11:13:05 CET] <BeerLover> I tried using -c:a aac -b:a:0 320k
[11:13:30 CET] <BeerLover> but the output is 287k
[11:14:44 CET] <cehoyos> Try to encode your input file standalone to find out what the issue is
[11:14:56 CET] <cehoyos> (it could be the silent input cannot reach a high bitrate)
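For reference, a sketch of the var_stream_map approach that was worked out above (paths, bitrates and the .wav input are placeholders; note the native aac encoder treats -b:a as a target, so the measured rate will be a bit off, as seen in the discussion):

```shell
# Two AAC renditions of one audio input; the name: field of
# var_stream_map is substituted for %v, giving 320k/ and 128k/ dirs.
mkdir -p 320k 128k    # older ffmpeg versions don't create the dirs
ffmpeg -i song.wav -map 0:a -map 0:a \
  -c:a aac -b:a:0 320k -b:a:1 128k \
  -f hls -hls_time 4 \
  -var_stream_map "a:0,name:320k a:1,name:128k" \
  -master_pl_name master.m3u8 \
  -hls_segment_filename "%v/segment%d.ts" "%v/index.m3u8"
```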
[11:30:47 CET] <pagios> good morning community, i am doing a mp4 extraction using index.m3u8 -live_start_index -10 -t 30 -c copy -bsf:a aac_adtstoasc test.mp4 it works well but it is SLOW, it takes 30seconds to give the output, is there a way to make it faster to get the mp4? Thank you
[11:46:26 CET] <pagios> will -preset ultrafast apply?
[11:47:05 CET] <pagios> i see the problem is in opening the .ts files
[11:49:29 CET] <squ> doesn't apply with copy
[11:50:07 CET] <Weasel_> pagios: -probesize 32 ?
[11:50:11 CET] <cehoyos> pagios: Complete, uncut console output missing.
[11:54:32 CET] <Weasel_> can't be, that -probesize 32 solved my corrupt stream problem, I had previously used it with ffplay only
[12:22:24 CET] <pagios> Weasel_, same speed
[12:27:04 CET] <pagios> cehoyos, https://pastebin.com/krXZ7Ya0 Weasel_  each of these lines takes around 1second to finish, that is what is making it slow, any way to read faster?
[12:47:04 CET] <cehoyos> Sorry for being unclear above: I am only able to parse command line including complete, uncut console output, not excerpts.
[13:05:13 CET] <Anderssen> Another topic: what would be the correct output format for dvb-teletext if I want plain text? Apparently "txt" is not a valid option, but now I've tried this line: ffmpeg -txt_format text -txt_page 100 -i Star\ Trek.m2t -map 0:0 -c:s mov_text  -t 60 -y -v verbose out.mp4 . Indeed it seems to output the teletext page (see https://pastebin.com/Da8juWE4 ), but obviously .mp4 is the wrong format.
[13:08:39 CET] <c_14> you probably want srt or ass
[13:08:44 CET] <c_14> those are text with timing/styling info
[13:09:59 CET] <c_14> though my ffmpeg is listing a "text" encoder for subtitles
[13:10:05 CET] <Anderssen> c_14: I think srt is limited to subtitles; I'm rather interested in teletext in general.
[13:14:07 CET] <JEEB> if you use the ASS output it will output as text subtitles *all* of the pages that are decoded if you tell the decoder to decode all pages
[13:16:36 CET] <BeerLover> @cehoyos Can you elaborate?
[13:19:17 CET] <Anderssen> JEEB: would I do that with "-f ass" ? If I add that to the line above I get the error "Exactly one ASS/SSA stream is needed". However, do you mean when I output "as text subtitles" it would treat "regular" (i.e. non subtitle pages) as subtitles? would it change the output format or so?
[13:20:53 CET] <Anderssen> I'm not sure if I'm making much sense: I would like to preserve all teletext pages (if possible as plain text)
[13:27:21 CET] <Anderssen> c_14, JEEB, I think I just composed the line wrong. I left out "-c:s mov_text" now and I just chose "out.srt" as the output file, and it did the output as plain text. thanks!
[13:29:16 CET] <cehoyos> BeerLover: You provided a complicated command line that produces several outputs - if you have a problem with audio encoding, test with one input and one output file and see if the issue is reproducible.
[13:29:43 CET] <BeerLover> k
[13:29:45 CET] <cehoyos> Anderssen: You cannot preserve teletext using subtitles
[13:30:10 CET] <Anderssen> you mean using srt? it ouputs plain text though
[13:30:42 CET] <cehoyos> If you want to preserve teletext (why??), put the teletext in a transport stream, I suspect this is the only container that accepts it.
[13:33:14 CET] <JEEB> you can either get it out as bitmap or ass, which might be "good enough" for your use case
[13:33:28 CET] <BeerLover> @cehoyos I tried with one map and 1 var stream
[13:33:36 CET] <BeerLover> still bitrate is less in the output
[13:33:44 CET] <JEEB> but of course if you want the actual teletext with *everything* as-is
[13:33:49 CET] <JEEB> then you need the teletext stream
[13:33:51 CET] <JEEB> as-is
[13:39:31 CET] <Anderssen> cehoyos, JEEB: that's a possibility I thought about, but the teletext stream is somewhat repetitive or redundant; it's more comfortable if I have one text file (or maybe an html file) where I can view the whole thing.
[13:39:58 CET] <cehoyos> Then I believe you have already found a solution or not?
[13:40:07 CET] <Anderssen> cehoyos: as to why: that's a long story; I like the information in there ;)
[13:41:07 CET] <Anderssen> cehoyos: yes, unless of course there is a way to output in html or something (like alevt for example which produces html files which look like the original; that is with all the colors and so on)
[13:41:50 CET] <cehoyos> FFmpeg does not support this and once you used FFmpeg to create a text (subtitle) file, it is not possible anymore to create the html (or so) file
[13:42:12 CET] <pagios> good morning community resending my question , i am doing a mp4 extraction using index.m3u8 -live_start_index -10 -t 30 -c copy -bsf:a aac_adtstoasc test.mp4 it works well but it is SLOW, it takes 30seconds to give the output, is there a way to make it faster to get the mp4? Thank you
[13:42:48 CET] <pagios> , https://pastebin.com/krXZ7Ya0 Weasel_  each of these lines takes around 1second to finish, that is what is making it slow, any way to read faster?
[13:43:24 CET] <Anderssen> cehoyos, I'm also happy with it being plain text; preserving the colors etc. would be icing on the cake so to speak.
[13:48:57 CET] <JEEB> Anderssen: try the txt_format AVOption with "ass" as the value
[13:49:09 CET] <JEEB> and save as .ass
[13:49:29 CET] <JEEB> that should contain text, colors and positioning
[13:49:38 CET] <JEEB> not perfect, of course
[13:49:54 CET] <JEEB> you can then preview the ass with mpv or aegisub or something if it's "good enough"
[14:13:00 CET] <pagios> JEEB,  any idea?
[14:13:10 CET] <pagios> my issue is the segments being generated are small
[14:13:13 CET] <pagios> and the playlist is small
[14:13:19 CET] <pagios> segment size is 1sec, playlist is 4sec
[14:20:20 CET] <Anderssen> JEEB, thanks; can't get it to run right now, have to try later, thanks again!
[14:30:49 CET] <perseiver> Hello, I am using av_parser_init() to initialize the parser for the AMR-NB codec. But the code is not able to find the parser. I have compiled ffmpeg with amr-nb and amr-wb support; what else do I have to do?
[14:51:11 CET] <Weasel_> pagios: https://ffmpeg.org/ffmpeg-formats.html#toc-hls-2 hls_time & hls_list_size
[14:51:52 CET] <perseiver> how ffplay find AMR NB codec and parser
[14:57:50 CET] <durandal_1707> easy
[14:59:23 CET] <ritsuka> perseiver: probably there isn't a parser, just a demuxer and a decoder.
[15:00:28 CET] <perseiver> ok, why is this so?  Because it is third party library and is compiled as optional?
[15:00:50 CET] <ritsuka> maybe because it doesn't need a parser
[15:02:09 CET] <perseiver> thanks ritsuka: I will have to find another way to use AMR. At least I want to read an AMR file and stream it using the ffmpeg library
[15:02:12 CET] <durandal_1707> amr codecs do not have/use parsers
[15:02:31 CET] <durandal_1707> perseiver: you never provided code you use
[15:03:07 CET] <ritsuka> what's wrong with the demuxer?
[15:03:44 CET] <perseiver> Actually I am trying to edit decode_audio.c that comes in the examples directory of ffmpeg. There is code that parses AV_CODEC_ID_MP2; I was trying to change it like below
[15:03:59 CET] <perseiver>   codec = avcodec_find_decoder(AV_CODEC_ID_MP2);
[15:03:59 CET] <perseiver>   if(!codec) {
[15:03:59 CET] <perseiver>     fprintf(stderr, "Codec not found\n");
[15:03:59 CET] <perseiver>     exit(1);
[15:03:59 CET] <perseiver>   }
[15:04:00 CET] <perseiver>   parser = av_parser_init(codec->id);
[15:04:03 CET] <perseiver>   if(!parser){
[15:04:07 CET] <perseiver>     fprintf(stderr, "Parser not found\n");
[15:04:23 CET] <perseiver> But after compile I am always getting output as "parser not found"
[15:04:36 CET] <perseiver> but not for other codec
[15:05:10 CET] <durandal_1707> perseiver: because amr codecs do not need parsers
[15:05:36 CET] <durandal_1707> you can not simply copy and paste code you do  not understand
[15:07:41 CET] <perseiver> ok. here is my code. what I am trying https://pastebin.com/3Qd7reDQ
[15:09:39 CET] <durandal_1707> this is not a trivial task, find someone who knows how to code
[15:12:24 CET] <perseiver> durandal_1707: ok, I need a simple guide on how to use the AMR-NB, AMR-WB and libx264 codecs which I have compiled with ffmpeg. It's a third-party library, and this is the first time I've learned that some codecs don't require a parser. There must be a way to use the current build, since the ffplay command is successfully able to find the AMR codec and stream it
[15:12:57 CET] <perseiver> If anyone can guide me, it will be very useful to me.
[15:18:56 CET] <alone-x> hello, is it possible add black border only to the right corner?
[15:18:57 CET] <alone-x> ffmpeg -i 123.mkv -filter_complex "scale=7387:900, pad=8580:900:598:0" 124.mkv
[15:19:17 CET] <alone-x> i need add border from 7387 to 8580..
[15:19:50 CET] <alone-x> this line it's ok - but i need only to the right.... ;(
[15:36:26 CET] <alone-x> .
[15:42:01 CET] <pagios> Weasel_, index.m3u8 -live_start_index -10 -t 10 -c copy -bsf:a aac_adtstoasc test.mp4 <--- if it runs at timeX, it should extract timeX-10sec up to timeX, in my case it is extracting timeX up to timeX+10
[15:42:56 CET] <pagios> hls fragment size is 1s and hls_playlist_length is 40sec
[16:08:27 CET] <void09> I have 2 video streams that come from the exact same source, but have errors/corruption in 2 different places. They have both been converted to mkv, one by ffmpeg, the other with mkvtoolnix. what is the raw-est format i can use to convert them to (audio/video) so I get the same byte stream so i can try to stich the good bytes from the other one in the places where they are corrupt ?
[16:08:58 CET] <void09> I assume the mkv files to differ in the stream, from ffmpeg and mkvtoolnix
[16:19:22 CET] <void09> hmm, but if i convert both from ffmpeg, i should get the same stream though, no matter the container i used, right?
[16:38:03 CET] <Weasel_> pagios: ok, you were on receiving end... I tried -live_start_index option with hls and it did not seem to have any effect. couldn't try with dash as my ffmpeg seems to miss demuxer for it
[16:38:37 CET] <pagios> Weasel_, yea on receiving end
[16:38:42 CET] <pagios> i need to client on the viewer
[16:38:46 CET] <pagios> clip*
[16:44:36 CET] <Weasel_> one could buffer those 10 seconds with gstreamer queue and then save those when triggered. i dont know enough about ffmpeg
[16:47:53 CET] <void09> it needs to be a perfect merge
[17:45:54 CET] <Weasel_> void09: I would copy streams out of containers and compare those
[17:48:05 CET] <void09> how :/
[17:48:40 CET] <Weasel_> ffmpeg -i file.mkv -c copy stream.h264
[17:49:05 CET] <Weasel_> output name depends of the codecs
[17:50:56 CET] <cehoyos> For most codecs, you can use -f rawvideo to avoid the naming issue
[17:51:19 CET] <cehoyos> (Doesn't work for vorbis, speex, theora)
[18:35:49 CET] <alone-x> hello, can i add only one black border to right side of file?
[18:36:32 CET] <durandal_1707> alone-x: yes you can
[18:36:41 CET] <alone-x> ffmpeg -i 123.mkv -filter_complex "scale=7387:900, pad=8580:900:598:0" 124.mkv
[18:36:55 CET] <durandal_1707> read carefully documentation
[18:37:03 CET] <durandal_1707> nobody gonna do your work
[18:37:18 CET] <alone-x> ok Durandal. thank you
[18:47:04 CET] <void09> any good binary files visualizing/compare/merge etc gui tool ?
[18:48:19 CET] <void09> need to strip bytes 5-10 from a file
[18:49:12 CET] <alone-x> VBinDiff-3.0_beta5.zip
[18:49:26 CET] <alone-x> and fc (under windows)
[19:03:45 CET] <jemius> I have awkward delays between audio and video of maybe 100-500 ms after filtering some frequencies with audacity. Any suggestions how I might find out where the problem is? I did not cut anything :(
[19:06:56 CET] <durandal_1707> phase issues, audacity is far from perfect tool
[19:08:26 CET] <jemius> durandal_1707, so it has nothing to do with the stream's exact length being changed?
[19:12:44 CET] <alone-x> ffmpeg -i 123.mkv -filter_complex "pad=8580:900:7387:0:violet" 124.mkv
[19:12:53 CET] <alone-x> Durandal why it's not ok?
[19:13:12 CET] <alone-x> it's adding but from the left corner
[19:13:15 CET] <durandal_1707> alone-x: i can not help you, as you never provided input file
[19:13:47 CET] <alone-x> 7387*900
[19:13:59 CET] <alone-x> it's all about file
[19:14:19 CET] <durandal_1707> alone-x: set width and height of output video
[19:14:33 CET] <durandal_1707> and x/y where you need to set input video
[19:14:49 CET] <durandal_1707> everything outside will be black
[19:15:39 CET] <durandal_1707> jemius: i can not guess, because i do not know what have been done with _filtering some frequencies with audacity_
[19:15:39 CET] <alone-x> i need 8580 from 7387 by adding black to the right corner
[19:16:12 CET] <kepstin> alone-x: then adjust the parameters to the pad filter to place the video over on the left.
[19:17:21 CET] <alone-x> well, there is no other parameters : w h x y
[19:17:59 CET] <alone-x> Add paddings with the color "violet" to the input video. The output video size is 640x480, and the top-left corner of the input video is placed at column 0, row 40
[19:18:00 CET] <alone-x> pad=640:480:0:40:violet
[19:18:28 CET] <kepstin> yes, the parameters you've used place the video over on the right side, so the left side has black
[19:18:50 CET] <kepstin> so change them to put the video on the left side, then you'll have black on the right
[19:19:23 CET] <alone-x> kepstin, how can i add to 7387 ->8580?
[19:19:31 CET] <alone-x> at the right corner?
[19:20:27 CET] <durandal_1707> how much padding you need from left and from right?
[19:20:43 CET] <kepstin> the parameters are, in order, width of the output video, height of the output video, how far from the left side the video will be placed in the area, and how far from the top side the video will be placed in the area
[19:20:48 CET] <alone-x> 8580-7387=
[19:21:01 CET] <alone-x> 1193
[19:21:02 CET] <kepstin> so if you don't want black on the left, then put the video over on the left - i.e. set x=0
[19:21:57 CET] <alone-x> kepstin i dont understand how can i move adding pad... from left corner to right
[19:22:22 CET] <kepstin> this is super simple. do you need me to draw you a picture?
[19:22:23 CET] <alone-x> all i need just put 1193*900 to the right top corner of my 7387*900 video
[19:22:34 CET] <kepstin> the positions are relative to top left
[19:22:51 CET] <kepstin> and the positions set the position of the video within the area, not the position of the padding
[19:22:54 CET] <alone-x> well, i now left top = 0 0
[19:23:05 CET] <alone-x> bottom right = 7387 900
[19:23:20 CET] <alone-x> it's a super simple. but i cant do with filter
[19:23:42 CET] <jemius> Are there any other recommended tools or software libraries providing notch and lowpass filters? ffmpeg's filters themselves don't suit my needs
[19:24:26 CET] <kepstin> alone-x: it's super simple, the filter can do it just fine, you're just not understanding how it works
[19:24:51 CET] <alone-x> ok, where is parameter of shift? w:h:x:y right?
[19:25:08 CET] <durandal_1707> jemius: really? what are you attempting to do?
[19:25:36 CET] <alone-x> kepstin thank you - i will try to re-read it
[19:25:40 CET] <kepstin> so you have an input video that's 7387×900  right? and you want the output video to be 8580×900? And you want to do that by adding a black bar to the right side?
[19:25:43 CET] <durandal_1707> ffmpeg have fine notch and lowpass filter, if you  need anything higher order use aiir or afir
[19:25:47 CET] <alone-x> yes
[19:25:52 CET] <alone-x> 7387*900
[19:26:03 CET] <kepstin> alone-x: then you want to use pad=8580:900:0:0
[19:26:05 CET] <alone-x> all i need 8580 * 900
[19:26:27 CET] <alone-x> kepstin in that case will be at the left corner
[19:26:35 CET] <kepstin> yes, that's what you asked for
[19:26:35 CET] <alone-x> but i need at the right
[19:26:40 CET] <kepstin> you want the video to be on the left
[19:26:44 CET] <alone-x> no! i need at the RIGHT!
[19:26:46 CET] <kepstin> that way the black bar will be on the right
[19:27:06 CET] <alone-x> i need add it to the right
[19:27:09 CET] <alone-x> is it possible?
[19:27:15 CET] <jemius> durandal_1707, the maximum number of poles is limited to 2 (lowpass), what means -40dB/decade. That's not enough
[19:27:21 CET] <kepstin> do you want the black bar on the left or the right?
[19:27:30 CET] <alone-x> to the right
[19:27:33 CET] <alone-x> black bar yes
[19:27:38 CET] <kepstin> then i told you how to do that
[19:27:47 CET] <alone-x> 1193*900 the w and h of this bar
[19:27:52 CET] <alone-x> how?
[19:28:08 CET] <alone-x> pad=8580:900:0:0&
[19:28:08 CET] <alone-x> ?
[19:28:14 CET] <durandal_1707> jemius: if you need IIR filter, then cascading biquad/lowpass(with different q) filter multiple times is all what you need
[19:28:20 CET] <kepstin> alone-x: <kepstin> and the positions set the position of the video within the area, not the position of the padding
[19:28:42 CET] <kepstin> alone-x: the black bar fills the space where there is no video
[19:29:09 CET] <alone-x> kepstin thank you a lot
[19:29:12 CET] <alone-x> i will try
[19:29:41 CET] <jemius> durandal_1707, I suspected that if there's no switch to do it automatically, cascading manually would do harm
[19:29:49 CET] <alone-x> ffmpeg -i 123.mkv -filter_complex "pad=8580:900:0:0:violet" 124.mkv
[19:29:56 CET] <alone-x> no, it's wrong
[19:34:47 CET] <durandal_1707> jemius: https://github.com/nlphacker/Audacity/blob/570c5651c5934469c18dad25db03f05076f91225/nyquist/dspprims.lsp#L538
[19:35:20 CET] <durandal_1707> butterworth lowpass8 is cascade of 4 lowpass2 filters
[19:35:30 CET] <durandal_1707> and can be done with ffmpeg lowpass filter
[19:36:17 CET] <kepstin> alone-x: i drew a picture for you: https://www.kepstin.ca/dump/padfilter.png
[19:36:19 CET] <durandal_1707> alternatively if you know (there are tools that can generate it) coefficients for notch filter, you can use aiir filter
[19:37:10 CET] <durandal_1707> kepstin: that picture should be part of our wiki
[19:37:13 CET] <jemius> I do know some basics about digital data processing, but maybe not enough to do complex stuff. Anyways, it seems it has to be done manually
[19:37:33 CET] <kepstin> i assume someone has made that picture before but better.
[19:37:50 CET] <kepstin> but hey, i have a pen tablet that i felt like playing with
[19:47:38 CET] <durandal_1707> jemius: something like this: lowpass=f=500:w=0.57622191,lowpass=f=500:w=0.66045510,lowpass=f=500:w=0.94276399,lowpass=f=500:w=2.57900101  f=500 change to some other value
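durandal_1707's cascade wrapped into a complete command; a sketch assuming placeholder filenames (in.wav/out.wav) and the per-section width values quoted above:

```shell
# build the four-section lowpass cascade (an 8th-order Butterworth-style
# rolloff at 500 Hz) from the per-section width/Q values
QS="0.57622191 0.66045510 0.94276399 2.57900101"
FILTER=""
for q in $QS; do
  # append each lowpass section, comma-separating after the first
  FILTER="${FILTER:+$FILTER,}lowpass=f=500:w=$q"
done
echo "$FILTER"
# then apply it as an audio filter chain:
# ffmpeg -i in.wav -af "$FILTER" out.wav
```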
[19:49:46 CET] <jemius> durandal_1707, it seems one also would have to design his own notch filter
[19:53:45 CET] <durandal_1707> jemius: not if you know its coefficients, need some background in dsp processing and you can use aiir filter
[20:05:46 CET] <jemius> durandal_1707, hm, thx, I might have to read some literature first. I've looked into the filters, and they have a ton of parameters I don't fully understand
[20:18:27 CET] <durandal_1707> jemius: https://www.micromodeler.com/dsp/
[20:20:11 CET] <durandal_1707> for high order display of coefficients you need to pay, but there is a catch, and you can use aiir just fine, as it supports zp (zero-poles) format
[20:22:03 CET] <alone-x> kepstin thank u
[20:23:40 CET] <alone-x> Secure Connection Failed
[20:25:09 CET] <alone-x> via TOR it's ok
[20:25:15 CET] <kepstin> my site's ssl is fine, either your os/browser are out of date, or you live in a country where it's blocked for some reason :/
[20:25:43 CET] <alone-x> kepstin, no other ssl it's working ;(
[20:25:48 CET] <alone-x> i dont know nevermind
[20:27:04 CET] <jemius> durandal_1707, uff, thx
[20:28:26 CET] <alone-x> kepstin, still dont understand ;)
[20:28:49 CET] <alone-x> all i need just add some bar with width
[20:29:12 CET] <kepstin> yes, and i told you how to do it, gave you the exact parameters to use even.
[20:29:57 CET] <kepstin> remember that the pad filter doesn't add bars - instead it works by making a new empty frame the size you asked for, and then it puts the input video somewhere in that frame
[20:30:03 CET] <kepstin> so the space leftover becomes bars
[20:30:54 CET] <kepstin> so if you want a bar on the right, you tell it to make an output which is wider than the input, and then put the input video on the left side.
[20:31:00 CET] <kepstin> then the leftover space on the right becomes a bar.
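kepstin's explanation as a sketch; the 1193x900 bar size comes from alone-x earlier, so the input width here is assumed to be 8580 - 1193 = 7387:

```shell
IN_W=7387; IN_H=900; OUT_W=8580   # assumed input geometry (see above)
# pad makes a new OUT_W x IN_H frame and places the video at x=0:y=0,
# so the leftover space on the right becomes the black bar
PAD="pad=${OUT_W}:${IN_H}:0:0:black"
BAR_W=$((OUT_W - IN_W))
echo "$PAD  (bar width: ${BAR_W}px)"
# then: ffmpeg -i 123.mkv -vf "$PAD" 124.mkv
```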
[20:34:54 CET] <alone-x> <kepstin> alone-x: then you want to use pad=8580:900:0:0
[20:34:56 CET] <alone-x> this one?
[20:40:53 CET] <alone-x> thank you kepstin, it seems to me i know the reason
[20:41:03 CET] <alone-x> why it seems to me not working
[20:41:47 CET] <alone-x> i made it with remote admin and there was some overlay transferring bug
[20:41:49 CET] <alone-x> !!!
[20:41:54 CET] <alone-x> locally it's fully ok
[20:46:49 CET] <FlipFlops2001> When using the native AAC codec, what is the dB/oct of the -cutoff parameter?
[20:52:28 CET] <FlipFlops2001> Note: AAC is a very good lossy codec, however, cutoff, PNS, TNS are not hi-fi, nor do they save a significant amount of bytes. Disabling them actually gives me slightly smaller, better sounding results.
[20:53:40 CET] <kepstin> at low bitrates adjusting the cutoff is a tradeoff between muddy sound vs. screechy artifacts, depends on content which is worse :/
[20:54:20 CET] <kepstin> that said, i normally use fdk encoder instead of ffmpeg builtin.
[20:56:30 CET] <kepstin> interesting question the lowpass - i have no idea what type of filter is used. i wonder if it might actually be done in the frequency domain (after the mdct?)
[20:58:04 CET] <FlipFlops2001> @kepstin: Granted, I'm not interested in bitrates that low. The FDK encoder is excellent, but cannot encode at a 96k sample rate. The trade off, I feel, is worth it. I perform all my audio manipulations (eq, gain, etc.) @ 384kHz rf64/WAV then transcode them back to 96k AAC.
[20:58:43 CET] <kepstin> there's no point in 96kHz audio for humans to listen to, and if you're archiving you shouldn't be using a lossy codec :/
[21:00:11 CET] <kepstin> using 44.1 or 48kHz should allow you to use substantially lower bitrate for the same quality without having to worry about disabling the lowpass filter :/
[21:02:02 CET] <kepstin> you want to use a lowpass filter of some sort anyways to remove ultrasonics incase the player has issues with aliasing or the amplifier has distortion
[21:02:16 CET] <FlipFlops2001> @kepstin: In a recent listening test, sampling rate had to be in the upper 400kHz before humans could not tell the difference between analog and digital. I agree and I've been an audio eng. for 30+ years.
[21:02:38 CET] <kepstin> (i'd expect more good amp setups to already have a lowpass filter builtin to avoid distortion issues)
[21:03:10 CET] <kepstin> are you sure you weren't looking at a dsd test or something? comparing sample rates for dsd and pcm doesn't make sense :)
[21:06:16 CET] <FlipFlops2001> @kepstin: Where are you getting this info from? The best audio amplifiers are those with a high slew-rate and 100kHz response or better. Humans can't hear above 20k (in most cases 18k) but the overtones produced with responses 20k+ is what enables the perception of transparency.
[21:06:50 CET] <kepstin> i assume when they were doing this test with 400kHz sample rates they added a bunch of extra ultrasonic drivers with their own crossovers to reproduce the sounds that humans can't hear?
[21:07:26 CET] <kepstin> the ultrasonics don't do anything unless your system is introducing distortion that causes them to become audible :/
[21:07:51 CET] <kepstin> (and if that's the case, you could pre-apply that distortion if you really wanted to)
[21:08:52 CET] <kepstin> and also, since i haven't seen this, i hope this listening test was done double-blind, with review to check that people aren't doing something like listening for differences in relay clicks ;)
[21:08:59 CET] <kepstin> (a real problem from a past listening test)
[21:09:27 CET] <kepstin> otherwise it's completely meaningless.
[21:10:34 CET] <FlipFlops2001> @kepstin: This is simple: If a crash cymbal produces frequencies @ 15kHz+, and your response is limited to 20kHz, then the 1st overtone of these frequencies is 0, since the 1st overtone of 15kHz is 30kHz. Therefore anything at 15kHz is a sine-wave. Does that make sense?
[21:11:28 CET] <kepstin> crash cymbals have lower frequency components than that...
[21:11:48 CET] <kepstin> and also aren't pure single tones
[21:12:17 CET] <FlipFlops2001> @kepstin: I know that, but they also have upper frequencies.
[21:12:32 CET] <kepstin> which people can't hear, and have no effect.
[21:16:07 CET] <FlipFlops2001> @kepstin: We'll never agree on this, but running my live mixing console @ a sampling rate of 96kHz, rather than 48kHz while mixing The O'Jays at an arena, sure sounds better. This has nothing to do with the limitations of the human ear. It has to do with what frequencies above 20k does to the lower audible range.
[21:16:52 CET] <kepstin> doing intermediate mixing at a higher sample rate can make improvements if doing multiple filters because it gives additional room for filter overshoots or whatnot
[21:16:55 CET] <kepstin> i'm not disputing that
[21:17:05 CET] <kepstin> has nothing to do with the reproduction of the final sound tho
[21:18:03 CET] <FlipFlops2001> @kepstin: Yes it does. Q: Have you ever done analog recording?
[21:19:38 CET] <FlipFlops2001> @kepstin: Ever listened to a "Crusaders" analog recording?
[21:20:01 CET] <kepstin> I haven't, but my impression was that most analog recording equipment has significantly less bandwidth than modern digital stuff and more distortion.
[21:20:26 CET] <kepstin> although you could do things like run tape at half-speed to help it out a bit
[21:25:35 CET] <FlipFlops2001> @kepstin: Really? Modern digital stuff? 48kHz cuts off (and I mean done) at 24kHz. Running tape @ 1/2 speed lowers your head-bump frequency and reduces upper freq resp/extends lower freq resp. (Tape speeds: 30 ips or 15 ips)
[21:27:37 CET] <kepstin> i mean, you can record at higher sample rates digitally if you really want (with much less noise than the equivalent tape speeds), assuming they're within the range your microphone captures, but what's the point unless you're doing scientific research involving ultrasonics?
[21:28:32 CET] <furq> damn kids these days with their zeroes and ones
[21:28:49 CET] <FlipFlops2001> @kepstin: By careful biasing and recording eq, I was able to accomplish response on a Studer 2trk 1/4in machine flat beyond 20kHz and significant results @ 30kHz. Live to 2trk recordings sounded great.
[21:29:21 CET] <FlipFlops2001> @furq: I'm 52 years old, how 'bout U?!
[21:29:47 CET] <kepstin> i'm sure it did, but it would have also sounded great digitally recorded at 48kHz, unless part of what made it "sound great" was distortion added in the recording/mixing which you didn't reproduce in the digital mix.
[21:30:58 CET] <FlipFlops2001> I learned with analog, vinyl, noise-floor restrictions. How 'bout U?
[21:31:28 CET] <TheAMM> Here's a mellower topic: what properties can vary in a h.264 stream while still allowing the streams to be concatenated safely and without any expected ill effects?
[21:31:33 CET] <kepstin> fun fact: apparently bit depth in digital recordings behaves very similarly to noise floor restrictions in analog :)
[21:32:08 CET] <TheAMM> I'm going to be creating the files myself with ffmpeg so no random sources - resolution is a given and subsampling, but bitrate/crf? Timing somehow?
[21:32:27 CET] <kepstin> TheAMM: hmm, i know x264 has a special option to make files concatable
[21:32:34 CET] <kepstin> so it should be as simple as using that.
[21:32:34 CET] <furq> TheAMM: pretty sure it's just profile/level stuff
[21:32:59 CET] <FlipFlops2001> @kepstin: You hear differently than I. Let's leave it at that. There are a lot of other people in my industry that agree with me. Who have you worked with and what studios have you worked in?
[21:33:02 CET] <furq> so frame size, framerate, number of refs
[21:33:27 CET] <kepstin> TheAMM: --stitchable in the x264 cli docs, i presume it can be set with -x264opts
[21:33:40 CET] <TheAMM> Yeah, looking at that
[21:33:46 CET] <furq> maybe also bframes, some players will throw a fit if you have more bframes than set in the initial sps
[21:33:59 CET] <furq> pretty sure some players are fine with it though
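A sketch of the approach kepstin and furq describe, assuming placeholder filenames (a.mov, b.mov) and that both encodes already share resolution, framerate, and profile; only the shared option string is built here, and the ffmpeg invocations are left as comments:

```shell
# encode options shared by both segments; stitchable=1 keeps x264 from
# making stream-global assumptions that would break at a join point
COMMON_OPTS="-c:v libx264 -preset medium -crf 20 -x264opts stitchable=1"
echo "$COMMON_OPTS"
# ffmpeg -i a.mov $COMMON_OPTS a.ts
# ffmpeg -i b.mov $COMMON_OPTS b.ts
# mpeg-ts segments can then be joined with the concat protocol:
# ffmpeg -i "concat:a.ts|b.ts" -c copy joined.mkv
```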
[21:34:31 CET] <kepstin> FlipFlops2001: i'm going by reviewed studies of human hearing range combined with double-blind testing results that i've looked at. "people agree with me" doesn't mean anything without the testing to back it up :)
[21:35:03 CET] <FlipFlops2001> @kepstin: Stop looking at studies and start listening.
[21:35:11 CET] <TheAMM> I agree with kepstin, qed he wins
[21:36:09 CET] Action: kepstin doesn't have any playback equipment rated with frequency responses sufficiently over 20kHz to do any meaningful testing about high sample rate playback.
[21:36:34 CET] <TD-Linux> luckily we have this video https://xiph.org/video/vid2.shtml
[21:36:35 CET] <kepstin> other than whether playing ultrasonics through my equipment causes audible distortion, i suppose. i could test that.
[21:36:53 CET] <furq> TD-Linux: thank you
[21:36:59 CET] <furq> i've been sat here waiting for a xiph guy to chime in
[21:37:12 CET] <kepstin> that said, there's probably enough lowpass filters in my playback chain that the ultrasonics wouldn't be reproduced at all, and i have no way to check :)
[21:42:03 CET] <FlipFlops2001> @kepstin: There is IM distortion and HM distortion. Which are you referring to?
[21:42:52 CET] <CounterPillow> hello I am here for the boomer audiophile
[21:42:54 CET] <kepstin> intermodulation distortion would be what bad equipment does when given ultrasonics it can't reject or handle.
[21:42:58 CET] <FlipFlops2001> @kepstin: And don't forget slew-rate distortion.
[21:43:36 CET] <CounterPillow> FlipFlops2001: at 52, your hearing is worse than that of most people in here.
[21:44:01 CET] <FlipFlops2001> for everybody: All audio gear has IM distortions. Not having it is like trying to reproduce a pure sine-wave.
[21:46:07 CET] <FlipFlops2001> @CounterPillow: This is a quiz: Where (in frequency) does the human hearing first begin to worsen? Since you apparently know how I hear.
[21:47:17 CET] <CounterPillow> the upper spectrum of the hearing range, which is why kids can hear your shitty anti-vermin noise device on your car but you can't.
[21:47:41 CET] <kepstin> in most people, high frequency dropoff, usually starting not much past teens. listing to loud noise/music makes it go faster.
[21:47:53 CET] <FlipFlops2001> @CounterPillow: What is an "upper spectrum"?
[21:48:03 CET] <CounterPillow> *upper end of the audible spectrum
[21:48:53 CET] <CounterPillow> which for humans ends at 20 kHz. As much as you want to tell people you have a "golden ear" and exceptional bat-like hearing, everyone who has claimed this has never been able to prove it in double-blind tests.
[21:49:11 CET] <kepstin> to be specific, the highest frequency which is audible at levels below the pain threshold decreases with age.
[21:50:09 CET] <Hello71> FlipFlops2001: the main problem with your argument is that your argument disagrees with all carefully-conducted testing, the entire scientific community, and the fundamental basics of audio encoding known for several decades and used in every single actual audio codec.
[21:50:17 CET] <CounterPillow> Feel free to get a hearing test done and confirm this for yourself, good hearing labs will give you a nice plot of where your shitty spots are.
[21:50:23 CET] Action: kepstin assumes he can't hear anywhere near 20kHz anymore, but hasn't actually had it tested.
[21:50:36 CET] <Hello71> ... lossy audio codec, anyways.
[21:50:43 CET] <kepstin> and yeah, individuals may have hearing problems in other areas too of course.
[21:51:37 CET] <CounterPillow> inb4 something something quantum physics you can't test my incredibly good hearing!!11!1
[21:51:38 CET] <FlipFlops2001> @Everyone: In a human's cochlea, the longest hairs are at the upper-midrange (+- 3kHz). Those are the hairs that become flat first. Therefore, that is where you start to lose your hearing first. That is also the range where we discriminate consonant sounds. That is why people who are losing their hearing often say they can hear but can't understand what people are saying.
[21:52:24 CET] <Hello71> so... what?
[21:52:56 CET] <Hello71> this is literally a climate change denier argument: deny, distort, redirect
[21:53:02 CET] <kepstin> sure, but they can still *hear* those frequencies, maybe compensate for them with an eq bump in the upper midrange or a hearing aid in extreme cases.
[21:53:18 CET] <kepstin> (in most cases, obviously "not all" applies here)
[21:53:28 CET] <FlipFlops2001> After losing the upper-midrange in your old-fart hearing, then the top-end goes, then everything else. Hope I don't live that long.
[21:53:37 CET] <jemius> Is it known yet when AV1 will be fully usable (in ffmpeg) ?
[21:53:52 CET] <kepstin> jemius: define fully usable?
[21:54:03 CET] <furq> it's already usable in ffmpeg, there's just no internal decoder
[21:54:14 CET] <jemius> not experimental anymore, works as intended, meaning it creates usable, unbroken videos
[21:54:18 CET] <FlipFlops2001> @kepstin: Yes all. Human physilogic. (is that spelled right?)
[21:54:19 CET] <kepstin> iirc the rav1e encoder driver got merged recently, and dav1d works as a decoder.
[21:54:34 CET] <Hello71> aom has been available for ages
[21:54:35 CET] <furq> it's created usable and unbroken videos for about a year
[21:54:42 CET] <furq> which is also how long it takes
[21:54:43 CET] <jemius> A Rust encoder in ffmpeg?
[21:54:58 CET] <kepstin> it's not in ffmpeg, external library. With a C api.
[21:55:02 CET] <Hello71> and one would hope that aom works fine if they're doing all their standards development based on that...
[21:55:21 CET] <furq> they froze the spec for realsies this time at least a year ago now
[21:55:21 CET] <jemius> kepstin, so where did it get merged to?
[21:55:24 CET] <CounterPillow> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2802451/ "Although the rate of change was greater at lower frequencies, the largest absolute changes in hearing thresholds with age were observed at higher test frequencies."
[21:55:32 CET] <Hello71> jemius: what?
[21:55:35 CET] <FlipFlops2001> I'm done. Just stickn' 'round to see what else is up!
[21:56:19 CET] <kepstin> jemius: git master, where else would it go?
[21:56:35 CET] <furq> there's been dav1d and aom support for a long time anyway
[21:56:47 CET] <jemius> I thought it's the ffmpeg philosophy to have all codecs statically linked
[21:57:01 CET] <furq> it isn't but why would that make a difference
[21:57:03 CET] <Hello71> that is both wrong and unrelated
[21:57:06 CET] <furq> you can static link rust libraries if you want
[21:57:12 CET] <Hello71> yeah, that
[21:57:16 CET] <FlipFlops2001> @CounterPillow: Be specific-What frequencies? 3.5kHz could be considered "higher freq", so could 15kHz. Be specific.
[21:57:19 CET] <kepstin> I'm honestly disappointed that the c api for rav1e isn't called crav1e
[21:57:29 CET] <CounterPillow> I've literally linked the paper you twat
[21:58:14 CET] <jemius> ts ts ts, that's some vocabulary
[21:58:26 CET] <FlipFlops2001> @CounterPillow: Just a question, I'll look at the paper U Dickhead.
[21:58:40 CET] <CounterPillow> >The frequencies at which threshold changes were greatest depended on the age of participants. For the 48–59 and 60–69 years age groups, for example, threshold changes were largest at higher frequencies (3–8 kHz). For the older age decades (≥80 years), the greatest changes in thresholds were noted for lower frequencies (0.5–2 kHz). As noted earlier, the smaller changes in threshold observed at higher frequencies are likely due to
[21:58:40 CET] <CounterPillow>  the greater baseline hearing loss at higher frequencies for the older adults. Accordingly, the dynamic range for potential change was reduced at higher frequencies.
[21:58:43 CET] <JEEB> CounterPillow is not yet in the zen of no care
[21:58:45 CET] <kepstin> CounterPillow: every time we say "high frequencies", we're referring to the frequency at a person's absolute threshold of hearing, aka the highest frequency that is audible when played at levels below the threshold of pain
[21:58:53 CET] <kepstin> er, FlipFlops2001 ^^
[21:58:55 CET] <JEEB> may he attain peace soon
[21:59:08 CET] <TD-Linux> kepstin, it was briefly crav1e when it was a separate repo
[21:59:09 CET] <furq> i won't hold my breath
[21:59:10 CET] <kepstin> yeah, i still haven't worked out all my care yet either.
[21:59:13 CET] <CounterPillow> JEEB: "soon" as in when this boomer kicks the bucket, which is most likely going to be before me.
[21:59:47 CET] <furq> haven't you got virgil texas tweets to top reply to
[22:00:01 CET] <CounterPillow> I missed their debate commentary :(
[22:13:24 CET] <jemius> Does anyone here know what happened to Daala and Theora back in the day? I only heard one of them got used for the VPx codecs
[22:15:34 CET] <JEEB> jemius: daala people work on av1/x
[22:15:35 CET] <kepstin> theora was based on one of the old vpx codecs, iirc
[22:15:54 CET] <JEEB> and theora is where some of the daala people came from
[22:15:59 CET] <JEEB> like derf etc
[22:17:12 CET] <CounterPillow> Theora was based on VP3 I believe, but I don't think VP8/9 are necessarily based on Theora in any way other than sharing common ancestry.
[22:19:49 CET] <jemius> did they reach a point where you could "use" them? (Using means you get proper videos, although with larger size than with MPEG)
[22:20:19 CET] <CounterPillow> Most codecs will produce proper videos if you throw enough bits at it
[22:20:31 CET] <CounterPillow> Some parts of Daala made it into AoM AV1, which is why #daala is now about rav1e AV1 encoder development, and much of VP10 made it into AoM AV1 too.
[22:21:25 CET] <jemius> so, realistically, the only video codecs which are somewhat on the level of the h.2xx family are VPx and now AV1 ?
[22:23:03 CET] <CounterPillow> That is correct, but if you are not a company you really don't need to worry about H.264/H.265 patents anyway.
[22:23:34 CET] <CounterPillow> Technically there is also that chinese codec davs2
[22:25:23 CET] <jemius> CounterPillow, how does this even work, why doesn't the mpeg foundation sue everyone to the ground for distributing the codec for free everywhere?
[22:25:35 CET] <CounterPillow> Because that's not how any of this works.
[22:25:58 CET] <jemius> It works this way with conventional industry patents
[22:27:31 CET] <klaxa> to build something according to a "conventional industry patent" you usually need more resources than a computer as it is with IP patents
[22:27:37 CET] <CounterPillow> H.264/H.265 are international standards ratified by government-sanctioned standards bodies for video compression. It just so happens that some of the industry who have contributed to the specification have patent claims on specific concepts used in the coding.
[22:28:23 CET] <CounterPillow> Most devices with hardware decoders get the necessary patent licenses as part of purchasing the decoder chip.
[22:29:20 CET] <klaxa> for the pi you can even retroactively purchase a license for the built-in hardware decoder for mpeg2 was it? because h264 was included by default i think?
[22:30:01 CET] <CounterPillow> Cisco has OpenH264 available because they pay so much to MPEG-LA they are allowed to ship everyone H.264 encoding/decoding capabilities without paying more licensing fees for it.
[22:31:40 CET] <CounterPillow> Also obviously nobody is insane enough to start suing random individuals because 1. lawsuits cost a lot, actual patent trolls are usually just after settlements, and MPEG-LA isn't a patent troll, 2. in various jurisdictions you have to face questions over the validity of software patents and the anti-competitive behaviour you're exhibiting
[22:35:36 CET] <furq> now hevc, on the other hand
[22:35:49 CET] <CounterPillow> yeah they have 3 patent pools because people got greedy
[22:35:53 CET] <furq> isn't it four now
[22:36:03 CET] <CounterPillow> possibly 5 by the time I finish typing this sentence.
[22:36:13 CET] <jemius> what the heck is a pool?
[22:36:41 CET] <CounterPillow> It's several companies with patent claims on a specific technology banding together so you can license all the patents covering that technology from one source.
[22:37:06 CET] <furq> mpeg-la, hevc advance, velos, and technicolor
[22:37:49 CET] <jemius> these people are annoying. Recently I've seen a presentation about codec2 and it said he only had to replace 5% of the code of comparable codecs, the rest was already open source / free
[22:38:04 CET] <CounterPillow> It's not about open-source/free
[22:38:09 CET] <jemius> hey, technicolor. They made the worst router I ever owned
[22:38:11 CET] <furq> bear in mind a lot of those patents arguably cover vp[89]/av1
[22:38:27 CET] <CounterPillow> All MPEG codecs are developed on public mailing lists, with an open reference encoder and decoder
[22:39:37 CET] <JEEB> at multimedia conferences you have people making jabs at how the "open" media stuff is actually much more closed during development than the formats they're trying to become an alternative to
[22:39:39 CET] <furq> google/aom have legal defense funds for licensees but they do not offer full indemnification
[22:39:44 CET] <furq> so that's nice
[22:40:06 CET] <JEEB> but yes, patent licensing is something you have to take up with a lawyer if you really want to make business
[22:40:21 CET] <JEEB> same for VPx/AVx or one of the ITU/ISO formats
[22:41:15 CET] <furq> with that said you can still theoretically get sued for using hevc even if you have four licenses to use it
[22:41:27 CET] <furq> so that's even nicer
[22:41:36 CET] <jemius> sounds good
[22:41:40 CET] <CounterPillow> Well, you can also get sued for using VPx or AV1
[22:41:43 CET] <JEEB> technically that goes for all formats
[22:41:44 CET] <furq> basically don't ever do anything
[22:41:47 CET] <JEEB> if you go that far
[22:41:49 CET] <furq> you'll be fine
[22:42:26 CET] <furq> JEEB: i'd say those are equally far away given nobody has yet been successfully sued for using vpx afaik
[22:43:14 CET] <JEEB> I am not sure how many legal cases there have been against formats and how many have been settled etc
[22:43:20 CET] <JEEB> IANAL as they say
[22:43:26 CET] <CounterPillow> Pretty sure that unless you're running netflix/youtube, nobody actually cares lol.
[22:43:46 CET] <furq> i know of at least one unsuccessful lawsuit against google for vp8 infringing h264 patents
[22:43:49 CET] <furq> i think by nokia
[22:44:05 CET] <JEEB> yes, nokia tried to troll around there and I'm not sure how successful that was, yes
[22:44:12 CET] <JEEB> although I don't remember how exactly that ended
[22:44:23 CET] <furq> anyway yeah you'll probably be fine
[22:44:24 CET] <void09> help, how to convert the 0x blabla ffmpeg error reporting offset into timestamp ?
[22:44:39 CET] <JEEB> void09: ?
[22:44:42 CET] <CounterPillow> ... what
[22:44:43 CET] <furq> that's a memory address, not a timestamp
[22:44:58 CET] <void09> I scanned a file for errors with ffmpeg and get the errors like:
[22:45:10 CET] <void09> [aac @ 0x55b3b5298400] channel element 0.0 is not allocated
[22:45:15 CET] <JEEB> yea that 's memory address
[22:45:30 CET] <JEEB> you would have to print the dts/pts of the packet you were handling
[22:45:39 CET] <void09> ok is there any way i can change it so I get at least a byte offset in the file ?
[22:45:57 CET] <JEEB> that'd be around the AVPacket, so not from that message no
[22:46:07 CET] <JEEB> also that's not necessarily corruption
[22:46:15 CET] <JEEB> that could just be that you're starting a stream in the middle
[22:46:23 CET] <void09> I am doing this at the moment ffmpeg -v error -i "outputvideo2.mkv" -f null - 2>test.log
[22:46:24 CET] <JEEB> like with broadcasts or live streams
[22:46:47 CET] <void09> JEEB: trying to error scan tv recordings that the recording software reported errors in
[22:47:09 CET] <JEEB> void09: yea that is not going to give you timestamps unless you do like -debug_ts or something
[22:47:13 CET] <void09> after i cut and make the .ts file into mkv sometimes the errors are gone, but sometimes a few still persist when running that command
[22:47:15 CET] <JEEB> I recommend just making an API client :P
[22:47:29 CET] <void09> what ? ;\
[22:47:32 CET] <JEEB> void09: if it's just in the beginning it just means that you didn't get enough packets to start decoding or whatever :P
[22:48:00 CET] <void09> JEEB: this is after cutting 15 minutes from start and end, so no
[22:48:14 CET] <JEEB> well even so? if you cut at arbitrary locations?
[22:48:34 CET] <JEEB> anyways, -debug_ts will get you timestamps
[22:48:36 CET] <void09> arbitrary ? I cut with ffmpeg, it should not produce errors that are not already in the stream
[22:48:36 CET] <jemius> Google started with the VP codecs because they wanted to avoid licensing foo, as I understood it. Does anyone know how it works when someone loads up mpeg material there; I assume it's transcoded on their servers?
[22:48:53 CET] <furq> void09: if you add -stats it'll print the stats line around the error messages
[22:48:59 CET] <furq> which is not a great solution but better than nothing
[22:49:15 CET] <furq> on account of it will print the stats line five million times and then there'll be three error messages
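furq's -stats suggestion as a sketch, using the filename from void09's command above; the command only decodes, sending output to the null muxer, so errors land in the log near a progress line with a time= readout:

```shell
# decode-only error scan; -v error suppresses everything but errors,
# -stats keeps the progress line interleaved with them on stderr
SCAN_CMD="ffmpeg -v error -stats -i outputvideo2.mkv -f null -"
echo "$SCAN_CMD"
# run it, capturing both the errors and the progress lines:
# $SCAN_CMD 2>scan.log
```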
[22:49:38 CET] <JEEB> debug_ts basically logs everything in the framework, but I do recommend making one's own API client where you can see at which AVPacket the decoding barfs and get whatever data you might need for that
[22:50:06 CET] <void09> JEEB: I am really far from that skill level
[22:50:31 CET] <JEEB> but I know that you can get with broadcast streams errors for AAC f.ex. if you don't exactly cut at specific packets, and that is 100% OK and doesn't mean the stream is actually broken
[22:50:41 CET] <JEEB> of course if it happens in the middle of stream that can be a problem
[22:50:51 CET] <JEEB> as in, after you've got some actual audio out
[22:50:56 CET] <JEEB> and successfully decoded
[22:50:58 CET] <void09> oh, I did not know that
[22:51:27 CET] <void09> oh since we're here, is there any way to tell ffmpeg when using -ss -to to cut to the nearest keyframe INCLUDING those timestamps ?
[22:51:47 CET] <void09> it just chooses the nearest keyframe which might not include the exact timestamp i gave it
[22:51:50 CET] <JEEB> -ss should include the timestamp, but mpeg-ts cutting is inexact
[22:51:58 CET] <JEEB> since mpeg-ts is not indexed format
[22:52:04 CET] <JEEB> and thus seeks are not exact
[22:52:36 CET] <JEEB> there is a flag in the API to include the set value in the seek, as in seek before it and not after
[22:52:41 CET] <JEEB> and that is set in ffmpeg.c by default afaics
[22:52:44 CET] <void09> it does not include the timestamp, just cuts to the nearest keyframe, so i always have to do -500ms +500ms to make sure it's included
[22:53:07 CET] <void09> since the keyframe is abot every 1 second or so
[22:53:10 CET] <JEEB> if you have this issue with correctly flagged non-mpeg-ts (mp4, mkv etc)
[22:53:26 CET] <JEEB> then it might be an issue, but I know for a fact that mpeg-ts seeking is not exact
[22:53:33 CET] <JEEB> I also know that ffmpeg.c does with -ss seek to before
[22:53:39 CET] <JEEB> if the exact time doesn't have it
[22:54:00 CET] <void09> ffmpeg -i "video.ts" -ss 00:15:22.600 -to 02:19:27.760  -vcodec copy -acodec copy outputvideo.mkv
[22:54:05 CET] <void09> this is what I use now
[22:54:10 CET] <JEEB> welcome to mpeg-ts
[22:54:15 CET] <JEEB> FFmpeg does not index it
[22:54:19 CET] <JEEB> thus your seeks are inexact
[22:54:21 CET] <JEEB> gg kthx
[22:54:38 CET] <void09> but how come i get precise timestamps when seeking it in mpv ?
[22:54:42 CET] <void09> or are they not precise?
[22:54:45 CET] <JEEB> they are not
[22:54:59 CET] <JEEB> same stuff behind the scenes, possibly more trying but same API
[22:55:05 CET] <void09> cause i used the millisecond timestamps in mpv to figure out the cutting point
[22:55:42 CET] <JEEB> yea, ffmpeg.c uses AVSEEK_FLAG_BACKWARD
[22:55:46 CET] <JEEB> I just double-checked it
[22:56:01 CET] <void09> so what can i do about this.. use that slow seeking thing so that ffmpeg indexes the file ?
[22:56:09 CET] <void09> I think it was to use -ss before -i
[22:56:31 CET] <JEEB> there is no indexing, -ss after -i just used to decode until that point
[22:56:34 CET] <JEEB> which effectively worked :P
[22:56:35 CET] <void09> I got that suggestion here for making screenshots of a bluray m2ts file
[22:56:51 CET] <JEEB> ffms2 is an indexing library that utilizes FFmpeg libraries in the background
[22:56:56 CET] <void09> since by default they would come out glitchy/gray for some blurays
[22:57:20 CET] <void09> oh ok.. so what is the fix for my problem ? :)
[22:57:32 CET] <void09> just keep using -500ms +500ms as cutting points?
[22:57:40 CET] <JEEB> if you need to do copying of packets, you do your own indexing
[22:57:48 CET] <JEEB> if you need decoding, you start using ffms2
[22:58:22 CET] <void09> how do i do my own indexing?
[22:58:36 CET] <JEEB> you use the API, like ffmpeg.c (the cli app) or ffms2 do
[22:58:47 CET] <JEEB> (ffmpeg.c does no indexing but it uses the API)
[22:59:18 CET] <void09> ok that's a confusing answer
[23:00:12 CET] <JEEB> the FFmpeg API lets you do indexing, but it doesn't itself keep all the book-keeping for that frame exact use case. that is why things like ffms2 exist
[23:00:47 CET] <JEEB> like, FFmpeg has an API for seeking to a point, which is what ffmpeg.c uses - but it depends on how exact the container is
[23:00:57 CET] <JEEB> mpeg-ts has no index and thus if you get anywhere close to where you need to get, you're lucky
[23:01:35 CET] <void09> right so are you suggesting converting to mkv first, then doing the cutting ?
[23:01:36 CET] <JEEB> but if you only need to decode - ffms2 already does this. it just does not do packet copying :P
[23:01:43 CET] <void09> cause i do them all in one step
[23:02:02 CET] <JEEB> void09: mkv, mp4 or anything else with an index should get you far more exact seeking yes
[23:02:12 CET] <JEEB> mpeg-ts just happens to be an A->B container
[23:02:18 CET] <void09> ffmpeg -i "video.ts" -ss 00:15:22.600 -to 02:19:27.760  -vcodec copy -acodec copy outputvideo.mkv
[23:02:34 CET] <JEEB> where the best way to seek is through guesstimates based on file size
[23:02:34 CET] <void09> but i presume the seeking is still inaccurate, as it's done before mkv-ing it
[23:02:35 CET] <void09> right?
[23:02:38 CET] <JEEB> yes
[23:02:52 CET] <JEEB> if you need exact seeks in mpeg-ts you have to go A->B or index
[23:02:55 CET] <void09> ok so mkv first step, cutting second step
[23:03:43 CET] <void09> so ffmpeg (the cli client) can't do A->B ? (Frame accurate seeking)
[23:03:58 CET] <void09> and this is why ffms2 exists
[23:04:12 CET] <JEEB> it can, which is why if you skip -ss and stuff you can probably get a pretty good stream copy
[23:04:29 CET] <JEEB> but as soon as you get to -ss you enter av_seek_frame
[23:04:44 CET] <void09> ok then how do I do it without using -ss ?
[23:04:47 CET] <JEEB> which will attempt to seek to a packet either after or before that point as well as the container can do
[23:05:39 CET] <JEEB> void09: by writing an app that discards packets until it hits the random access point?
[23:05:53 CET] <JEEB> since I know ffmpeg.c earlier used to do decoding fine, but copying packets less so
[23:06:11 CET] <JEEB> since -ss after -i worked for *decoding* but you're doing packet copying
[23:06:25 CET] <void09> I am ? I was not aware of that
[23:06:43 CET] <JEEB> -c copy is packet copying
[23:06:55 CET] <JEEB> input parser gets you packet, output writer receives that packet
[23:07:25 CET] <void09> I don't know, its' just stuff i found by googling that worked and i stuck with it : )
[23:07:57 CET] <JEEB> with decoding it used to work that -ss after input (-i BLAH) decoded to that point, and then did whatever
[23:07:58 CET] <void09> but yes, I want no encoding, just cutting and making an mkv of the av stream. I don't like encoding much
[23:08:20 CET] <JEEB> but with having packets copied -ss only worked before input and thus on the input layer, before decoding
[23:08:32 CET] <JEEB> and thus you end up with av_seek_frame
[23:09:02 CET] <JEEB> which seeks to either after or before that point you specified depending on flags (I'm pretty sure it sets the flag to have it be before due to how I've done -c copy)
[23:09:28 CET] <JEEB> but it also means that if you have a non-exact container
[23:09:42 CET] <JEEB> then both middle fingers up for you trying to get exact seeking
[23:09:50 CET] <void09> ok, so this is what the question was about, how to make it cut before or equal to the first point , and >= the end point
[23:10:04 CET] <JEEB> it as far as I can tell is supposed to cut before
[23:10:25 CET] <JEEB> if not, that is most likely due to inexact seeking in your favourite streaming container
[23:10:59 CET] <void09> well, since mpv uses ffmpeg to play files, could it be I am not getting perfectly accurate timestamps when seeking, and thus ffmpeg behaves correctly, I just have the wrong times?
[23:11:10 CET] <void09> right..
[23:11:45 CET] <JEEB> mpv also behind the scenes uses av_seek_frame for mpeg-ts. the amount of information the parser has depends on the I/O layer, but I'd expect it to be similar
[23:11:57 CET] <JEEB> although mpv in theory could be using byte wise seeking
[23:12:07 CET] <JEEB> i haven't checked if anyone cared to implement that
[23:12:20 CET] <JEEB> probably not since mpeg-ts with timestamps going 'round doesn't seek too well
[23:12:30 CET] <JEEB> (mpeg-ts goes up to 26.5 or so hours)
[23:12:37 CET] <JEEB> and then the timestamps wrap around
[23:12:43 CET] <void09> oh I didn't know that
[23:12:45 CET] <JEEB> you can guess how fun seeking around that with timestamps is :P
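[The 26.5-hour figure comes from MPEG-TS carrying PTS/DTS as 33-bit tick counters at 90 kHz. A quick check of the arithmetic, plus a wrap-tolerant difference helper (an illustration, not FFmpeg code):]

```python
# MPEG-TS PTS/DTS are 33-bit counts of a 90 kHz clock, so they wrap around.
PTS_MOD = 2 ** 33          # 8_589_934_592 ticks
CLOCK = 90_000             # ticks per second

hours_until_wrap = PTS_MOD / CLOCK / 3600
print(round(hours_until_wrap, 1))   # -> 26.5, the figure quoted above

def pts_delta(later, earlier):
    """Difference between two PTS values, tolerating one wraparound."""
    return (later - earlier) % PTS_MOD

# A timestamp just after a wrap still compares sanely against one just before:
pts_delta(100, PTS_MOD - 50)   # -> 150 ticks
```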
[23:13:38 CET] <void09> well, what I did until now, subtract/add ~500 something ms from begin/end vs the timestamps mpv shows, always got me the correct result
[23:13:46 CET] <void09> and i've done at least 50 cuts
[23:14:26 CET] <void09> so I assumed it seeks to the nearest keyframe instead of including it
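[void09's ±500 ms guard can be automated with a small helper for the HH:MM:SS.mmm form that -ss/-to accept. This is convenience code written for this log, not part of ffmpeg:]

```python
def shift_ts(ts, delta_ms):
    """Shift an HH:MM:SS.mmm timestamp by delta_ms milliseconds (floor at 0)."""
    h, m, s = ts.split(":")
    total_ms = (int(h) * 3600 + int(m) * 60) * 1000 + round(float(s) * 1000)
    total_ms = max(0, total_ms + delta_ms)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    return f"{h:02d}:{m:02d}:{rem // 1000:02d}.{rem % 1000:03d}"

# Widening the cut points from the command earlier by the 500 ms guard:
shift_ts("00:15:22.600", -500)   # -> "00:15:22.100"
shift_ts("02:19:27.760", +500)   # -> "02:19:28.260"
```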
[23:14:29 CET] <JEEB> you can do ffprobe -of json -show_streams -show_programs -show_packets -i INPUT > blah.json
[23:14:40 CET] <void09> where can i see the possible flags for this seeking stuff ?
[23:14:40 CET] <JEEB> and then see the packets in that JSON output
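[The ffprobe JSON JEEB suggests can be turned into a crude seek table by collecting the keyframe-flagged video packets. This assumes ffprobe's usual `packets` array with `codec_type`, `pts_time`, `pos`, and `flags` fields; the sample below is a hand-written, heavily trimmed stand-in for real output:]

```python
import json

# Hypothetical slice of `ffprobe -of json -show_packets` output; real output
# carries many more fields per packet.
sample = """{
  "packets": [
    {"codec_type": "audio", "pts_time": "0.003000", "pos": "564",   "flags": "K_"},
    {"codec_type": "video", "pts_time": "0.040000", "pos": "1128",  "flags": "K_"},
    {"codec_type": "video", "pts_time": "0.080000", "pos": "9024",  "flags": "__"},
    {"codec_type": "video", "pts_time": "2.040000", "pos": "90240", "flags": "K_"}
  ]
}"""

def keyframe_index(probe_json):
    """(pts seconds, byte offset) for every keyframe-flagged video packet."""
    pkts = json.loads(probe_json)["packets"]
    return [(float(p["pts_time"]), int(p["pos"]))
            for p in pkts
            if p["codec_type"] == "video" and p["flags"].startswith("K")]

keyframe_index(sample)   # -> [(0.04, 1128), (2.04, 90240)]
```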
[23:15:38 CET] <JEEB> ah right, ffmpeg.c uses avformat_seek_file
[23:15:53 CET] <JEEB> but still with the parameters it utilizes it should seek before
[23:16:05 CET] <JEEB> ret = avformat_seek_file(is, -1, INT64_MIN, is->start_time, is->start_time, 0);
[23:16:11 CET] <JEEB> start_time is what you put into -ss
[23:16:50 CET] <JEEB> and given that -ss has generally worked for me as I expected, I would guesstimate that it actually does that and the container is making it derp
[23:16:58 CET] <JEEB> since with mpeg-ts seeking is all sorts of guesstimation
[23:17:06 CET] <JEEB> welcome to containers without an index :P
[23:17:46 CET] <JEEB> basically this is why ffms2 exists for people who want to do frame exact seeking and decoding
[23:18:22 CET] <void09> and to think they put something like that in blurays :\
[23:18:30 CET] <JEEB> yes
[23:18:53 CET] <void09> i don't know about the m2ts overhead, but the .ts one is pretty ridiculous
[23:19:01 CET] <void09> i've seen 5-10% space overhead
[23:19:07 CET] <JEEB> same format
[23:19:23 CET] <JEEB> blu-ray just has a 4 byte timestamp after each packet or something
[23:19:27 CET] <JEEB> so 188+4
[23:19:30 CET] <JEEB> as opposed to 188
[23:19:41 CET] <void09> so it has timestamps ?
[23:19:50 CET] <JEEB> even standard mpeg-ts has timestamps yes
[23:19:52 CET] <JEEB> but no index
[23:20:02 CET] <JEEB> neither does the blu-ray 192 byte variant (188+4)
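[Back-of-envelope numbers for the 188- vs 192-byte packets just mentioned. Note this counts only the fixed per-packet bytes; the 5-10% void09 observed would also include PES headers, PSI tables, null packets and padding, which this ignores:]

```python
# A TS packet is 188 bytes (4-byte header + 184-byte payload); the Blu-ray
# m2ts variant adds a 4-byte timestamp per packet, giving 192 bytes.
TS_PACKET, TS_HEADER, M2TS_EXTRA = 188, 4, 4

header_overhead = TS_HEADER / TS_PACKET                 # plain .ts headers
m2ts_overhead = M2TS_EXTRA / (TS_PACKET + M2TS_EXTRA)   # extra m2ts bytes

print(f"{header_overhead:.1%} + {m2ts_overhead:.1%}")   # roughly 2% each
```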
[23:20:14 CET] <JEEB> or well, blu-ray has playlists which have chapters
[23:20:17 CET] <void09> so when I am recording, and the rec software saves a .ts.. does it write its own timestamps ?
[23:20:23 CET] <JEEB> which point towards a BYTE offset
[23:20:42 CET] <JEEB> so literally blu-ray playlists say "byte X from file Y"
[23:20:43 CET] <JEEB> or so
[23:20:44 CET] <JEEB> :P
[23:20:48 CET] <void09> I did the json packet thing with ffprobe you mentioned, and the first one i get is an audio packet with : "pts_time": "0.003000",
[23:21:15 CET] <JEEB> yes, they come in the order that the parser reads them
[23:21:55 CET] <void09> I wonder if it's possible to record tv streams directly to mkv
[23:22:13 CET] <void09> probably not since it does not know when it will end?
[23:22:13 CET] <JEEB> I don't see any reason not to, although generally I like to dump the broadcast as-is
[23:22:22 CET] <JEEB> matroska does not need a duration
[23:22:30 CET] <JEEB> you need to write the index at some point at the end
[23:22:33 CET] <JEEB> if you need exact seeking
[23:22:42 CET] <JEEB> but you can keep writing it through similarly to mpeg-ts
[23:22:47 CET] <kepstin> you don't need accurate seeking for just skipping around in bd playback, just go forward/back some number of bytes based on the estimated bitrate, find the next keyframe, and start playing.
[23:22:56 CET] <JEEB> yes
[23:22:59 CET] <kepstin> good enough for a consumer player skip feature
[23:23:03 CET] <JEEB> it's mostly bit rate based
[23:23:10 CET] <JEEB> since all of the blu-rays generally are very static in bit rate
[23:23:29 CET] <JEEB> and playlists have byte offsets so those longer seeks work like that
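[kepstin's bitrate-based skip can be written down directly: scale the target time by the average byte rate, align to a packet boundary, then scan forward for the next random access point. A sketch under the constant-bitrate assumption JEEB mentions:]

```python
TS_PACKET = 188

def guess_offset(target_seconds, total_bytes, total_seconds):
    """Estimate a byte offset for a target time, assuming roughly constant
    bitrate, aligned down to a TS packet boundary. A player then scans
    forward from here for the next keyframe and starts decoding."""
    byte_rate = total_bytes / total_seconds
    offset = int(target_seconds * byte_rate)
    return offset - offset % TS_PACKET

# e.g. jumping to 1:00:00 in a ~25 GB, 2-hour disc lands near the midpoint:
guess_offset(3600, 25_000_000_000, 7200)
```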
[23:27:32 CET] <void09> ok i added the -stats option somebody mentioned before to find errors, and guess what, my error is right in the last second of the file
[23:27:58 CET] <void09> [aac @ 0x56310cc43800] channel element 0.0 is not allocatede=N/A speed=27.4x
[23:27:58 CET] <void09> Error while decoding stream #0:1: Invalid data found when processing input
[23:27:58 CET] <void09> frame=241207 fps=802 q=-0.0 Lsize=N/A time=02:40:48.84 bitrate=N/A speed=32.1x
[23:28:03 CET] <JEEB> yea you can also get errors at the end of streams where the decoder expects more packets yet you ran out
[23:28:14 CET] <JEEB> I've esp. had that with AC-3
[23:29:05 CET] <void09> yes but it was a stream cut with ffmpeg..
[23:29:28 CET] <void09> oh, so you mean, ac3/aac work like keyframes in videos do ?
[23:30:10 CET] <void09> and since the sound follows the video, when copying packets, it might happen to hit a non-key frame sound packet ? :)
[23:30:38 CET] <JEEB> well stream copy generally starts with a random access point, but you might stop at a random packet
[23:30:38 CET] <void09> I mean, the sound also has its own time stamps, right ? in the .ts
[23:30:56 CET] <JEEB> and if you just told ffmpeg.c to stop and decoder is after packet handling
[23:31:07 CET] <JEEB> -> you can stop before a full audio frame has been decoded
[23:31:31 CET] <JEEB> more likely with mpeg-ts input where you might or might not get a full PES packet
[23:31:39 CET] <JEEB> when you stop recording your input
[23:33:25 CET] <void09> well, I have 15 minutes of before/after padding in my recordings, cause sometimes they're not right on time (still miss some even with that, but I can't add more as it would be impractical)
[23:34:29 CET] <void09> what I do now to remove the <=24 frames or so of possible extra frames i get at the beginning/end after doing the ffmpeg cut and mkv conversion, is add a chapter to the mkv, and specify begin/end timestamps so it ignores the extra frames
[23:35:13 CET] <void09> I wonder if ffmpeg ignores chapters present in mkv, unlike players
[23:35:39 CET] <JEEB> if I need to process mpeg-ts I would have either indexed or done A->B reading for copying packets
[23:35:55 CET] <JEEB> depending on how exact etc cuts have to be
[23:36:02 CET] <JEEB> and how much overhead doing A->B always would bring :P
[23:36:09 CET] <void09> processing overhead?
[23:36:29 CET] <JEEB> indexing is mostly I/O
[23:36:31 CET] <void09> not much,  I have 3GB/s nvme and 8 core
[23:36:49 CET] <JEEB> but yea, you need your own API client to get that doje
[23:36:51 CET] <JEEB> *done
[23:37:00 CET] <JEEB> ffmpeg.c is not going to do that for you out of the box
[23:37:09 CET] <JEEB> I usually copy from mpeg-ts without seeks etc to mp4
[23:37:10 CET] <void09> sigh. such basic stuff missing :)
[23:37:17 CET] <JEEB> not really basic is it?
[23:37:29 CET] <JEEB> having to index a whole input is not what lavf expects
[23:37:30 CET] <void09> of course it is, first use case i have for ffmpeg. and it can't do it (precisely)
[23:37:48 CET] <void09> but i will try mkving the whole thing first, and then cutting
[23:38:03 CET] <JEEB> most people do either A->B stuff, or use indexed containers
[23:38:12 CET] <void09> JEEB: cuts can only really be done on keyframes, no ? without re-encoding
[23:38:24 CET] <void09> or I-frames
[23:38:34 CET] <JEEB> starting with random access points yes, the flag on ffprobe's packets is keyframe yes
[23:38:39 CET] <JEEB> since the naming is from simpler times
[23:38:48 CET] <JEEB> when "keyframe" was still a random access point and vice versa :P
[23:39:15 CET] <JEEB> void09: keyframe/I frame just means that the single picture can be decoded since it's most likely intra
[23:39:23 CET] <JEEB> it gives no promises about latter frames
[23:39:36 CET] <JEEB> that is why the wording "random access point" is what people use now
[23:39:46 CET] <void09> oh ok
[23:39:51 CET] <JEEB> a random access point promises decode'ability of things onwards from that point, too
[23:40:01 CET] <void09> so there's no way around it. other option would have been to cut at "random access point" and encode the few frames outside of it
[23:40:33 CET] <void09> or the way i do it, ignore the max 50 (average 25) spaced used by extra frames and implement a chapter to skip them
[23:41:13 CET] <void09> I looked into the first option but it seemed to be a big hassle, video encoder needing the same encoding parameters used for the rest of the stream
[23:42:59 CET] <void09> I just realised that error log I pasted earlier actually told me the total video time, and not the time at which the error occurred
[23:43:15 CET] <void09> [aac @ 0x56310cc43800] channel element 0.0 is not allocatede=N/A speed=27.4x
[23:43:15 CET] <void09> Error while decoding stream #0:1: Invalid data found when processing input
[23:43:15 CET] <void09> frame=241207 fps=802 q=-0.0 Lsize=N/A time=02:40:48.84 bitrate=N/A speed=32.1x
[23:43:50 CET] <void09> let's try with -debug_ts
[23:43:54 CET] <JEEB> void09: for the record, I did write myself a thing that would attempt to get me a specific frame at a timestamp without using ffms2
[23:43:59 CET] <JEEB> a few years ago
[23:44:23 CET] <JEEB> that just solidified my understanding of "lol mpeg-ts"
[23:44:36 CET] <JEEB> because I could get really nice seeks there :P
[23:44:36 CET] <void09> so furq adding -stats did nothing for it
[23:44:50 CET] <JEEB> if you need to handle mpeg-ts packet-exact
[23:44:52 CET] <JEEB> you do indexing
[23:45:11 CET] <void09> I need time-exact :)
[23:45:19 CET] <JEEB> well yea
[23:45:35 CET] <JEEB> in other words, you either need A->B (if you are sure the time is there in that clip) or indexing
[23:45:46 CET] <JEEB> indexing is quicker after you go through the whole shebang once
[23:46:22 CET] <JEEB> ffmpeg.c does not do this for you since it is just a thin wrapper on the FFmpeg APIs. you can do indexing with the APIs but it's not something that the framework does for you.
[23:46:42 CET] <JEEB> (ffms2 is an example of people wanting frame exact access)
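["Do your own indexing" amounts to: demux the whole file once, record (pts, byte offset) for each random access point — e.g. from the ffprobe packet output discussed earlier — then binary-search that table on later seeks. A sketch of the lookup side in pure Python; the real work of demuxing would go through the FFmpeg APIs:]

```python
import bisect

class SeekIndex:
    """Maps a target time to the last random access point at/before it."""

    def __init__(self, entries):
        # entries: (pts_seconds, byte_offset) pairs, one per keyframe packet
        self.entries = sorted(entries)
        self.times = [t for t, _ in self.entries]

    def seek(self, target):
        i = bisect.bisect_right(self.times, target) - 1
        if i < 0:
            raise ValueError("target before first random access point")
        return self.entries[i]

idx = SeekIndex([(0.0, 564), (2.0, 90240), (4.0, 180480)])
idx.seek(3.5)   # -> (2.0, 90240): decode from here, discard frames up to 3.5 s
```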
[23:46:44 CET] <void09> Right, I fully understand now, after much initial confusion
[23:47:09 CET] <JEEB> but yea, what writing a test app myself taught to me with mpeg-ts and trying to get to a specific time
[23:47:12 CET] <JEEB> is that you should index
[23:47:13 CET] <JEEB> :D
[23:47:58 CET] <JEEB> also if you think you can handle seeking better in the mpeg-ts module
[23:48:04 CET] <JEEB> that is also open source of course
[23:48:11 CET] <JEEB> so patches are very much welcome
[23:49:45 CET] <void09> I'm not a coder
[23:49:54 CET] <void09> well, i am, but a very noob/slow/limited one
[23:52:21 CET] <JEEB> ah, right
[23:52:33 CET] <JEEB> libavformat/mpegts.c actually has no seek function
[23:52:51 CET] <JEEB> thus it is the general libavformat framework trying to come up with wtf to do
[23:52:52 CET] <JEEB> :)
[23:53:40 CET] <JEEB> you could in theory add a seek function that then attempts to do things, but that can lead to fun side effects
[23:54:34 CET] <void09> [h264 @ 0x5576bbd3f3c0] error while decoding MB 0 30, bytestream 2375
[23:54:47 CET] <void09> ok, what does that mean.. i used debug_ts
[23:55:10 CET] <void09> or this: [h264 @ 0x5576bbbc3c80] error while decoding MB 109 63, bytestream 2037
[23:55:12 CET] <JEEB> that is unrelated to debug_ts, just an error while trying to get an image decoded from a video packet
[23:55:28 CET] <JEEB> if that happens outside of start and end of stream, it might be bad
[23:55:55 CET] <JEEB> (because live streams can start and stop at random points)
[23:56:00 CET] <void09> oh wait you are right.. so debug_ts did NOTHING
[23:56:11 CET] <void09> maybe because i ran it on an mkv ?
[23:56:24 CET] <JEEB> debug_ts will give you full debug lines of ffmpeg.c passing things around
[23:56:34 CET] <JEEB> so you can see with what sort of timestamps it read a packet
[23:56:42 CET] <JEEB> how it passed it on/normalized it etc
[23:56:55 CET] <JEEB> since at one point you were asking about exact packet logging
[23:56:56 CET] <void09> ffmpeg -v error -debug_ts -i "file.mkv" -f null - 2>test2.log
[23:56:58 CET] <JEEB> for timestamps etc
[23:57:00 CET] <void09> have i used it wrong ?
[23:57:25 CET] <JEEB> void09: a lot of -debug_ts logging is under normal info log level
[23:57:35 CET] <JEEB> so if you limit yourself to errors or more important only
[23:57:39 CET] <JEEB> you do not get them :P
[23:58:14 CET] <void09> ohhh
[23:58:21 CET] <JEEB> anyways, g'night. if you need indexing packet copying, do that through either mp4 or mkv in the middle
[23:58:34 CET] <void09> right, thanks a bunch for .nfo
[00:00:00 CET] --- Fri Nov 22 2019