[Ffmpeg-devel-irc] ffmpeg.log.20160803
burek
burek021 at gmail.com
Thu Aug 4 03:05:02 EEST 2016
[00:04:21 CEST] <yoan> Hey guys, I need to smooth the brightness of 500 jpeg pictures to create a clean timelapse, what option would you use?
[00:05:22 CEST] <yoan> I found -morph that might do the job but I don't want to add new pictures, just adjust brightness of each picture according to the previous and the next ones
[00:08:25 CEST] <yoan> Oops I wanted to join #imagemagick instead :)
[00:09:14 CEST] <yoan> Anyway if you have advice on making nice (and smooth) timelapses from HD webcam pictures feel free to tell :)
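For the archive reader: newer ffmpeg builds (3.4 and later) can do the brightness smoothing yoan asked about directly, via the deflicker filter. A minimal sketch, in which the first command merely synthesizes numbered JPEGs as a stand-in for the real webcam shots, and the filter parameters are illustrative defaults, not a tuned recommendation:

```shell
# Synthesize numbered JPEGs standing in for the real webcam pictures.
ffmpeg -v error -f lavfi -i testsrc=duration=2:rate=5 img%04d.jpg
# Smooth frame-to-frame brightness over a sliding window of 5 frames
# (mode=pm averages in the power mean sense) and encode the timelapse.
ffmpeg -v error -framerate 24 -i img%04d.jpg \
       -vf "deflicker=mode=pm:size=5" -pix_fmt yuv420p timelapse.mp4
```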
[05:04:30 CEST] <Legimet> Hi
[05:05:39 CEST] <Legimet> I'm using eye tracking glasses which can be made to output raw H.264 packets
[05:06:03 CEST] <Legimet> and I'm trying to figure out how I can combine these files into an mp4
[05:07:50 CEST] <Legimet> Each frame is in a separate file
[05:09:01 CEST] <Legimet> my attempt to do this: http://pastebin.com/w6dHa0fR
[05:09:17 CEST] <Legimet> any ideas?
[06:06:37 CEST] <alyawn> I used git bisect to figure out where RTSP broke for me
[06:07:05 CEST] <alyawn> this is the first 'bad' commit: http://pastebin.com/wJnnH21b
[06:34:07 CEST] <alyawn> if I comment out this line https://github.com/FFmpeg/FFmpeg/blob/00e122bc0f2a4d867797f593770f9902f275b864/libavformat/rtsp.c#L839 it no longer crashes
[10:09:30 CEST] <kdehl> Um. So what is cbk in the openh264 API?
[12:59:48 CEST] <whald> hi! is there an alternative to av_read_frame which does not read full (video-) frames but gives back smaller units like h264 slices which can be fed to a decoder?
[13:00:58 CEST] <BtbN> I don't think there is anything in lavc capable of doing so. Unless you add an actual hwaccel to lavc itself.
[13:01:28 CEST] <whald> (i'm trying to get my latency down and also to spread out the cpu load, so i don't have to read a full frame and then decode it but rather have reading off the network and decoding interleaved)
[13:02:00 CEST] <whald> BtbN, how does that relate to hwaccels? could you please elaborate?
[13:02:22 CEST] <flux> whald, maybe you could look into how av_read_frame does its magic and work on that?
[13:04:42 CEST] <whald> flux, seems i'll have to go that route. just thought someone here has already been there. :-)
[13:05:04 CEST] <bencoh> you want a "framer" that outputs slices instead of frames, basically?
[13:06:04 CEST] <whald> bencoh, i think so?
[13:06:38 CEST] <BtbN> That's all internal to the lavc h264 decoder. Only the hwaccels get access to the slice data, as they are essentially a part of it.
[13:08:37 CEST] <whald> i'm not exactly sure how all this fits together, but maybe i don't actually need slices but rather "demuxed bitstream", and the h264 dec would figure out the slice boundaries or whatever itself anyway. true?
[13:08:50 CEST] <BtbN> I'm not sure if you can register a new hwaccel from outside of lavc. But I'm quite sure the required functions and structures for that are private api.
[13:09:12 CEST] <bencoh> what is "demuxed" bitstream?
[13:09:36 CEST] <BtbN> Probably what falls out of lavformat?
[13:09:43 CEST] <whald> bencoh, i mean the raw h264 bitstream, without the mpegts headers (i'm using mostly mpegts)
[13:10:00 CEST] <bencoh> then you just need a regular demuxer/framer, but one that doesn't add latency
[13:10:01 CEST] <BtbN> If your decoder is fine with that, sure. But most hardware accelerators aren't.
[13:10:31 CEST] <bencoh> whald: how do you use avformat? feed it with buffers, or straight from network?
[13:10:32 CEST] <BtbN> Might want to use the annexb bsf on top.
[13:12:21 CEST] <whald> bencoh, i'm letting avformat do the network reading and then use av_read_frame and feed that to avcodec_send_packet followed by avcodec_receive_frame
[13:12:50 CEST] <bencoh> yeah well then you already get a "huge" latency/buffering hit there
[13:13:05 CEST] <whald> bencoh, my observation was that every send_packet is followed by a successful receive_frame, which made me wonder how that can be.
[13:13:14 CEST] <bencoh> I dont like advertising here but you might want to have a look at upipe.org and its mpegts demuxer / h264 framer at some point ....
[13:15:10 CEST] <BtbN> And why not just use lavf? Sounds like what you want. It does not do anything to the bitstream though, it just demuxes it.
[13:15:39 CEST] <BtbN> lavc takes the bitstream, and gives you decoded frames.
[13:15:48 CEST] <BtbN> So if you want the bitstream, don't use it?
[13:16:33 CEST] <whald> bencoh, i'm having a look. maybe you can tell me why i would get that "huge" latency hit? is there anything besides that the demuxer seems to buffer a full frame?
[13:19:42 CEST] <bencoh> haven't had a look at that for a long time (>year), so ...
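The "just take what lavf demuxes" route that BtbN describes can be sketched like this; the test input generated by the first command is a stand-in for whald's MPEG-TS source, and for actual MPEG-TS input the stream is already Annex B, so the bitstream filter is effectively a no-op there:

```shell
# Stand-in input (MP4, which stores H.264 in length-prefixed form).
ffmpeg -v error -f lavfi -i testsrc=duration=1:rate=10 -c:v libx264 in.mp4
# Dump the raw Annex B H.264 elementary stream without decoding:
# stream copy plus the h264_mp4toannexb bitstream filter.
ffmpeg -v error -i in.mp4 -map 0:v:0 -c:v copy -bsf:v h264_mp4toannexb -f h264 raw.h264
```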
[14:28:08 CEST] <jn_jn> hi all, :D can someone help me improve this command? http://pastebin.com/j15dDTsu
[14:40:20 CEST] <biapy> hi
[14:46:15 CEST] <jn_jn> hi
[15:17:05 CEST] <DHE> jn_jn: so what's wrong with it?
[15:17:26 CEST] <DHE> only thing that stands out is you didn't really specify a primary bitrate
[15:54:28 CEST] <kdehl> So, um, you guys seem to know H.264 pretty well, so I'll just throw this question out here. I'm confused: One frame can contain several bitstreams which in turn consist of several NALs?
[15:55:02 CEST] <kdehl> What is the difference between a bitstream and a NAL anyway?
[15:55:26 CEST] <JEEB> bitstream is a single stream of things, usually a single stream (video, audio, subtitles)...
[15:55:33 CEST] <JEEB> NAL is an AVC, HEVC thing
[15:55:43 CEST] <JEEB> a packet, so to speak
[15:56:21 CEST] <JEEB> you have parameter sets (setup packets in a way), SEI (extra info), VCL (Video Coding Layer) NALs
[15:56:24 CEST] <JEEB> as some examples
[15:56:31 CEST] <JEEB> last one actually contains compressed pictures
[15:56:39 CEST] <kdehl> Aha. So an H.264 frame can contain both a video and an audio stream, for example?
[15:56:42 CEST] <JEEB> no
[15:56:49 CEST] <kdehl> Hm. Okay.
[15:56:52 CEST] <JEEB> usually a bit stream is a single parseable thing
[15:57:02 CEST] <JEEB> either a container or a single stream that would otherwise be put into a container
[15:57:05 CEST] <kdehl> Alright.
[15:57:17 CEST] <JEEB> usually you mean a single stream's bit stream
[15:58:07 CEST] <kdehl> Hm. Okay.
[16:02:05 CEST] <kdehl> So a single frame does not consist of several bitstreams?
[16:03:17 CEST] <DHE> a container is something like MP4, MKV, and so on. it contains several streams (video, audio, alternative language audio, subtitles, and so on)
[16:03:26 CEST] <kdehl> This is a code snippet that I have added (probably inaccurate) printfs to:
[16:03:28 CEST] <kdehl> https://paste.kde.org/pygsc2r0r
[16:03:35 CEST] <DHE> as a codec H264 has per-frame chunking for its own metadata divisions
[16:03:46 CEST] <kdehl> DHE: Right.
[16:18:32 CEST] <jn_jn> DHE, sorry for the late reply, what do you mean by specify a primary bitrate?
[16:19:05 CEST] <DHE> jn_jn: -b:v parameter
[16:21:12 CEST] <jn_jn> ok, but in this case, since i'm a noob in this kind of things, which one would be the right parameter? ty in advance
[16:21:42 CEST] <DHE> if you're using -maxrate you're probably looking for cbr or constrained vbr, in which case I usually go with the same as maxrate
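DHE's suggestion written out as a command: set the average bitrate equal to maxrate for constrained VBR / near-CBR. The rates here are placeholders rather than a recommendation for jn_jn's content, the first command merely synthesizes an input, and bufsize controls how strictly the maxrate ceiling is enforced:

```shell
# Stand-in input file.
ffmpeg -v error -f lavfi -i testsrc=duration=2:rate=10 in.mp4
# Constrained VBR: average (-b:v) equal to the ceiling (-maxrate),
# with a VBV buffer (-bufsize) governing short-term overshoot.
ffmpeg -v error -i in.mp4 -c:v libx264 -b:v 500k -maxrate 500k -bufsize 1M out.mp4
```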
[16:44:38 CEST] <doktorodd> Hi there! I am having a trouble using a pipe input. I am trying to receive an RTMP stream via netcat (nc -l -vvv -p 1935) and pipe it to ffmpeg (ffmpeg -v debug -y -analyzeduration 15M -probesize 15M -i pipe:0 out.mp4) - I see that my client connects but then all I get is `pipe:0: Invalid data found when processing input`
[17:32:24 CEST] <cryptopsy> ffprobe is showing a field 'lyrics-None-eng' , is that the name of the field in the id3v2 spec?
[17:33:04 CEST] <cryptopsy> i don't remember that being in the spec, i thought it was called 'Comment'
[17:33:19 CEST] <cryptopsy> but its aligned with date, encoder, title, album, artist, and the other fields
[17:34:14 CEST] <cryptopsy> i would like to edit such a field on other files, how can i do that?
[18:38:47 CEST] <bp0> cryptopsy, try mp3tag?
[18:40:48 CEST] <cryptopsy> i dont see the lyrics "tag" in ncmpcpp
[18:41:00 CEST] <cryptopsy> and the comment tag in ncmpcpp is showing as empty, what is it?
[18:43:05 CEST] <bp0> or similar tool specifically for that
[18:46:38 CEST] <bp0> try something like exiftool or mutagen-inspect to dump and see the actual tag used
[18:47:45 CEST] <cryptopsy> exiftool shows the field as 'User Defined Text'
[18:47:55 CEST] <cryptopsy> what standard specifies this field?
[18:51:51 CEST] <c_14> cryptopsy: http://id3.org/id3v2.3.0#Unsychronised_lyrics.2Ftext_transcription ?
[18:52:08 CEST] <cryptopsy> id3v2, ok i thought it could have been an earlier version
[18:52:38 CEST] <c_14> The formal standard is here http://id3.org/id3v2-00
[18:53:10 CEST] <c_14> v1 only has comment afaik
[18:54:49 CEST] <cryptopsy> i did not know exif can look at mp3 files
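For cryptopsy's actual goal of editing such a field on other files, ffmpeg itself can rewrite tags without re-encoding. A sketch, with the caveats that the key name "lyrics-eng" mirrors what ffprobe reported and is an assumption, and whether it round-trips into an actual USLT frame depends on the build; the first command makes a 1-second MP3 (this needs an MP3 encoder such as libmp3lame) as a stand-in:

```shell
# Stand-in MP3 file.
ffmpeg -v error -f lavfi -i "sine=frequency=440:duration=1" in.mp3
# Rewrite the tag with stream copy, leaving the audio bytes untouched.
ffmpeg -v error -i in.mp3 -c copy -metadata lyrics-eng="la la la" out.mp3
# Inspect what was actually written.
ffprobe -v error -show_entries format_tags out.mp3
```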
[19:08:38 CEST] <wallbroken> c_14, is there better precision with 32bit ffmpeg or 64bit ffmpeg?
[19:08:59 CEST] <c_14> Shouldn't matter (except maybe in asm)
[19:09:37 CEST] <c_14> There may be a few locations where ffmpeg doesn't use fixed-length data types, but it shouldn't be important in those cases.
[19:09:48 CEST] <wallbroken> c_14, somebody told me that 32bit has more precision
[19:10:11 CEST] <c_14> Why would 32bit have more precision?
[19:10:13 CEST] <c_14> There are fewer bits.
[19:10:45 CEST] <wallbroken> <jkqxz> No, 32-bit has more precision. It uses the legacy x87 FPU which keeps things in 80-bit internally, while SSE gives you the normal 64-bit precision.
[19:11:53 CEST] <c_14> internal-precision usually isn't that important since you'll lose most (if not all) of it during rounding to get back to 32 (or 64) bit
[19:12:07 CEST] <c_14> Depends maybe on how many operations you can chain without rounding
[19:13:06 CEST] <wallbroken> c_14, i need to choose between 32bit and 64 and i do not care about the encoding speed
[19:14:14 CEST] <c_14> I don't think it'll matter, honestly.
[19:15:04 CEST] <wallbroken> i need to choose, should i just flip a coin?
[19:15:28 CEST] <c_14> Whatever's cheaper/easier for you.
[19:15:31 CEST] <klaxa> 32 bit will probably be unsupported earlier than 64 bit (just a guess)
[19:19:10 CEST] <wallbroken> unsupported by what?
[19:19:37 CEST] <klaxa> linux distros maybe? i don't know
[19:19:48 CEST] <DHE> you can still compile with -m32
[19:48:47 CEST] <CFS-MP3> Are the DVB subtitles known to use a larger font in ffmpeg than they do on a TV? I'm comparing the output from [0:v][0:s]overlay[v] which I thought would provide the exact image as on a TV but the font in this case is larger
[19:56:29 CEST] <CFS-MP3> scratch the question, it cannot be a font issue since DVB is bitmap based... so I'm quite puzzled here
[19:57:27 CEST] <klaxa> maybe your dvd player/bluray player/whatever scales differently than ffmpeg
[20:01:57 CEST] <CFS-MP3> klaxa it's a TV :-)
[20:02:39 CEST] <klaxa> then it falls in the category "whatever"
[20:25:46 CEST] <dexstair> Hey guys, I'm trying to use ffmpeg to test out vaapi encoding. I installed ffmpeg with ./configure --enable-vaapi but when I try to pass a video to ffmpeg I get an unrecognised option vaapi_device. am I missing something?
[20:29:40 CEST] <kepstin> dexstair: you're probably just running the wrong ffmpeg executable (maybe an old system one rather than the one you just built)
[20:30:58 CEST] <dexstair> ahh, where would the one I built be? I'm getting better at using 'nix, but I'm still a little clueless about this stuff. I was a little worried about building and installing ffmpeg, knowing that I already had one installed
[20:32:35 CEST] <dexstair> the executable in /usr/local/bin gives me the same issues
[20:35:03 CEST] <kepstin> dexstair: did you install the newly built one?
[20:35:15 CEST] <dexstair> sudo make install, yup
[20:35:25 CEST] <kepstin> you can try running it directly from the build directory (e.g. with a full path, or cd there and run ./ffmpeg)
[20:35:35 CEST] <dexstair> I tried that
[20:36:30 CEST] <kepstin> hmm. where did you get the ffmpeg source? is it a 3.1 (or 3.1.1) tarball? or git?
[20:36:30 CEST] <dexstair> my only steps for installing were to cd into the newer ffmpeg directory, ./configure --enable-vaapi, make, sudo make install
[20:36:46 CEST] <jkqxz> Did you actually build the latest version? (Ideally straight from git, though 3.1.1 is also new enough.)
[20:37:08 CEST] <dexstair> I'm using Fedora 24, have I missed out building an encoder?
[20:37:18 CEST] <dexstair> 3.1.1 is what's installed
[20:38:06 CEST] <dexstair> straight from the ffmpeg website.
[20:38:16 CEST] <dexstair> are there any logs I can show you?
[20:38:28 CEST] <kepstin> hmm. I wonder if you're missing some dependencies, and --enable-vaapi silently ignored that rather than caused configure to fail
[20:38:40 CEST] <kepstin> dexstair: pastebin the output from ./configure --enable-vaapi maybe?
[20:39:24 CEST] <jkqxz> The configure output should show "h264_vaapi" in the encoders section. If it doesn't, you can look in config.log to find out why not.
[20:41:49 CEST] <dexstair> http://pastebin.com/Vkb685Mi I'm getting fairly proficient in the cli. amusing myself with stdin stdout. anywho, here's the log.
[20:43:46 CEST] <kepstin> yeah, it didn't enable any of the hwaccels there
[20:44:00 CEST] <kepstin> you're probably missing the vaapi system libraries
[20:44:34 CEST] <kepstin> (imo, that really should be an error if you use --enable-vaapi, but ffmpeg's configure script is... rather nonstandard)
[20:44:54 CEST] <dexstair> http://pastebin.com/g7vJtaXJ config.log
[20:45:44 CEST] <dexstair> I have the intel drivers installed and I'm able to do a rudimentary vaapi encode within transmageddon
[20:47:01 CEST] <kepstin> you said you're on fedora, right? try installing the 'libva-devel' package
[20:47:10 CEST] <dexstair> cheers
[20:48:27 CEST] <dexstair> installed, I'll try building again
[20:49:50 CEST] <dexstair> I can see vaapi encoders listed in the ./configure output
[21:13:50 CEST] <dexstair> vaapi encoding is working, but 1080p content still uses a fair amount of cpu
[21:14:07 CEST] <dexstair> the one error/warning I get is [mp4 @ 0x2f40fa0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
[21:16:07 CEST] <jkqxz> What are you using to do the decode? A full hardware transcode will run with close to zero CPU use, but if the decode isn't in hardware as well then you will quickly be constrained by that.
[21:16:55 CEST] <dexstair> ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -i /mnt/f892940f-33f7-47ae-9fd5-140c53b4b697/Videos/Anime/Paranoia\ Agent\[BDRIP\]\[1080P\]\[H264\(Hi10P\)_FLAC\]\ \(with\ a4e\ subs\)/\[Paranoia\ Agent\]\[01\]\[BDRIP\]\[1080P\]\[H264\(Hi10P\)_FLAC\]\ \(with\ a4e\ subs\).mkv -an -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi output.mp4
[21:17:16 CEST] <kepstin> dexstair: vaapi decoder doesn't support 10bit, so you're out of luck there :/
[21:17:23 CEST] <kepstin> (it's a hardware limitation)
[21:17:33 CEST] <dexstair> Haha, okay :) makes sense.
[21:18:22 CEST] <dexstair> Thank you for all the help :D I'm now up and running, and you're all incredibly helpful people.
[21:18:54 CEST] <kepstin> keep in mind that the hardware h264 encoders are generally pretty fast, but not particularly efficient. In most cases (unless you're really cpu/speed limited for some reason), x264 is a better option.
[21:20:02 CEST] <dexstair> I have a bunch of old media that I need to re-encode. I bought my Haswell for QS/VAAPI capability due to it being very energy efficient.
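jkqxz's point about keeping the decode in hardware can be sketched as a single command. This is hardware-dependent and cannot be run everywhere: it assumes an input the GPU can actually decode (8-bit H.264, unlike the Hi10P file above), a working VAAPI driver, and the usual render node path; the filenames and bitrate are placeholders:

```shell
# Full-hardware VAAPI transcode: decode and encode both stay on the GPU,
# so CPU use stays close to zero. Requires an 8-bit stream the hardware
# can decode; the device path and filenames are assumptions.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi -i in_8bit.mkv \
       -c:v h264_vaapi -b:v 4M out.mp4
```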
[21:43:32 CEST] <thebombzen_> okay so I've asked this question many times and afaik I've never gotten an answer to it
[21:44:52 CEST] <thebombzen_> when you use -vf subtitles=subsfile.mkv, if you use fastseeking (i.e. -ss before -i), then it'll start the subtitles at the beginning of the stream. this means if you use -ss 2:00, the video will start two minutes in, but the subtitles will be the ones that would display at 0:00 if you were to play the video
[21:45:28 CEST] <kepstin> right, because the seeking is per-input, so it only applies to the video, not to the separate input stream used to read the subtitles.
[21:45:36 CEST] <thebombzen_> however, if you use slowseeking (-ss after -i), then it'll work as expected.
[21:46:00 CEST] <thebombzen_> slowseeking, has the problem that it's slow though. Ideally, there'd be some kind of -ss option in -vf subtitles, but I have never seen it.
[21:46:13 CEST] <kepstin> ah, that's because of interesting quirks with how seeking handles timestamps
[21:46:49 CEST] <thebombzen_> is there some kind of -vsync option to allow this to work with fastseeking?
[21:46:59 CEST] <kepstin> I do agree that the 'subtitles' filter could use a seek or timestamp offset option
[21:47:42 CEST] <kepstin> you can probably get it to work by using the '-copyts' input option, or using the '-itsoffset NN' option with the same value as your '-ss'
[21:48:29 CEST] <thebombzen_> before I run something wrong and say "it doesn't work," do you mean something like: ffmpeg -ss 2:00 -copyts -i file.mkv -vf subtitles=file.mkv <output>?
[21:49:37 CEST] <kepstin> yes, or 'ffmpeg -ss 120 -itsoffset 120 -i [...]' (which i think might be a bit better, since -copyts bypasses ffmpeg's cleanup processing)
[21:51:12 CEST] <thebombzen_> would -vsync passthrough be a "cleaner" way to not regenerate TS?
[21:52:43 CEST] <kepstin> I think it's orthogonal, since the 'start at zero' is applied before the vsync option is?
[21:52:46 CEST] <kepstin> not sure
[21:52:52 CEST] <kepstin> would have to test and see :/
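kepstin's -copyts suggestion, written out as a full command. The output-side -ss, which shifts timestamps back so the muxer starts at zero, is an extra step beyond what was discussed, added here because muxers often dislike streams that start late (as thebombzen later observes with -itsoffset). file.mkv is a placeholder, and this needs a libass-enabled build for the subtitles filter, so it is a sketch rather than a tested recipe:

```shell
# Fast seek while keeping burned-in subtitles in sync: -copyts preserves
# the source timestamps so the subtitles filter selects the right cues;
# the output-side -ss then shifts everything back to start at zero.
ffmpeg -ss 120 -copyts -i file.mkv -vf subtitles=file.mkv -ss 120 out.mkv
```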
[22:11:42 CEST] <CFS-MP3> Is this the correct way to use scale2ref? [0:s][0:v]scale2ref[s][v];[v][s]overlay[vf]
[22:12:09 CEST] <CFS-MP3> All other variants seems to cause problems with the output stream, but this one on the other hand does nothing
[22:21:24 CEST] <thebombzen_> kepstin: okay so I tried -itsoffset and it created a corrupt file
[22:21:37 CEST] <thebombzen_> like an mkv that was supposed to be 15 seconds but was 1.8 kB in size
[22:22:05 CEST] <kepstin> weird. I guess the output muxer didn't like the timestamps not starting at 0.
[22:22:19 CEST] <kepstin> pastebin the (complete) ffmpeg command line and output please?
[22:25:10 CEST] <thebombzen_> CFS-MP3: does scale2ref have two outputs? I see that it does in the docs but that seems wrong
[22:25:32 CEST] <thebombzen_> also it says it uses the reference video as the basis. Try putting [0:v] as the input first
[22:46:34 CEST] <codespells> Need to make a 1min sample of my source video. Can I crop a video to 1 min and -c:a -c:v the codecs?
[22:47:27 CEST] <furq> yes but it probably won't end up at exactly one minute
[22:48:07 CEST] <furq> if you're copying streams then you can only cut on keyframes
[22:48:39 CEST] <codespells> thats ok. Just need a short sample to use with AviSynth.
[22:49:04 CEST] <Kadigan> Then cut it a bit wider than 1min (slack on both ends) and You'll be fine.
[22:50:35 CEST] <codespells> So how would I go about doing that? ffmpeg -i xxx -c copy xxx.avi but not sure about how to crop?
[22:50:56 CEST] <furq> -ss 01:23:45 -i foo -t 60
[22:51:10 CEST] <furq> or put -ss after -i for slower/more accurate seeking
[22:51:18 CEST] <codespells> thx furq.
[22:51:59 CEST] <Kadigan> -ss can also seek to specific frames, though it takes timecode as input (so fractionals), just for future reference
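The pieces furq gave, put together as one runnable sketch; a synthesized 10-second file with a keyframe every second stands in for codespells' real source, and the seek offset and sample length are scaled down accordingly:

```shell
# Stand-in source: 10 s of test video with a keyframe every 10 frames.
ffmpeg -v error -f lavfi -i testsrc=duration=10:rate=10 -g 10 src.mp4
# Fast-seek to 0:02, then stream-copy ~3 seconds. With -c copy the cut
# points snap to keyframes, so pad the window if exactness matters.
ffmpeg -v error -ss 2 -i src.mp4 -t 3 -c copy sample.mp4
```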
[22:52:49 CEST] <codespells> have a feeling the word crop is not what I am doing? What would I google for this?
[22:53:06 CEST] <Kadigan> No, cropping is used for cutting a part of the frame
[22:53:11 CEST] <Kadigan> (so smaller subset of an image)
[22:53:26 CEST] <Kadigan> You want cutting, I assume, or even maybe extracting.
[22:53:36 CEST] <Kadigan> (try 'split' as well)
[22:53:44 CEST] <Kadigan> What other issue are You having?
[22:54:17 CEST] <Kadigan> (also, s/fractionals/fractions)
[22:55:08 CEST] <codespells> well for one to get MT working with temporal denoising without a fat mem leak =P
[22:55:40 CEST] <Kadigan> I'm sorry, MT?
[22:55:42 CEST] <furq> avisynth has a range selection function doesn't it
[22:55:44 CEST] <codespells> QTGMC stuff and my crappy plugin.
[22:55:50 CEST] <Kadigan> Ah, probably motion tracking?
[22:55:54 CEST] <furq> multithreaded avisynth
[22:55:58 CEST] <Kadigan> Ahhh...
[22:56:02 CEST] <codespells> yeah.
[22:56:06 CEST] <furq> fwiw i had much better luck with vapoursynth
[22:56:10 CEST] <Kadigan> Yeah, not my area of expertise :D
[22:56:11 CEST] <furq> it's a bit slower but also less likely to segfault
[22:56:21 CEST] <furq> which is nice when your encodes take 8+ hours
[22:56:47 CEST] <codespells> this is what I am toying with http://www.coertvonk.com/technology/videoediting/restoring-video8-hi8-10849
[22:57:14 CEST] <codespells> Heard good stuff about vapoursynth. Gonna give it a try.
[22:57:21 CEST] <furq> you'd have to rewrite that script then i guess
[22:57:51 CEST] <furq> although at least part of that is redundant with ffmpeg anyway
[22:58:14 CEST] <codespells> yeah tons of it is redundant. All the calls to the plugins etc
[22:58:35 CEST] <furq> specifically the loadsource stuff isn't needed
[22:59:26 CEST] <furq> and qtgmc can do decent denoising for you
[23:02:09 CEST] <codespells> Using QTGMC for its really nice deinterlace, and for the noise a plugin called Neat that I have tried to reverse engineer. Problem is that plugin is crazy advanced and the "code" I got is really hard to read
[23:02:45 CEST] <codespells> Not cracking it. Just want to see what they did with wavelets
[00:00:00 CEST] --- Thu Aug 4 2016