[Ffmpeg-devel-irc] ffmpeg.log.20190417

burek burek021 at gmail.com
Thu Apr 18 03:05:01 EEST 2019


[00:25:35 CEST] <cehoyos> jamie_1: Combining --enable-shared with --static most likely makes no sense, what are you trying to do?
[01:12:11 CEST] <TheSashmo> @kepstin I was thinking about that, but had no clue where to start.....  any direction?  google is coming up short on that search... LOLOL
[03:30:14 CEST] <xyz111> Hi Guys, I wanted to validate this idea with people more knowledgeable than me: I want to create a little video editor on linux, which piggybacks on ffmpeg. Basically, the front end would be a shell issuing commands to ffmpeg. I want the software to be able to cut videos, fade videos, extract audio etc etc. Most importantly, I want to be able to see the composited results in realtime. Would all this be viable, or am I oversimplifying the problem?
[03:32:54 CEST] <klaxa> i think most open-source non-linear video editors already use ffmpeg for that
[03:33:02 CEST] <klaxa> those that don't probably use gstreamer
[03:35:59 CEST] <klaxa> i don't think you can offload real-time compositing very well through the cli though
[03:46:03 CEST] <xyz111> thanks klaxa, what is usually used for realtime compositing?
[03:48:03 CEST] <klaxa> hmm, not sure, opengl? i might be way off, but as i understand it, a non-linear video editor would just use ffmpeg for decoding/demuxing and encoding/muxing, so you'd have to write your timeline which holds all the frames yourself
[03:48:13 CEST] <klaxa> that would then be displayed with whatever GUI framework you are using
[03:50:45 CEST] <klaxa> it may be a good idea to look at other software that does similar things? kdenlive and pitivi come to mind
[03:51:58 CEST] <klaxa> https://en.wikipedia.org/wiki/List_of_video_editing_software#Free_and_open-source might also be a good point to start looking?
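As a concrete illustration of the shell-frontend idea, the kind of single-shot commands such a frontend might issue look like the following; the file names, timestamps and durations are placeholders, not taken from this discussion:

    # cut a 10-second clip starting at 0:10 and fade the video in over 1 second
    ffmpeg -ss 10 -i input.mp4 -t 10 -vf "fade=t=in:st=0:d=1" -c:a copy clip.mp4
    # extract the audio track without re-encoding
    ffmpeg -i input.mp4 -vn -c:a copy audio.m4a

Realtime compositing and preview would still have to live in the frontend itself, as klaxa notes above.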
[04:18:09 CEST] <klaxa> it feels like after watching wataten my life is now complete and i can die in peace
[04:34:05 CEST] <klaxa> oh shit, wrong channel
[04:54:33 CEST] <xyz111> thanks again klaxa - I will investigate further!
[05:05:07 CEST] <nickster> dealing with a.... weird issue here
[05:05:15 CEST] <nickster> https://youtu.be/kiRLVcEaw74
[05:05:33 CEST] <nickster> using x11grab and trying to use that as a webcam with v4l2loopback
[05:06:17 CEST] <nickster> > ffmpeg -f alsa -i pulse -f x11grab -r 15 -s 1280x720 -i :0.0+0,0 -f v4l2 /dev/video0
[05:06:38 CEST] <nickster> only thing i could get to recognize the video is obs
[05:06:52 CEST] <nickster> it records to file fine, so I'm thinking this may be a v4l2 issue
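For the record, one common gotcha with v4l2loopback (an assumption here, since the log never says what the actual fix was) is that many consumers only accept a small set of pixel formats, so forcing one on the v4l2 output is often the first thing to try:

    ffmpeg -f x11grab -r 15 -s 1280x720 -i :0.0+0,0 -pix_fmt yuv420p -f v4l2 /dev/video0

The alsa input is left out of this sketch because the v4l2 output device carries video only.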
[08:48:40 CEST] <ossifrage> Does ffmpeg support a simple ycrcb file format (like pnm) that has embedded image dimension data?
[08:52:44 CEST] <JEEB> ossifrage: y4m and NUT are usually utilized for raw video without or with timestamps
[08:52:52 CEST] <JEEB> y4m is a relatively simple, constant frame rate thing
[08:54:48 CEST] <ossifrage> JEEB, that would work, thanks
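For reference, a minimal sketch of writing raw video as y4m with ffmpeg (the input name is a placeholder); the y4m header carries the dimensions, pixel format and frame rate, so nothing has to be passed out of band:

    ffmpeg -i input.mp4 -pix_fmt yuv420p -f yuv4mpegpipe output.y4m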
[08:58:35 CEST] <ossifrage> JEEB I never managed to get my DTS timestamps to work right when trying to generate fmp4, ended up punting and switching to rtsp until I can come back and look at it
[09:05:28 CEST] <seastar> hello, when I configure ffmpeg with --enable-libopenh264 parameter on windows using either msys2 or mysys, I get the following error: ERROR: openh264 not found using pkg-config
[09:06:07 CEST] <seastar> how can I configure ffmpeg with --enable-libopenh264 parameter on msys2?
[09:06:28 CEST] <seastar> could anybody please help me?
[09:27:11 CEST] <JEEB> seastar: look at ffbuild/config.log ; it should show you what failed in the configure check
[09:38:54 CEST] <seastar> gcc -D_ISOC99_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -U__STRICT_ANSI__ -D__USE_MINGW_ANSI_STDIO=1 -D__printf__=__gnu_printf__ -D_POSIX_C_SOURCE=200112 -D_XOPEN_SOURCE=600 -DPIC -std=c11 -fomit-frame-pointer -ID:/msys64/mingw64/include -LD:/msys64/mingw64/lib -c -o /tmp/ffconf.mFXThrca/test.o /tmp/ffconf.mFXThrca/test.c
[09:38:54 CEST] <seastar> D:/msys64/tmp/ffconf.mFXThrca/test.c:1:10: fatal error: wels/codec_api.h: No such file or directory
[09:38:54 CEST] <seastar>  #include <wels/codec_api.h>
[09:38:54 CEST] <seastar>           ^~~~~~~~~~~~~~~~~~
[09:38:54 CEST] <seastar> compilation terminated.
[09:38:56 CEST] <seastar> ERROR: openh264 not found using pkg-config
[09:40:46 CEST] <seastar> JEEB: config.log file contains error lines above
[09:42:36 CEST] <JEEB> either you're lacking a header, the pkg-config pc file doesn't contain the correct location (or you haven't defined a sysroot) or openh264 changed their API
[09:45:28 CEST] <seastar> I copied the pc file to another location so that the pkg-config command could report the library's location, but I still get the same message
[09:46:22 CEST] <seastar> should I define sysroot on msys2? how can I verify it?
[09:48:23 CEST] <JEEB> changing the location of the pc file does not change the result
[09:48:58 CEST] <JEEB> if you defined a custom --prefix when building openh264 that should have added the needed -I/-L
[09:49:01 CEST] <JEEB> into the pc file
[09:49:02 CEST] <JEEB> OR
[09:49:14 CEST] <JEEB> if you are utilizing a reusable sysroot
[09:49:22 CEST] <JEEB> then you will have to tell pkg-config about it
[09:49:23 CEST] <JEEB> :P
[09:49:37 CEST] <JEEB> which is PKG_CONFIG_SYSROOT_DIR as an env var
[09:49:44 CEST] <JEEB> if I recall correctly
[09:56:01 CEST] <seastar> openh264 doesn't have a configure file, and I didn't change the prefix line of its makefile.
[09:56:21 CEST] <seastar> PKG_CONFIG_SYSROOT_DIR variable is undefined
[09:58:06 CEST] <seastar> I also tried changing the prefix line, and passing -L and -I parameters to ffmpeg's configure, but the result hasn't changed
[09:59:24 CEST] <seastar> I mean I built openh264 with another prefix and then pointed ffmpeg at it using the -I and -L parameters
[09:59:39 CEST] <seastar> but nothing has changed. I got the same error
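For what it's worth, a hedged sketch of what usually resolves this on MSYS2, per JEEB's pointers: make sure pkg-config itself can find openh264 before re-running ffmpeg's configure. The /custom/prefix path below is a placeholder for wherever openh264 was installed:

    export PKG_CONFIG_PATH=/custom/prefix/lib/pkgconfig:$PKG_CONFIG_PATH
    pkg-config --cflags --libs openh264    # should print -I/-L flags, not an error
    ./configure --enable-libopenh264 ...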
[10:06:48 CEST] <logicWEB> I'm trying to encode video with film grain in it. I know the source video is pristine, because it's the output of a timeline in kdenlive. kdenlive does its video rendering using the MLT framework, which, as I understand it, ultimately wraps ffmpeg and the related encoding libraries at its core. so, I have this video to which I am _adding film grain_ using a kdenlive filter, and I'm trying to produce an output file -- at any cost -- that preserves that grain
[10:07:40 CEST] <logicWEB> based on tweaking after reading various forum posts, I am currently using settings: preset=veryslow g=120 crf=1 tune=film aq-strength=1.8
[10:08:52 CEST] <logicWEB> this is _pretty good_, but it's not perfect -- the film grain does sort of a global shift about every 5-7 frames, and otherwise is kept the same frame-to-frame with a little bit of flickering
[10:09:22 CEST] <logicWEB> does anyone have any advice on how to tweak the settings in order to actually preserve the unique grain in every frame? framerate is ntsc-film in case that matters
[10:09:55 CEST] <logicWEB> I was wondering whether it might be an option to disable P and B frames entirely?
[10:13:11 CEST] <logicWEB> complicating matters is that I'm not sure if my video encoder options are being passed through to the underlying encoder by MLT -- I tried "keyint=1 weightp=0 weightb=0 bframes=0 scenecut=0 no-open-gop=1" and it didn't seem to make a difference
[10:13:32 CEST] <logicWEB> (that is the syntax MLT uses for other options)
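As an aside, x264 also has a tune aimed specifically at retaining grain (tune=grain rather than tune=film). Since it is unclear whether MLT passes these options through, a hedged sketch of the equivalent plain-ffmpeg invocation, with a placeholder input file and CRF value, would be:

    ffmpeg -i rendered.mkv -c:v libx264 -preset veryslow -tune grain -crf 14 -c:a copy grainy_out.mkv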
[10:15:28 CEST] <redliondj> hi there, does anyone know why I am getting the following error?
[10:15:32 CEST] <redliondj> Error while decoding stream #1:0: Invalid data found when processing input
[10:16:18 CEST] <logicWEB> okay another complicating factor is that the Grain filter in kdenlive might not actually be producing unique grain per frame, I just jacked up the grain amount so I could see it easily and stepped through the early video frame by frame and it doesn't appear to be updating every frame, so that certainly doesn't help! :-/
[10:18:01 CEST] <logicWEB> so maybe the video encoding is doing exactly what I want and the source isn't what I expect, guess I need to go sort this out with the kdenlive folks first :-P
[10:21:55 CEST] <redliondj> https://pastebin.com/PsR5KcZ9
[10:22:14 CEST] <redliondj> I am trying to add a Turkish subtitle
[10:22:38 CEST] <redliondj> I tried to change my locale settings by exporting LC_ALL and LANG
[14:59:08 CEST] <Zgrokl> I currently save a stream from DVB and sometimes the sound gets desynced from the video, with the error: Non-monotonous DTS in output stream 0:0;
[14:59:24 CEST] <Zgrokl> since it's random, does that mean it's the source that is messed up, or not?
[15:09:10 CEST] <JEEB> Zgrokl: use something like multicat to dump the input and then inspect problematic points with DVB Inspector or so.
[15:09:39 CEST] <JEEB> multicat has examples in its repo on how to set up a window of X hours or whatever for dumping
[17:37:09 CEST] <redliondj> hi, I am trying to add a Turkish subtitle but I get the following issue
[17:37:11 CEST] <redliondj> https://pastebin.com/PsR5KcZ9
[17:47:40 CEST] <nickster> oh hey i got a legit ffmpeg question now
[17:47:47 CEST] <nickster> fixed the weird v4l2 issue though
[17:48:03 CEST] <furq> redliondj: did you try using -sub_charenc
[17:48:12 CEST] <nickster> anyway, what about recording input from certain applications as a "virtual" microphone?
[17:48:26 CEST] <nickster> i know how to record certain applications
[17:48:54 CEST] <furq> that's more of an alsa/pulse question
[17:48:58 CEST] <nickster> but i cant find out how to make a virtual microphone with it
[17:49:14 CEST] <nickster> https://trac.ffmpeg.org/wiki/Capture/ALSA
[17:49:42 CEST] <nickster> i can get the alsa / pulse stuff but just as there is a /dev/video0 is there a /dev/audio0 or smth?
[17:49:45 CEST] <nickster> or am i just naive
[17:50:01 CEST] <kepstin> with alsa, you can set up an aloop device and have your application output audio to that rather than to speakers (or maybe both, if you set up your alsa config appropriately)
[17:50:14 CEST] <kepstin> with pulseaudio, easiest is to capture from the output monitor device
[17:50:30 CEST] <kepstin> (that's not a single application, rather it's all the audio being output by your computer)
[17:50:38 CEST] <nickster> ye I looked into that, but the issue is that this machine is a virtual machine and I haven't gotten any virtualized audio devices to work.
[17:50:54 CEST] <kepstin> aloop should work for that use case
[17:51:08 CEST] <nickster> It can record audio to file no problem, but I can't configure a loopback with any tested configuration so far.
[17:51:26 CEST] <furq> there's a pulseaudio loopback module that should do what you want
[17:52:12 CEST] <kepstin> on a virtual machine with no sound card, pulseaudio should start with a dummy device by default, and you can just record off the monitor of that
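A minimal sketch of kepstin's monitor-capture suggestion; the source name is a placeholder, the real one comes from pactl:

    pactl list sources short               # look for a source ending in ".monitor"
    ffmpeg -f pulse -i <sink_name>.monitor -ac 2 captured.wav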
[17:54:13 CEST] <furq> 16:49:42 ( nickster) i can get the alsa / pulse stuff but just as there is a /dev/video0 is there a /dev/audio0 or smth?
[17:54:17 CEST] <furq> there was back in the oss days
[17:54:24 CEST] <furq> i guess this is still a thing on freebsd
[17:54:46 CEST] <furq> there are alsa compat modules that reinstate /dev/dsp etc but i don't think they'll help you much
[17:57:00 CEST] <nickster> my goal is to get hangouts to recognize the loopback
[17:57:10 CEST] <nickster> it's not behaving though.
[18:00:06 CEST] <nickster> almost everything I'm finding says to edit the recording device and enable loopback
[18:00:24 CEST] <nickster> but that isnt an option for me
[18:00:31 CEST] <nickster> i may ask alsa/pulse
[18:00:34 CEST] <nickster> thanks for the help though
[18:14:11 CEST] <redliondj> furq I looked in the documentation but could not find any info on the option
[18:14:58 CEST] <redliondj> I remember having seen something like this when using special filters that "hardburn" the subs
[18:19:44 CEST] <furq> redliondj: -sub_charenc ISO8859-9 -i foo.srt
[18:19:49 CEST] <furq> or whatever the subtitle encoding is
[18:22:38 CEST] <redliondj> ok furq
[18:25:29 CEST] <redliondj> thx that seems to have done the trick
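Spelled out against a concrete (placeholder) pair of files, with ISO8859-9 being the usual single-byte Turkish encoding furq mentions and a Matroska output assumed:

    ffmpeg -i movie.mkv -sub_charenc ISO8859-9 -i subs_tr.srt -map 0:v -map 0:a -map 1 -c:v copy -c:a copy -c:s srt muxed.mkv

The subtitle stream is re-encoded (-c:s srt) rather than stream-copied, because the character-set conversion happens in the subtitle decoder; with -c:s copy the ISO8859-9 text would pass through unconverted.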
[20:16:48 CEST] <friendofafriend> ls -l
[20:17:00 CEST] <friendofafriend> Oh no.  Is the trac.ffmpeg.org site down?
[20:17:40 CEST] <ChocolateArmpits> doesn't work for me either
[20:18:33 CEST] <friendofafriend> I never realized how totally helpless I am without it.
[20:19:02 CEST] <zeromind> just slow for me, takes ~1min to load per page
[20:19:27 CEST] <ChocolateArmpits> friendofafriend, try web.archive.org for cached pages
[20:19:58 CEST] <friendofafriend> Yeah, no results.  https://web.archive.org/web/*/http://trac.ffmpeg.org/wiki/Encode/H.264
[20:21:20 CEST] <ChocolateArmpits> try https :)
[20:21:26 CEST] <ChocolateArmpits> Or http://archive.fo/epBXJ
[20:21:55 CEST] <ChocolateArmpits> going through the web archive should provide links to other pages as well
[20:22:07 CEST] <friendofafriend> Oh sorry, I thought it was just HTTPS-Everywhere giving me that address.  http works great.  Thank you so much.
[20:22:44 CEST] <friendofafriend> Is there an archive of trac somewhere?  I'm referencing it enough that I should probably have it offline.
[20:22:55 CEST] <furq> it's back up now
[20:23:12 CEST] <furq> it was actually timing out for a minute there
[20:23:36 CEST] <friendofafriend> I don't know, I'm still getting heckuva long load times.
[20:24:08 CEST] <furq> maybe not then
[20:36:59 CEST] <friendofafriend> Maybe we could pass around a mirror of trac in a torrent.
[20:39:17 CEST] <furq> i'm sure it's backed up
[20:42:17 CEST] <friendofafriend> I just don't want to mirror it without permission.  I know that's not very nice.  :)
[22:28:00 CEST] <budRich> hello, something is acting up when I'm trying to make a screen recording with my mic turned on.. never had problems with this before. error code:
[22:28:24 CEST] <budRich> ALSA lib pcm_dsnoop.c:638:(snd_pcm_dsnoop_open) unable to open slave
[22:28:39 CEST] <budRich> [alsa @ 0x5582d7ad1c00] cannot open audio device default (Device or resource busy)
[22:29:22 CEST] <budRich> command used to start recording:  ffmpeg -y -f x11grab -s 1600x900 -i :0.0+0,0 -f alsa -i default -c:v libx264 -preset ultrafast -crf 0 -acodec mp3
[22:29:56 CEST] <budRich> works when i disable the microphone in pavucontrol..
[22:30:32 CEST] <budRich> it's like the mic is already being used.. can i see what's using it somehow?
[22:30:47 CEST] <another> try using pulse instead of alsa
[22:31:17 CEST] <budRich> another: just replace -f alsa with -f pulse?
[22:33:23 CEST] <budRich> another, thanks, that worked, but it's still strange. I have been using the same command for years with never any issues; I get the feeling something is not right with my microphone setup..
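Put together, the working variant looks like this; the output file name is added here as a placeholder, since the pasted command was cut off before it:

    ffmpeg -y -f x11grab -s 1600x900 -i :0.0+0,0 -f pulse -i default -c:v libx264 -preset ultrafast -crf 0 -acodec mp3 capture.mkv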
[23:56:46 CEST] <friendofafriend> Hello.  I've got this MPEG-TS stream from a live source, works great but over time there's more and more delay.  Is there a way to reduce the amount of delay clients experience after watching for a long time?
[00:00:00 CEST] --- Thu Apr 18 2019

